By James W. Loewen This is the sesquicentennial of the Reconstruction era in the United States, that period after the Civil War when African Americans briefly enjoyed full civil and political rights. African Americans — 200,000 of them — had fought in that war, which made it hard to deny them equal rights. Unlike with the 150th anniversary of the Civil War, however, few historic places tell us what happened during Reconstruction. They could: Every plantation home had a Reconstruction history, often fascinating, but these manors remain frozen in time around 1859. They tell a tale of elegance and power, and Reconstruction was the era when that power was challenged. Moreover, it is still true, as W. E. B. Du Bois put it in Black Reconstruction 80 years ago, that “one cannot study Reconstruction without first frankly facing the facts of universal lying.” Here are five common fallacies that Americans still tell themselves about this formative period. 1. Reconstruction was a failure. This view came to dominate public thinking from 1890 until about 1940, when world events and the Great Migration began to reshape the country’s perception of race and racism. During this period, known by historians as the nadir of race relations, white Americans became incredibly racist. Communities across the North became “sundown towns” that banned African Americans (and sometimes Jews and others) after dark. Beginning with Mississippi in 1890, every Southern state instituted literacy tests and poll taxes to effectively remove African Americans from the citizenship they were supposed to have been guaranteed by the 14th Amendment. Reconstruction was portrayed during this era as a terrible time, especially for whites but really for everyone, a failure of a government propped up only by federal bayonets. “No people were ever so cruelly subjected to the rule of ignorant, vicious, and criminal classes as were the Southern people in the awful days of Reconstruction,” the New Orleans Times-Picayune proclaimed in 1901. Some people today even think that Reconstruction was an effort to physically rebuild the South, rather than to aid its political re-entry into the Union. In 2013, for example, the Smithsonian American Art Museum mounted a huge exhibit, “The Civil War and American Art.” “Reconstruction,” the museum claimed, “began as a well-intended effort to repair the obvious damage across the South as each state re-entered the Union.” The curator said that the rebuilding “soon faltered, beset by corrupt politicians, well-meaning but inept administrations, speculators, and very little centralized management.” On the contrary, former Confederates saw Reconstruction as a problem precisely because it was succeeding. New Republican state administrations passed popular measures such as homestead exemption laws that abated taxes on residences, making it harder for people to lose their homes. They also repaired roads and bridges and built new schools and hospitals. Soon, Republicans were drawing 20 percent and even 40 percent of the white vote and almost all the Black vote. Democrats grew desperate. After abortive attempts to win Black votes, they resorted to intimidation and violence. These tactics were central to the restoration of white Democratic rule across the South by 1877. And thus Reconstruction ended, but not because it failed. 2. African Americans took over the South during Reconstruction. 
The official Mississippi history textbook used in the 9th grade across the state in the 1960s flatly declared Reconstruction a period of “Carpetbag and Negro Rule.” This propaganda was effective: When I asked a seminar of Black freshmen at Tougaloo College near Jackson, Mississippi, in 1969 what happened during Reconstruction, 16 of the 17 students said Blacks took over the governments of the Southern states, but because they were too soon out of slavery, they messed up, and whites had to take control again. In 1979, after I moved to Vermont, I was stunned to hear the minister of the largest Unitarian Church there repeat the same summary in a sermon. This alleged Black dominance supposedly made Reconstruction a time of terror and travail for white Southerners. The Mississippi history textbook put it baldly: “Reconstruction was a worse battle than the war ever was. Slavery was gone, but the Negro problem was not gone.” Fear of “Black domination” is still pervasive among white supremacists; note Dylann Roof’s statement to Black churchgoers in Charleston, South Carolina, as he shot them: “You are taking over our country.” But in fact, the terror and travail during Reconstruction happened mostly to African Americans and their white Republican allies. In Louisiana in the summer and fall of 1868, white Democrats killed 1,081 people, mostly African Americans and white Republicans. Around the same time in Hinds County, Miss., whites killed an average of one African American a day, especially targeting servicemen. Whites mounted similar attacks across the South. Far from suffering under Black dominance, all of the Southern states had white governors throughout Reconstruction. All but one (South Carolina) had white legislative majorities. Mississippi’s Constitutional Convention of 1868 is still called the “Black and Tan Convention,” but only 16 of its 94 delegates were Black. Of course, a government that is 17 percent Black looks “Black” to people used to the all-white governments before and after. 3. Northerners used Reconstruction to take advantage of the South and get rich. Many Americans still learn this canard, epitomized by the term “carpetbaggers.” The story—as exemplified in the 2011 edition of the textbook The American Journey—is that fortune-hunters from the North “arrived with all their belongings in cheap suitcases made of carpet fabric.” Penniless, they would then make it rich off the prostrate South. John F. Kennedy said in his Pulitzer Prize-winning book Profiles in Courage, “No state suffered more from carpetbag rule than Mississippi.” The first clue that this view might be far-fetched comes from the fact that the economies of most Southern states were in ruins. Fortune-seekers will go where the money is, and it was not in the postwar South. Instead, immigrants from the North were mostly of four types: missionaries bringing Christianity (and often literacy) to newly freed people; teachers eager to help Black children and adults learn to read, write, and cipher; Union soldiers and seamen who were stationed in Mississippi and liked the place or fell in love; and would-be political leaders, Black and white, determined to make interracial government work. 4. Republicans “waved the bloody shirt” to hide their lack of substantive policies. “Waving the bloody shirt” has come to mean trying to win votes through demagoguery—blaming opponents for things they didn’t do or did long ago. 
Its first use of this sort refers to Republicans blaming Democrats for the carnage of the Civil War years after it ended. Kennedy made this claim in Profiles in Courage, writing that “Republican leaders . . . believed that only by waving the bloody shirt could they maintain their support in the North and East, particularly among the Grand Army of the Republic.” In his 2005 biography of Republican politician John A. Logan, Gary Ecelbarger accuses Logan of “waving the bloody shirt” beginning in 1866 and “for decades to come.” Actually, the bloody shirt was a real shirt, owned by a white Republican, A.P. Huggins. He was superintendent of the Monroe County Public Schools, a majority-Black school system in Aberdeen, Miss., and took his job seriously. White supremacist Democrats warned him to leave the state, but he refused. On a March evening in 1870, they went to his home, rousted him from bed in his nightshirt and whipped him nearly to death. His bloody shirt was taken to Washington as proof of Democratic terrorism against Republicans in the South. The violence decried happened during Reconstruction, not the Civil War, so it was not anachronistic. Nor was it demagogic to use the phrase (or wave the shirt); violence at Southern polls posed a real issue—indeed, the most important issue in the United States at the time. 5. Republicans gave up on Black rights in 1877. Every textbook says the Compromise of 1877 meant that “the federal government would no longer attempt to… help Southern African Americans,” to quote The American Journey. “Violence was averted by sacrificing the Black freedmen in the South,” according to another textbook, The American Pageant. Republicans did eventually abandon civil rights, but not right after the Compromise of 1877 effectively ended Reconstruction. Until 1890, African Americans still voted across Dixie. In his inaugural address in 1881, Republican President James A. Garfield said: “The elevation of the Negro race from slavery to the full rights of citizenship is the most important political change we have known since the adoption of the Constitution of 1787. No thoughtful man can fail to appreciate its beneficent effect upon our institutions and people. . . . So far as my authority can lawfully extend they shall enjoy the full and equal protection of the Constitution and the laws.” As late as 1890, Republicans in Congress almost passed the Federal Elections Act, which might have given some force to the 15th Amendment’s voting rights provisions. President Benjamin Harrison had argued for such a measure the previous year. After the act failed to pass, Democrats, as was their custom, tarred Republicans as “a bunch of n—– lovers.” In the past, Republicans replied that what white supremacists did to Black voters in the South was an outrage, but now they were silent, choosing to move on to other issues. After the Federal Elections Act failed to pass, each succeeding Republican president was worse on civil rights. Teddy Roosevelt was worse than Harrison, Harding worse than Roosevelt, Hoover than Harding. With the nomination of Barry Goldwater in 1964, the GOP switched sides entirely, appealing now to white supremacist Southern Democrats. They have been its core constituency ever since. In 2016, Donald Trump took the presidency, installing cabinet-level officials with overt ties to white supremacists. In other ways, too, we still have not reached the level of interracial cooperation we attained during Reconstruction. On Aug. 3, 1870, for example, A. T. 
Morgan, a white state senator from Yazoo City, Miss., married Carrie V. Highgate, a Black teacher from New York, in Mississippi, and then got re-elected! In the North, not a single suburb of Chicago kept out African Americans in 1870. Today Kenilworth, Ill., its richest and most prestigious, has not a single Black household, in keeping with its founder’s decree back in 1902. Today, Republicans make it harder for African Americans (and students and poor people) to vote, just as Democrats did after 1890, albeit on a smaller scale. The tragedy of Reconstruction is not that it failed, but that its successes were curtailed in 1877 and then reversed in 1890. Correcting the myths about the first Reconstruction will help us as we try to build better race relations today. James W. Loewen, emeritus professor of sociology at the University of Vermont, is the author of Lies My Teacher Told Me, Lies Across America: What Our Historic Sites Get Wrong, and The Confederate and Neo-Confederate Reader. Article originally published on Jan. 21, 2016, at The Washington Post. Updated and republished here.
Source: https://www.zinnedproject.org/materials/five-myths-about-reconstruction/
This study guide was created to help make learning the circulatory system easier for you. It also contains practice questions for your benefit. So if you’re ready, let’s get started. What is the Circulatory System? The circulatory system is made up of the heart and blood vessels that work to transport blood throughout the body. The two primary types of blood vessels are: Arteries, which carry oxygenated blood away from the heart to the other tissues and organs of the body, and Veins, which carry deoxygenated blood back to the heart, where it’s pumped into the lungs in order to pick up new oxygen molecules. Circulatory System Worksheet and Practice Questions: 1. What does the circulatory system do? It distributes blood and lymph throughout the body. 2. What forms of circulation does the human body have? Pulmonary circulation and systemic circulation. 3. What is pulmonary circulation? Pulmonary circulation is the portion of the cardiovascular system which carries deoxygenated blood away from the heart to the lungs and returns oxygenated (oxygen-rich) blood back to the heart. 4. What is systemic circulation? Systemic circulation is the part of the cardiovascular system which carries oxygenated blood away from the heart to the tissues of the body and returns deoxygenated blood back to the heart. 5. What are the fatty deposits on the walls of the arteries in coronary heart disease? The fatty deposits on the walls of the arteries in coronary heart disease are called plaques. 6. What does the right atrium do? It is the upper right chamber of the heart; it receives deoxygenated blood from the body through the vena cava and pumps it into the right ventricle, which then sends it to the lungs to be oxygenated. 7. What does the right ventricle do? It is the lower right chamber of the heart; it receives deoxygenated blood from the right atrium and pumps it under low pressure into the lungs through the pulmonary artery. 8. What does the left atrium do? The left atrium is one of the four hollow chambers of the heart. It plays the vital role of receiving blood from the lungs via the pulmonary veins and pumping it to the left ventricle. 9. What does the left ventricle do? It is the lower left chamber of the heart; it receives blood from the left atrium and pumps it out under high pressure through the aorta to the body. 10. Why is the muscle of the heart on the left side thicker? Because the left ventricle pumps blood out of the heart at a greater pressure, sending it through the aorta to the whole body. 11. What is the function of the valves? There is a valve through which blood passes before leaving each chamber of the heart. The valves prevent the backward flow of blood. 12. What are the functions of the atrioventricular valves? The atrioventricular valves are found between the atria and the ventricles. They make sure blood flows in only one direction. 13. What is the function of the tricuspid valve? It prevents the backflow of blood into the right atrium. 14. What is the function of the bicuspid valve? It prevents the backflow of blood into the left atrium. 15. What is the function of the renal artery? The renal artery supplies the kidney with blood. It carries blood to the kidney, and the kidney then filters the blood. 16. What is the function of the renal vein? It carries the blood that has been filtered by the kidney. 17. What is the function of the hepatic artery? It is a short blood vessel that carries blood to the liver, the pylorus of the stomach, and the duodenum. 18. What are the hepatic veins?
These are blood vessels that transport the liver’s deoxygenated blood and blood that has been filtered by the liver. 19. What is the superior vena cava? This is a large vein which carries deoxygenated blood from the head, arms, and upper body (systemic circulation) into the right atrium of the heart. 20. What is the inferior vena cava? This is a large vein which carries deoxygenated blood from the lower body (systemic circulation) into the right atrium of the heart. 21. What is the aorta? This is the largest artery in the body. The aorta begins at the top of the left ventricle, the heart’s muscular pumping chamber. The heart pumps blood from the left ventricle into the aorta through the aortic valve. 22. What are the coronary arteries? These arteries feed the cells of the heart. 23. What are the functions of blood vessels? They deliver oxygen and nutrients to cells and carry away carbon dioxide and waste products. 24. What is the function of arteries? Arteries carry blood away from the heart. 25. What is an arteriole? It is a small branch of an artery leading into capillaries. 26. Why are the walls of the arteries rich in elastic fibers? Because when the fibers recoil they push the blood even further. 27. What are capillaries? Capillaries are the link between the arteries and veins. They have many different structures doing different jobs. 28. What are veins? Veins carry blood back to the heart. 29. What is a venule? It is a very small vein which collects blood from the capillaries. 30. Why do veins have valves? The valves in veins close to prevent the blood from flowing backward while the muscles squeeze the blood to push it upward. 31. What systems make up the circulatory system? The cardiovascular system and the lymphatic system. 32. What system works to maintain homeostasis in the body? The circulatory system. 33. What is homeostasis? It is the equilibrium of the internal environment of the body. 34. What do body cells need to function properly? Food, oxygen, and other substances. 35. What does blood do? Blood supplies oxygen, nutrients, vitamins, and antibodies while it takes away waste and carbon dioxide. 36. What is the function of the lymphatic system? It transports excess fluid away from tissues and maintains the internal fluid environment. 37. What is the function of arteries? They carry oxygen-rich blood to body tissues. 38. What is the function of veins? They carry oxygen-poor blood back to the heart. 39. What is the function of systemic circulation? It carries blood from the heart to the tissues and then brings it back. 40. What is pulmonary circulation? It carries blood to the lungs and back. 41. Where is the left atrium? It is at the upper left portion of the heart. 42. Where is the right atrium? It is at the upper right portion of the heart. 43. Where is the right ventricle? It is at the lower right portion of the heart. 44. Where is the left ventricle? It is at the lower left portion of the heart. 45. What is the pump of the circulatory system? The heart. 46. What does the blood carry in the circulatory system? It carries oxygen, carbon dioxide, and all other materials. 47. What is the purpose of the pulmonary circulation? To transport blood from the right ventricle of the heart to the lungs. 48. What transports blood from the right ventricle of the heart to the lungs? The pulmonary artery. 49. After blood reaches the lungs, it is then returned to which chamber of the heart? The left atrium. 50. What is the function of systemic circulation? It pumps blood to the rest of the body. 51. After blood reaches the rest of the body, it is then returned to which chamber of the heart? The right atrium. 52. What is the function of arteries? They carry blood away from the heart. 53. What is the function of the veins?
It carries blood to the heart. 54. What artery is responsible for carrying deoxygenated blood? The pulmonary artery. 55. What vein is responsible for carrying oxygenated blood to the heart? The pulmonary vein. 56. What is the structure of capillaries? They are the smallest blood vessels in the body. 57. What is the diameter of capillaries? The diameter is about the size of one red blood cell. 58. What are the three layers in the walls of all blood vessels aside from capillaries? The tunica interna, the tunica media, and the tunica externa. 59. How many layers are present in arteries? Three. 60. What is the innermost layer of the blood vessel? The tunica interna (intima). 61. What tissue composes the tunica interna? It is composed of simple squamous epithelium (endothelium). 62. Where is the endothelial lining of the tunica interna attached? It is attached to a basement membrane made of glycoproteins and collagen fibers. 63. What is the purpose of the basement membrane with glycoproteins and collagen fibers? It glues the epithelium to the underlying tissue. 64. What structure is also found in the tunica interna? The internal elastic lamina. 65. What is the function of the elastic lamina? It allows the vessel to expand. 66. What is the tunica media? It is the middle layer of the blood vessel wall. 67. What tissue does the tunica media contain? It has elastin and smooth muscle. 68. What type of tissue can be seen in the largest blood vessels? Mostly elastin and very little smooth muscle. 69. What is the outermost layer of a blood vessel? The tunica externa. 70. What type of tissue is found in the tunica externa? Loose fibrous connective tissue. 71. The tunica externa is principally composed of what? Collagen and elastin. 72. What has its own circulatory system in the external portion of the blood vessel? Larger blood vessels. 73. What blood vessels 74. What has the same three distinct layers as arteries? Veins. 75. Where do veins and arteries differ structurally? They differ in wall thickness. 76. What type of walls do veins have? They have much thinner walls than arteries. 77. What causes the arteries to have thicker walls? Arteries carry blood under much higher pressure than veins. 78. What type of pressure is found in veins? Relatively zero pressure. 79. What composes the medium and large veins? One-way valves and pocket valves. 80. What is the function of one-way valves or pocket valves? They assist in the movement of blood back to the heart. 81. What forms the valves? Folds of the vessel’s inner lining (tunica interna). 82. These one-way pocket valves are formed with what? Two flaps that form pockets. 83. What assists one-way valves and pocket valves in bringing blood back to the heart? Respiration and the contraction of smooth muscle. 84. What are capillaries? They are the smallest blood vessels in the body. 85. What is the function of capillaries? They are the site of exchange between the blood and the tissues. 86. What does blood give up in the capillaries, the site of exchange in the body? Nutrients and oxygen. 87. What does blood receive in the capillaries, the site of exchange in the body? Carbon dioxide and waste products. 88. Which blood vessel does not have three layers? Capillaries. 89. What tissues form capillaries? The tunica interna without the internal elastic lamina. 90. What other tissues are sometimes, but not always, found in capillaries? Simple squamous epithelium with a basement membrane, possibly with a small amount of connective tissue. 91. Where can we see capillaries? In most tissues of the body. 92. What are the largest arteries in the body? Conducting or elastic arteries. 93. Why are the large arteries in the body called elastic arteries? Because the tunica media is almost entirely elastin with very small amounts of smooth muscle. 94. What do you call the medium and small arteries?
Distributing and muscular arteries. 95. Why are the medium and small arteries called distributing and muscular arteries? Because the tunica media is now mostly smooth muscle with very little elastin. 96. Small and medium arteries are principally designed to do what? Dilate or constrict. 97. What is the purpose of dilating or constricting arteries? To increase or decrease the blood supply to a tissue. 98. What does the tunica media in small and medium arteries principally consist of? It consists of smooth muscle. 99. What is rarely found in small and medium arteries? Elastin. 100. What are the smallest arteries in the body? Resistance arteries, or arterioles. 101. What can be found in the resistance arteries in association with the tunica media? Two to three layers of smooth muscle. 102. What should be identified for a small artery to be a resistance arteriole? Three layers: intima, media, and externa. 103. What is the function of the precapillary sphincters? They dictate the flow of blood into the capillary bed. 104. What regulates the precapillary sphincters? The metabolic needs of the surrounding tissue (local levels of O2, CO2, pH, and glucose). 105. What should a precapillary sphincter do to respond to metabolic needs? Dilate or constrict. 106. What signs would show the dilation of a precapillary sphincter? Low O2, high CO2, and low pH. 107. What would dictate that a precapillary sphincter should constrict? Plenty of O2, low CO2, high pH, and plenty of glucose. 108. What are baroreceptors? They are receptors that sense changes in blood pressure. 109. Where are baroreceptors located? In the aortic arch and the carotid sinus. 110. What are the aortic and carotid bodies? They are chemoreceptors. 111. What is the function of the aortic and carotid bodies? They monitor the chemical composition of the blood. 112. What are the three basic types of capillaries? Continuous, fenestrated, and sinusoid. 113. What is the most common type of capillary? The continuous capillary. 114. Where are continuous capillaries found? Muscle, the lungs, and the brain. 115. What holds together the endothelial cells in continuous capillaries? Tight junctions. 116. What is also found in continuous capillaries? Intercellular clefts. 117. What are intercellular clefts? Small openings between adjacent cells. 118. What can pass through the intercellular cleft openings? Small molecules such as water, ions, glucose, and hormones. 119. What type of tissue is associated with continuous capillaries? A basal lamina. 120. What does the basal lamina principally consist of? A basement membrane composed of glycoproteins and a few collagen fibers. 121. What do some of the continuous capillaries contain? Pericytes. 122. What is the function of the pericyte? It helps with the growth of capillaries and with their repair. 123. What is the structure of fenestrated capillaries? They have prominent holes called fenestrae right through the endothelial cells and also have a basement membrane. 124. What is the function of fenestrated capillaries? They allow a variety of different molecules to pass. 125. Where do we find fenestrated capillaries? Primarily in the kidneys and intestines. 126. What is the function of fenestrated capillaries in the kidneys? Blood is pushed through these capillaries at relatively high pressure, and a wide variety of molecules (waste products, glucose, ions) pass through them. 127. What are sinusoid capillaries? They are large, leaky capillaries found in the liver and in organs such as the spleen and bone marrow. 128. What is the general name for veins? Capacitance vessels. 129. Why are capacitance vessels called capacitance vessels? They have the capability of expanding somewhat to receive whatever volume of blood the capillaries are delivering to them. 130. What are the smallest veins? Post-capillary venules. 131. What is the diameter of post-capillary venules? 132.
Which venule is the smallest of the middle-sized veins and has 1-3 layers of smooth muscle? The muscular venule. 133. What is the diameter of muscular venules? 134. What type of vein is the first to have one-way valves? 135. What is the diameter of muscular veins? 136. What is the diameter of the vein that develops one-way pocket valves? 137. What is the diameter of large veins? Larger than 10 mm. 138. What are some examples of large veins? The superior and inferior vena cava. 139. What type of structure is present in large veins? Three layers, a large diameter, and one-way pocket valves. 140. What is the function of the venous sinus? It is a large vein with no smooth muscle associated with it. So there you have it. That wraps up our study guide on the cardiovascular and circulatory system. I hope that, after going through this information, you now have a better understanding of this topic. We also have a full guide on the Respiratory System that I think you will enjoy. Thanks for reading and, as always, breathe easy, my friend.
<urn:uuid:7bea4f5f-c090-4e3d-aa7f-4a82d246c471>
CC-MAIN-2021-43
https://www.respiratorytherapyzone.com/circulatory-system/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00670.warc.gz
en
0.906383
4,048
3.8125
4
I just finished co-running Stoic Camp New York-2017, together with my friend Greg Lopez. We had 20 students and an amazing time up in Stony Point, on the West Bank of the Hudson River, north of New York. As part of the introductory session, we went through the basics of Stoic theory as reported by Diogenes Laertius in book VII of his Lives and Opinions of the Eminent Philosophers, which covers the early and middle Stoa (i.e., before Seneca, Musonius, Epictetus and Marcus). I’m going to propose the passages we used here, organized by subject matter, as a handy vademecum for the Stoic practitioner. Each section below begins with a selection of pertinent quotes from Diogenes, and ends with a mini summary and commentary of my own. I hope it’s going to be useful. The parts of philosophy Philosophic doctrine, say the Stoics, falls into three parts: one physical, another ethical, and the third logical. They liken Philosophy to a fertile field: Logic being the encircling fence, Ethics the crop, Physics the soil or the trees. … No single part, some Stoics declare, is independent of any other part, but all blend together. Diogenes of Ptolemaïs, it is true, begins with Ethics; but Apollodorus puts Ethics second, while Panaetius and Posidonius begin with Physics, as stated by Phanias, the pupil of Posidonius, in the first book of his Lectures of Posidonius. This is the classic division of philosophy into three fields: logic (having to do with good reasoning), physics (understanding of the world), and ethics (how to live one’s life). The three are connected in that — as in the analogy of the garden — the logic and physics are necessary to protect (from bad reasoning) and nurture (through a sound understanding of the cosmos) the ethics. Notice that there was disagreement among the Stoics on the best way to set up the curriculum, one of a number of pieces of evidence that the philosophy was open to internal disagreement, not run like a cult (in the way of the Pythagoreans, for instance). The nature of impressions A presentation (or mental impression) is an imprint on the soul: the name having been appropriately borrowed from the imprint made by the seal upon the wax. Freedom from precipitancy is a knowledge when to give or withhold the mind’s assent to impressions. By wariness they mean a strong presumption against what at the moment seems probable, so as not to be taken in by it. Without the study of dialectic, they say, the wise man cannot guard himself in argument so as never to fall; for it enables him to distinguish between truth and falsehood, and to discriminate what is merely plausible and what is ambiguously expressed, and without it he cannot methodically put questions and give answers. Overhastiness in assertion affects the actual course of events, so that, unless we have our perceptions well trained, we are liable to fall into unseemly conduct and heedlessness. For presentation comes first; then thought, which is capable of expressing itself, puts into the form of a proposition that which the subject receives from a presentation. Again, some of our impressions are scientific, others unscientific: at all events a statue is viewed in a totally different way by the trained eye of a sculptor and by an ordinary man. “Impressions” are a combination of sensorial input and automatic judgment, as, for instance, when I suddenly feel fear because I have heard an unfamiliar sound in the house at night. 
Impressions ought to be examined in the light of reason, so that we can decide whether to give assent to them or not. This requires a certain degree of cognitive distancing (avoiding overhastiness), and also the ability to engage in sound reasoning (logic). This is the basis of Epictetus’ discipline of assent, which Pierre Hadot connects with the topos of logic and the virtue of prudence (practical wisdom). Notice that practice makes for better (more “scientific”) judgments of impressions. Living according to nature An animal’s first impulse, say the Stoics, is to self preservation, because nature from the outset endears it to itself, as Chrysippus affirms in the first book of his work On Ends. And nature, they say, made no difference originally between plants and animals, for she regulates the life of plants too, in their case without impulse and sensation, just as also certain processes go on of a vegetative kind in us. But when in the case of animals impulse has been superadded, whereby they are enabled to go in quest of their proper aliment, for them, say the Stoics, Nature’s rule is to follow the direction of impulse. But when reason by way of a more perfect leadership has been bestowed on the beings we call rational, for them life according to reason rightly becomes the natural life. For reason supervenes to shape impulse scientifically. This is why Zeno was the first (in his treatise On the Nature of Man) to designate as the end “life in agreement with nature” (or living agreeably to nature), which is the same as a virtuous life. … Again, living virtuously is equivalent to living in accordance with experience of the actual course of nature, as Chrysippus says in the first book of his De finibus; for our individual natures are parts of the nature of the whole universe. And this is why the end may be defined as life in accordance with nature, or, in other words, in accordance with our own human nature as well as that of the universe. … And this very thing constitutes the virtue of the happy man and the smooth current of life. By the nature with which our life ought to be in accord, Chrysippus understands both universal nature and more particularly the nature of man, whereas Cleanthes takes the nature of the universe alone as that which should be followed, without adding the nature of the individual. So the Stoics had what we would today call an evolutionary theory of virtue (just like their “cradle argument” was a theory of human developmental psychology, connected to the concept of oikeiosis): plant life is regulated without impulses and sensation, which however do play a role in animal and human life. But human beings have reason as well. So to apply reason to the question of how to live is the same as living according to nature. If you do that, according to the Stoics, you figure out that this specifically means living a life of virtue (though Diogenes doesn’t say this here, that’s because virtue is the only thing that is always useful to improve the human lot, an argument that goes back to Socrates in the Euthydemus). Notice two more things: first, the reference to a smooth flow of life if we live virtuously; second, again, vibrant disagreement among the Stoics on specific issues of doctrine, in this context whether we should think in terms of human nature or the nature of the cosmos at large. Virtue, in the first place, is in one sense the perfection of anything in general, say of a statue; again, it may be non-intellectual, like health, or intellectual, like prudence. 
That it, virtue, can be taught is laid down by Chrysippus in the first book of his work On the End, by Cleanthes, by Posidonius in his Protreptica, and by Hecato; that it can be taught is clear from the case of bad men becoming good. Panaetius, however, divides virtue into two kinds, theoretical and practical; others make a threefold division of it into logical, physical, and ethical; while by the school of Posidonius four types are recognized, and more than four by Cleanthes, Chrysippus, Antipater, and their followers. Apollophanes for his part counts but one, namely, practical wisdom. Amongst the virtues some are primary, some are subordinate to these. The following are the primary: wisdom, courage, justice, temperance. Particular virtues are magnanimity, continence, endurance, presence of mind, good counsel. And wisdom they define as the knowledge of things good and evil and of what is neither good nor evil; courage as knowledge of what we ought to choose, what we ought to beware of, and what is indifferent. Similarly, of vices some are primary, others subordinate: e.g. folly, cowardice, injustice, profligacy are accounted primary; but incontinence, stupidity, ill-advisedness subordinate. Further, they hold that the vices are forms of ignorance of those things whereof the corresponding virtues are the knowledge. Virtue itself and whatever partakes of virtue is called good in these three senses — viz. as being (1) the source from which benefit results; or (2) that in respect of which benefit results, e.g.the virtuous act; or (3) that by the agency of which benefit results, e.g. the good man who partakes in virtue. To begin with, the word “virtue” (arete, in Greek) applies to any kind of human excellence, of which the moral (intellectual) virtues are a subset. Diogenes lists the four cardinal Stoic virtues, but makes clear that there are several sub-virtuous, so to speak. The full list is detailed in a table on p. 28 of this paper by Matthew Sharpe on Stoic virtue ethics. Well worth the reading. Diogenes then says that to each virtue corresponds a given vice, and — most importantly — that virtue is a type of knowledge, and vice a type of ignorance (best understood as unwisdom). Notice, once more, evidence of debate among the Stoics on all these subject matters. Preferred vs dispreferred indifferents Goods comprise the virtues of prudence, justice, courage, temperance, and the rest; while the opposites of these are evils, namely, folly, injustice, and the rest. Neutral (neither good nor evil, that is) are all those things which neither benefit nor harm a man: such as life, health, pleasure, beauty, strength, wealth, fair fame and noble birth, and their opposites, death, disease, pain, ugliness, weakness, poverty, ignominy, low birth, and the like. This Hecato affirms in his De fine, book vii., and also Apollodorus in his Ethics. and Chrysippus. For, say they, such things (as life, health, and pleasure) are not in themselves goods, but are morally indifferent, though falling under the species or subdivision “things preferred.” Further, they say that that is not good of which both good and bad use can be made; but of wealth and health both good and bad use can be made; therefore wealth and health are not goods. 
The term “indifferent” … denotes the things which do not contribute either to happiness or to misery, as wealth, fame, health, strength, and the like; for it is possible to be happy without having these, although, if they are used in a certain way, such use of them tends to happiness or misery. Of things indifferent, as they express it, some are “preferred,” others “rejected.” Such as have value, they say, are “preferred,” while such as have negative, instead of positive, value are “rejected.” Value they define as, first, any contribution to harmonious living, such as attaches to every good; secondly, some faculty or use which indirectly contributes to the life according to nature: which is as much as to say “any assistance brought by wealth or health towards living a natural life”; thirdly, value is the full equivalent of an appraiser, as fixed by an expert acquainted with the facts — as when it is said that wheat exchanges for so much barley with a mule thrown in. Again, of things preferred some are preferred for their own sake, some for the sake of something else, and others again both for their own sake and for the sake of something else. … Things are preferred for their own sake because they accord with nature; not for their own sake, but for the sake of something else, because they secure not a few utilities. An important bit here is the very clear explanation that preferred indifferents are so-called because they have “value” (that’s why they are preferred) but they do not affect our moral character (indifferent). I have explained this concept in earlier writings in terms of what modern economists call lexicographic preferences. Notice, at , a piece of syllogistic reasoning aiming to prove that wealth, health, etc. are not “goods.” Once you understand how the Stoics used these terms, it becomes obvious why this is true: wealth, for instance, can be used to do good as well as to do evil, therefore it is logically independent of good and evil, and so it is not, by itself, good or evil (though one can use it in good or evil fashion). At we get a pretty clear explanation of why certain things are preferred (or dispreferred): either they contribute to harmonious living, or to the life according to nature, or because of the barley and the mule thing… On good and bad emotions Passion, or emotion, is defined by Zeno as an irrational and unnatural movement in the soul, or again as impulse in excess. The main, or most universal, emotions, according to Hecato in his treatise On the Passions, book ii., and Zeno in his treatise with the same title, constitute four great classes, grief, fear, desire or craving, pleasure. They hold the emotions to be judgements, as is stated by Chrysippus in his treatise On the Passions: avarice being a supposition that money is a good, while the case is similar with drunkenness and profligacy and all the other emotions. Pity is grief felt at undeserved suffering; envy, grief at others’ prosperity; jealousy, grief at the possession by another of that which one desires for oneself; rivalry, pain at the possession by another of what one has oneself. Fear is an expectation of evil. Desire or craving is irrational appetency, and under it are ranged the following states: want, hatred, contentiousness, anger, love [meaning irrational, obsessive passion], wrath, resentment. … Hatred is a growing and lasting desire or craving that it should go ill with somebody. Wrath is anger which has long rankled and has become malicious, waiting for its opportunity. 
… Resentment is anger in an early stage. Pleasure is an irrational elation at the accruing of what seems to be choiceworthy. … Malevolent joy is pleasure at another’s ills. And as there are said to be certain infirmities in the body, as for instance gout and arthritic disorders, so too there is in the soul love of fame, love of pleasure, and the like. … And as in the body there are tendencies to certain maladies such as colds and diarrhoea, so it is with the soul, there are tendencies like enviousness, pitifulness, quarrelsomeness, and the like. Also they say that there are three emotional states which are good, namely, joy, caution, and wishing. Joy, the counterpart of pleasure, is rational elation; caution, the counterpart of fear, rational avoidance. … And they make wishing the counterpart of desire (or craving), inasmuch as it is rational appetency. … Thus under wishing they bring well-wishing or benevolence, friendliness, respect, affection; under caution, reverence and modesty; under joy, delight, mirth, cheerfulness. Now they say that the wise man is passionless, because he is not prone to fall into such infirmity [i.e., unhealthy passions]. This is a pretty complete treatment of the oft-misunderstood topic of Stoic emotions. It should be clear, however, that the Stoics sought to avoid only the unhealthy emotions, what they called “passions” (pathē), and to nurture healthy emotions (eupatheiai). Diogenes gives a lengthy list of both, so to thoroughly explain what the Stoics meant. Notice, incidentally, the common parallel in Stoicism between physical and mental / spiritual health. At we are introduced to the Stoic theory that emotions are a form of judgment, i.e., they have a cognitive component. So the “impressions” from above, which are automatic, generate a sort of proto-emotions (propatheiai), and it is to those that we apply our judgment. If we do this incorrectly, we turn the proto-emotion into an unhealthy passion. Schematically: propatheiai + incorrect assent => pathē Something like this is confirmed by modern cognitive science, where neuroscientists talk of “fear,” say, referring to the involuntary feeling arising from the rush of adrenalin when we perceive a threat, while psychologists use the same word referring to the mature, cognitively mediated emotion of the type “I ought to be afraid of terrorist attacks.” So when the Stoics say that the wise person is “passionless” (apatheia) they don’t mean that she lacks all emotions, but that she is unaffected by unhealthy one, as in: propatheiai + correct assent => apatheia The goal of Stoicism here is to produce emotionally healthy individuals. Hard to object to it, no? That’s Stoicism in a nutshell! Let me leave you with a bonus quotation from Diogenes: They will take wine, but not get drunk.
Source: https://howtobeastoic.wordpress.com/category/diogenes-laertius/
Application for nanoCAD / AutoCAD
1. Introduction to nanoGeometry
2. FairCurveModeler COM app for nanoCAD / AutoCAD
2.1. Command V_Model
2.1.1. Modeling curves
2.1.2. Modeling surfaces
2.2. Set of base commands
1. Introduction to nanoGeometry
The FairCurveModeler application (hereafter, the Application) is intended not merely for modeling beautiful curves and surfaces. It is intended, first of all, for the design of high-quality products, namely products with functional surfaces, and there are many such products. What is a functional surface? It is a surface whose quality directly determines the quality of the product as a whole: the external surfaces of aircraft, ships, and automobiles; the working surfaces of blades in pumps, compressors, and turbines of aircraft engines, and of propellers; the working surfaces of tillage machinery; the cam surface in a cam mechanism; road surfaces; canal surfaces. The characteristic curves on which functional surfaces are based are functional curves: the guide curve of a plow; the profile of a wing or of a compressor, turbine, or pump blade; the flat profile of a cam; the plane trace of a road; and so on. The authors performed deep theoretical research analyzing the quality requirements for functional curves, independent of the specific working conditions and type of product. These requirements are summarized in the following concept, a set of necessary requirements for the geometric quality of functional curves: 1) A high order of smoothness. Functional curves require smoothness of at least the third order, the minimum order that ensures continuity of the torsion of a space curve. The order of smoothness determines the local smoothness of the curve. 2) The minimum number of extrema of curvature (vertices of the curve). This parameter determines the smoothness of the curve as a whole. Clearly, in terms of energy, moving a flow of gas, liquid, or soil along a trajectory with pulsating curvature requires more energy than moving it along one without such pulsation. 3) A small variation of curvature (the difference between the maximum and minimum curvature) on each section containing an extremum of curvature. This requirement complements the second one. 4) Other conditions being equal, a small value of potential energy. Of two curves with the same order of smoothness and the same number of curvature extrema, the curve with the lower potential energy is the better curve. The requirement to minimize potential energy is justified as follows: a medium streaming over a functional surface at high speed behaves as an elastic body, and a deformed elastic body takes the form of minimum potential energy, so less energy is needed to elastically deform a medium that moves along a path with lower potential energy; and when a medium flows over concave surfaces with friction, the energy cost of moving the medium is lower the lower the potential energy of the curve of movement. If the original geometric determinant, given as a base polyline or a tangent polyline, permits the construction of a curve with the minimum number of vertices, then the construction method should deliver that minimum number of vertices. In particular, if the points lie on a conic curve, the method should reproduce the conic curve exactly. The FairCurveModeler Application meets these stringent quality requirements for functional curves and surfaces.
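To make requirements 2) and 3) concrete, here is a small numerical sketch (plain Python with NumPy, written for this text only; it is not code from the Application, and the control points are arbitrary example values). It samples the curvature of a planar cubic Bezier segment, counts the interior curvature extrema (requirement 2), and reports the curvature variation (requirement 3).

import numpy as np

def cubic_bezier(P, t):
    # Evaluate a planar cubic Bezier curve and its first and second derivatives at t.
    P = np.asarray(P, dtype=float)                      # 4 control points, shape (4, 2)
    b   = np.array([(1-t)**3, 3*t*(1-t)**2, 3*t**2*(1-t), t**3])
    db  = np.array([-3*(1-t)**2, 3*(1-t)**2 - 6*t*(1-t), 6*t*(1-t) - 3*t**2, 3*t**2])
    ddb = np.array([6*(1-t), 18*t - 12, 6 - 18*t, 6*t])
    return b @ P, db @ P, ddb @ P

def curvature_profile(P, n=500):
    # Signed curvature k(t) = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), sampled on [0, 1].
    ts = np.linspace(0.0, 1.0, n)
    ks = []
    for t in ts:
        _, d1, d2 = cubic_bezier(P, t)
        ks.append((d1[0]*d2[1] - d1[1]*d2[0]) / np.hypot(d1[0], d1[1])**3)
    return ts, np.array(ks)

def count_curvature_extrema(ks):
    # Interior local extrema of curvature: sign changes of the finite differences.
    dk = np.diff(ks)
    return int(np.sum(np.sign(dk[:-1]) * np.sign(dk[1:]) < 0))

P = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.5), (4.0, 0.0)]    # arbitrary example control points
ts, ks = curvature_profile(P)
print("curvature extrema:", count_curvature_extrema(ks))
print("curvature variation:", ks.max() - ks.min())

Of two candidate curves built on the same data, the concept above prefers the one with fewer reported curvature extrema and, on a section containing an extremum, the smaller curvature variation.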
The Application contains many innovative methods of geometric modeling and geometric approximation, which together give a synergetic effect in its efficiency. In the words of Prof. V. A. Osipov, "There is a geometry of breadth and a geometry of depth." The Application's methods are a "geometry of depth." They are unique not only among existing CAD systems but also among the geometric kernels of CAD systems. In a sense, these methods are methods of "geometric nanotechnology." The basis for modeling high-quality curves is a method of modeling a virtual curve (v-curve). A v-curve has no analytic or piecewise-analytic expression; its points are generated algorithmically, and in the limit the generated points belong to a curve of class C5. The Application uses innovative methods of geometrically stable (isogeometric, shape-preserving) approximation of the v-curve by a cubic NURBzS curve (a cubic rational Bezier curve) and by a b-spline curve of high even degree m (m = 6/8/10). These methods preserve the quality of the v-curve up to the second step of subdivision, and they allow the industry-standard representation of curves as NURBS curves with the same number of segments as the original polyline. The dual determinant of the v-curve, introduced by the authors, enables modeling the curve simultaneously on a base polyline and on a tangent polyline. This capability expands the range of geometric problems that can be solved; for example, it 1) allows a high-quality road trace to be modeled on a tangent polyline of theodolite traverses; 2) allows a plane convex cam profile to be modeled from positions of the flat sole of the pusher (flat-faced follower); 3) allows a curve to be formed on points while its shape is constrained by tangent lines. At the same time, the base polyline and the tangent polyline are classic forms of geometric determinants that are familiar to the designer and do not require knowledge of the sophisticated details of modeling NURBS curves on s-frames. Creating a surface, regardless of the method, requires building a set or a net of curves. The methods of constructing the v-curve and of geometrically stable approximation by NURBzS and b-spline curves make it possible to form sets and nets of high-quality curves. The authors have developed innovative methods for geometrically stable (isogeometric) construction of spline surfaces of high degrees on various kinds of geometric determinants. The dual determinant of the v-curve is generalized to a determinant of a surface: spline surfaces can be modeled on a base 3D mesh, on a 3D mesh with tangent lines, or on a 3D mesh with tangent lines and tangent columns. Innovative methods of testing and controlling the shape of surfaces have also been developed: you can control the shape of families of surface isoparms, up to controlling the form of any arbitrary isoparm of the surface. An important and promising direction of development is a method for modeling topologically complex surfaces with high-order smoothness at every point, up to a system with a developed structure of options. Such a system would be an analog of so-called T-splines, but with a high-quality integral surface. The theoretical foundations are given in detail in the authors' articles in the section "Library of plugins and articles" (Библиотека плагинов и статей). The concept of quality of functional curves has been practically tested on the design of a general-purpose plow.
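For orientation, each segment of the NURBzS representation mentioned above is a rational cubic Bezier curve. The sketch below (an independent Python illustration with arbitrary example weights, not code from the Application) evaluates one such segment by running de Casteljau's algorithm on homogeneous (weighted) control points and projecting back; with unit weights it reduces to an ordinary cubic Bezier segment.

import numpy as np

def rational_cubic_bezier(points, weights, t):
    # Evaluate one rational cubic Bezier segment at parameter t in [0, 1].
    P = np.asarray(points, dtype=float)        # 4 control points, shape (4, dim)
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    H = np.hstack([w * P, w])                  # lift to homogeneous coordinates (w*x, w*y, w)
    for _ in range(3):                         # de Casteljau: repeated linear interpolation
        H = (1.0 - t) * H[:-1] + t * H[1:]
    return H[0, :-1] / H[0, -1]                # project back from homogeneous coordinates

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]   # arbitrary example segment
wts = [1.0, 2.0, 2.0, 1.0]                               # larger middle weights pull the curve toward those points
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, rational_cubic_bezier(pts, wts, t))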
Only by following the proposed concept and using an Application that implements its list of requirements, and only by improving the geometry of the prototype, was a striking result produced: the quality of plowing was improved and fuel economy was obtained at the same time. The concept and the FairCurveModeler Application that implements it are a versatile and inexpensive means of improving the quality of a designed product. That is, simply by following the requirements of the concept and using this Application, without other design tweaks, you can improve the geometry of your previous project or of a well-known project and obtain a higher-quality project and product. Moreover, FairCurveModeler does not require a highly skilled designer: even with a non-uniform arrangement of points it creates a high-quality v-curve. Without exhausting fitting of curves to the desired quality, you design a better product in a shorter time. The authors can help you develop specialized applications on the basis of this concept and of FairCurveModeler. There is existing scientific and programming groundwork for specialized applications on the following topics: profiling a plane convex cam; tracing a road in the plane; modeling and improving airfoils. More about specialized applications can be found in the section Using Specialized Applications. The authors are also interested in applying the concept and FairCurveModeler in the aircraft industry, shipbuilding, the automotive industry, architecture, and industrial design. Requests for the development of specialized applications are received at [email protected]. 2. FairCurveModeler COM app for nanoCAD / AutoCAD We present a version of FairCurveModeler as a COM application for nanoCAD / AutoCAD. This application is a clone of the nanoCAD application for AutoCAD; it can work both in nanoCAD and in AutoCAD. The application implements interactive modeling of curves and surfaces, with all facilities working in the nanoCAD / AutoCAD graphical environment and with the ability to control the quality of curves and surfaces both with the Application and with the CAD system's own commands. The application is built on COM-automation technology and consists of a DLL component and a software interface programmed in AutoLISP. Communication between the COM server and nanoCAD / AutoCAD is performed via the so-called Geometric buffer. The Geometric buffer consists of three folders, Exec, Temp, and Result, located in the Tools folder of the application. This application extends the basic functionality of the web version of FairCurveModeler with the following features: modeling of clothoid segments; editing of regions of the s-frames of NURBS surfaces of arbitrary degree and format by deforming the region on the basis of a sample deformation function F(u, v); editing of regions of the s-frames of NURBS surfaces of arbitrary degree and format by the Coons formula. 2.1. Command V_Model Command V_Model is the basic command of the Application. The command has a multi-level string menu; depending on the selected object, different functions of the application become available. 2.1.1. Modeling curves The page Fair Curves describes the functions for modeling high-quality curves.
To demonstrate the functionality of the Application, a library of scripts is provided in the Application folder "Examples / Examples Curve". The scripts can be used for learning the Application. The page Scripts modeling of curves gives a list of the scripts; the set of scripts covers all the basic features and options of the Application. The page Examples. Fair Curves gives videos showing the execution of some of the scripts. 2.1.2. Modeling surfaces The page Geometric determinants of surfaces describes the geometric determinants used in the construction of surfaces. The page Options of construction describes the functions for modeling a set of forming curves, a set of guide curves, and a net of curves on a carrier 3D mesh, and describes the construction of uv-loft surfaces, NURBzS surfaces, b-spline surfaces, and NURBS surfaces. The page Technology of construction describes in detail the methods for constructing spline surfaces of different formats on various types of geometric determinants. The page Surfaces describes the options for working with spline surfaces of different formats. To demonstrate the functionality of the Application, a library of scripts is provided in the Application folder "Examples / Examples Surface". The scripts can be used for learning the Application. The page Scripts modeling of surfaces gives a list of the scripts; the set of scripts covers all the basic features and options of the Application. The page Examples. Fair Surfaces gives videos showing the execution of the main scripts. 2.2. Basic set of commands The page Base commands describes the basic commands of the Application. For ease of operation, a number of curve-modeling options from the command V_Model are issued as individual commands: a command for creating a v-curve and approximating it by a cubic NURBzS curve; a command for approximating the v-curve by a b-spline curve of high even degree m (m = 6/8/10); a command for subdividing the specification of a curve; a command for increasing the degree of NURBS curves; and a command for testing curves, which displays graphs of curvature and the evolute and prints the macroparameters of the curve (the variation of curvature and the value of the potential energy). Advanced options are also provided: a LISP program for approximation of curves on a Hermite geometric determinant (GD) given as a table of point coordinates, tangent vectors, and curvature values; and a command for creating and approximating a segment of a clothoid spiral. Examples of working with the basic commands in AutoCAD are given; in the examples you only need to paste certain LISP text fragments into the AutoCAD command line and press ENTER.
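For reference, the clothoid (Euler spiral) mentioned above is the curve whose curvature grows linearly with arc length; it is the classic transition curve for road traces. The sketch below (an independent Python illustration using SciPy's Fresnel integrals, not code shipped with the Application; the parameter values are arbitrary) generates points of such a clothoid segment.

import numpy as np
from scipy.special import fresnel

def clothoid_points(A, s_max, n=200):
    # Points of a clothoid segment with curvature k(s) = s / A**2.
    # In terms of the Fresnel integrals C and S:
    #   x(s) = A*sqrt(pi) * C(s / (A*sqrt(pi))),  y(s) = A*sqrt(pi) * S(s / (A*sqrt(pi)))
    s = np.linspace(0.0, s_max, n)
    scale = A * np.sqrt(np.pi)
    S_int, C_int = fresnel(s / scale)          # scipy returns the pair (S(z), C(z))
    return np.column_stack([scale * C_int, scale * S_int])

pts = clothoid_points(A=100.0, s_max=150.0)    # e.g. A = 100 m, arc length 0..150 m
print(pts[0], pts[-1])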
A Land of Every Possible and Impossible Colour

English artist Nora Cundell (1889-1948), along with her sister Violet and brother-in-law Charles Eaton, traveled to Arizona to see the Grand Canyon in 1934. Nora had read descriptions of the canyon and northern Arizona written by her friend J. B. Priestley, the English author and playwright. Priestley and his family made several visits to the Grand Canyon, the Vermillion Cliffs, Painted Desert, and Navajo country. His writings intrigued Nora enough to plan a trip to America to see those places for herself.

Nora’s plan was to visit the Grand Canyon first, but the canyon, unfortunately, was shrouded in what Nora described as a “thick sea fog.” Although disappointed, they decided to go down to the Painted Desert. At Cameron on the Little Colorado River, it was suggested that they should not miss Marble Canyon, about seventy-five miles to the north. Driving through the desert, Nora marveled at the array of colors. The hills were “rounded and fluted into the most fantastic shapes and of every possible and impossible colour—grey, brown, red, green, purple.” The cliffs were “tall, jagged and scarlet, and the hills were dotted with cedars, junipers, and an occasional hogan.” They drove through a sandstorm, with blowing sand rattling the outside of the car. Nora described the sand as being coarse and red, and noted that it stained water and clothing to the color of “weak tomato soup.”

Nora, Violet, and Charles crossed Navajo Bridge over the Colorado River and arrived at Vermillion Cliffs Lodge (today known as Marble Canyon Lodge), set in the barren desert and sheltered by a few cottonwood trees, along with a gas station and several cabins. Inside the lodge the main room had a huge fireplace and polished floors covered with skins and Navajo rugs. Comfortable chairs and books everywhere created a place that “looked more home-like than anywhere I’d seen for a long time.”

Nora was born in London and spent her early years at her beautiful home Further Dimmings in Dorney Village, Windsor, but after her first visit to the Vermillion Cliffs, she felt that she had finally found the place that was always meant to be her home. Nora had fallen in love. Nora’s and her sister Violet’s houses were on the same block, with Violet’s house, Hither Dimmings, at the front on the road and Nora’s house, Further Dimmings, at the back of the block. Thus, Hither and Further.

Nora returned to Marble Canyon the following year, in August 1935, sailing on the Red Star Line’s passenger ship Pennland. In planning for her extended trip to the States, she learned to ride a horse, rented out her house for the winter, and purchased great quantities of painting supplies. Upon her arrival in New York City, she purchased a seven-year-old Model A Ford coupé, which she drove across the country to Arizona. There she spent the winter at Marble Canyon with Buck and Florence Lowrey and their children David, Mamie, and Virginia.

Nora painted the dramatic landscape as well as local Navajo people, often dealing with uncooperative weather conditions. One day she went out sketching and looked up to see a cloud of red sand advancing toward her. She quickly packed her canvas and oils in the car, aware that staying outside in the wind was not an option, unless she wished to compete with a Navajo sand painter. On another occasion she climbed a narrow trail to the top of the mesa to do some watercolors, and found that her paints were frozen.
When the snows blocked the road, she would drive her car the few cleared miles from the lodge and sit in the frozen plain to paint, or simply to absorb the wild and silent stillness of the land under a blanket of snow. Nora also made extended pack trips with David Lowrey and guide Ed Fisher. On one trip Ed and David captured a wild colt with a white star on his nose. They presented the horse to Nora, and she named him Windsor after her home in England. They rode through Last Chance Canyon and down into Rock Creek, a narrow tributary of the Colorado River where they made camp. Before leaving the next morning they carved their names on the cliff, although Nora admitted that it was “an unworthy piece of touristry.” David Lowrey took Nora north to Kaibeto where they observed a rare Navajo Fire Dance—she later painted the unforgettable scene—and also recorded that they got terribly lost on the way back to Marble Canyon due to a snowstorm that obliterated the traces of the dirt shortcut they were following. Another time David took Nora to a rodeo in Phoenix. Nora also accompanied David as “acting assistant deputy sheriff” when they escorted two robbers (following them in the Lowrey’s DeSoto) to the jail in Flagstaff. Nora noted that David had his six-shooter, handcuffs, and a rifle in the back seat. She never did figure out her role if the robbers tried to get away, but she marveled that it was an unusual job for a “respectable British spinster.” After three more extended trips to Arizona, Nora wrote Unsentimental Journey, published in 1940. This volume is a journal of her time visiting and exploring the wild country in northern Arizona. Author P. T. Reilly who wrote extensively about the Arizona Strip described it as being “probably the least known and most remote part of the United States up to the time she was there.” Nora’s paintings and writings do give a sense of the vast beauty of the mesas and rugged canyons north of the Colorado River. Nora's last stay with the Lowrey's was in 1938. Due to the Depression, Buck and Florence could not keep up the payments and, consequently, had to move away. Ramon Hubbell took over the lodge and store. Nora did not wish to stay with the new managers, and she left Marble Canyon Lodge following along behind the Lowrey's automobile, a sad parting from the people and the land she had come to love. The second World War interrupted Nora's plans to return to Arizona. During the war years in London, Nora drove an ambulance at night. And there were changes at Marble Canyon. Young David Lowrey, Nora's companion on many camping and exploring trips was killed in action in 1945 when his ship was hit by a Japanese mine. Even though there was a substantial difference in their ages (Nora was the same age as David’s parents), it was apparent that Nora had a deep, and possibly romantic, affection for the handsome westerner. As a Dream That is Past: Nora’s Last Stay at Marble Canyon In late February 1947, Nora returned to the states, sailing from Southampton on the passenger ship America. It was an unusually cold winter and on the ship Nora suffered from chillbains. Her hands and feet were swollen and black and she could not wear shoes for several weeks after she arrived in America. Nora made arrangements to stay at Lee’s Ferry, near Marble Canyon. She visited with her old friends the Lowreys, and they took her to Lee’s Ferry in April where she stayed until summer. Jim Klohr was employed by the government to monitor Colorado River water levels near the old ferry crossing. 
Nora rented a little stone cabin, built in 1910, above the rock house where Jim and his wife Christina lived. She paid one dollar a day for board and meals. She had to be frugal with her money because at that time England had post-war currency restrictions and she could not take much money out of the country. Nora’s cabin was one-half mile from Paria River, and a half-mile from Lee’s Ferry, the historic crossing at the Colorado River, and the only crossing in the area until the completion of Navajo Bridge in 1929. Nora painted every morning and took long walks in the evenings. Sometimes Christina hiked with Nora; Christina collected petrified wood while Nora sketched and painted, and occasionally, Nora gave Christina painting lessons. When Nora left for England that summer, she planned on returning to stay at Lee’s Ferry later that winter. Sadly, she was diagnosed with cancer. After undergoing surgery, she spent time recovering with the J. B. Priestley family at their home on the Isle of Wight. Priestley recalled that “always, gleaming somewhere in the background, like the amethyst cliffs and diamond air of the Arizona Desert,” was their wistful longing and remembrance of the American Southwest. But Nora was not to recover, she passed away on August 3, 1948. When Nora went to Marble Canyon the first time, she thought she would stay for a few month’s painting and enjoy new experiences, “a very unsentimental journey.” But as more time went on, she grew to love everything about the wide, lonely desert: standing on the high, narrow bridge in the darkness and hearing the sound of the river far below, and the night sky of “deep, fathomless sapphire.” She wrote, “the night skies of Arizona are something to keep in the heart for ever,” and of sunsets with the whole sky “one vast circle of fire.” She would remember the scent of cedar wood fires, and the far-off song of a Navajo horseman riding along in the darkness. Recalling her thoughts as she left Marble Canyon for the last time, Nora wrote, “Behind me, I felt that the whole, vast Vermillion Cliffs, and all that they had stood for, were crumbling and dissolving, as a dream that is past.” According to Nora's wish to return to the Vermillion Cliffs that were so close to her heart, her ashes were returned to Arizona and scattered at Marble Canyon. Her friends gathered in May 1949 for a memorial service conducted by Preacher Shine Smith. Nora’s ashes were scattered at the base of the cliffs. A brass plaque was attached to a massive boulder where her ashes were scattered.
“How To Know The Night Sky As Well As You Know The Streets of Your Own Home Town” by Brian Ventrudo, Publisher, One-Minute Astronomer “Do not be afraid to become a star-gazer. The human mind can find no higher exercise.” – Garrett P. Serviss There are few sights as beautiful as the sky on a dark, clear night. The Moon, the crackling stars, and the graceful arc of the Milky Way across the sky have held humanity in awe since the time of our earliest ancestors. As you read this page, you’ll discover an astonishing resource that will make it easy to learn the stars and constellations as well as you know the streets of your own home town. And you won’t just learn a few bright stars. You’ll get a personal tour of hundreds of stars and the major constellations in the northern and near-southern sky, along with an introduction to the brighter galaxies, star clusters, and nebulae visible with the unaided eye or with a simple pair of inexpensive binoculars. Once you follow the sky tours in this resource, not one person in a thousand will know as much about the night sky as you do. And you will learn to easily find some of the most spectacular sights in the night sky, including… • A hazy patch of stars in the constellation Cancer… once used by ancient sky watchers to forecast oncoming storms, long before Galileo discovered this mysterious cloud was really a cluster of blue-white stars • The rich star fields towards the center of our galaxy in the constellation Sagittarius, home todozens of nebulae and star clusters within easy reach of a beginning star gazer with binoculars • Two immense spiral galaxies visible to the naked eye (and lovely in binoculars), the light of which you see has spent more than 2 million years crossing the void of intergalactic space (these are the most distant objects you can see without optical aid) • The “demon star” in the constellation Perseus that eclipses like clockwork every few days (you can easily see this star with the naked eye) • Two dazzling star clusters in Taurus that look better in a $50 pair of binoculars than in a $10,000 telescope • A glowing blister of interstellar gas in Orion that’s right now giving birth to hot, silver-blue young stars • A number of massive ancient red supergiant stars that are inexorably moving to the end of their lives as catastrophic supernova explosions The resource I’m talking about is Stargazing For Beginners: A Binocular Tour of the Night Sky, by the editors of One-Minute Astronomer. Even if you’ve tried to improve your knowledge of the heavens before, you haven’t tried anything like this. And you haven’t got results like this either. How can I be so sure? Because it’s helped countless people learn their way around the night sky… for more than 120 years. Stargazing For Beginners is based on a classic work of popular astronomy called “Astronomy With An Opera Glass” by Garrett Serviss. First written more than a century ago, Serviss’ book became essential reading for backyard stargazers. Many professional astronomers, as well as accomplished amateurs like Walter Scott Houston first learned the stars with this book. Even modern astronomy experts like Stephen J. O’Meara have a copy of Serviss’s original book in their personal library. But can a book first written in 1888 still be useful to 21st century stargazers? The answer is absolutely… yes! That’s because Serviss’ work takes a casual, friendly approach to learning the night sky that remains appealing across the decades. 
And since the positions of the stars change slowly, Serviss’s sky tours are just as accurate today as they were 120 years ago. Of course, the astonishing advances in astronomy over the past century have dated most of the scientific explanations in the original work. But our modern version of Serviss’s book, Stargazing For Beginners: A Binocular Tour of the Night Sky, includes a complete update of the science related to the stars and deep-sky sights described in the sky tours. That means Stargazing For Beginners gives you the best of both worlds… modern scientific explanation combined with the easy charm and fascinating historical tales of the original work.

Stargazing For Beginners also gives you up-to-date advice on choosing and using the optical instrument of choice for these sky tours: a simple pair of binoculars. Whether you already own a pair, borrow from a friend, or invest in a new set, you’ll discover…
• Why binoculars are better than a telescope for learning the stars and constellations
• The critical optical specifications of binoculars… what they are, why they matter, and how they affect your view of the stars
• How to test drive a pair of binoculars for astronomy (and which binoculars to avoid at all costs)
• How much to invest in a pair of binoculars for astronomical observing… and why spending more money is not always the best way to go
• Why larger-aperture binoculars may not be the right choice for you, especially if you’re over the age of 40 (this tip alone may save you $100 or more)
• And the truth about image-stabilized binoculars for astronomy… even if you can afford them, are they really worth the extra money?

As a stargazer, you’ll discover the greatest reward in observing the night sky lies in your imagination, as you reflect upon the astonishing forces at work in the cosmos. That’s why we make sure you get a taste of the science behind what you see in the night sky, including…
• How newly-born stars create shimmering nebulae out of the very gas and dust from which they were born
• A type of variable star in the constellation Cepheus that astronomers use as a “cosmic yardstick” to measure the size of the universe
• The “end-game” of stars like our sun, and how they expire by throwing off their outer layers as a beautiful planetary nebula
• Dense remnants of dead stars as massive as our sun yet only as small as the Earth
• Tightly-bound clusters of stars that are almost as old as the universe itself (you can see many of these clusters with binoculars from your backyard, once you know where to find them)

Plus you’ll learn about two key motions of the Earth, and how an understanding of these motions helps you accurately read a simple star map over the course of a season. After just a few nights, you’ll understand how the sky changes from hour to hour and month to month, and you’ll be able to read the sky like a pro.

Of course, the Moon is one of the most spectacular sights in the sky. Even the most modest binoculars reveal dozens of fascinating features on the ancient surface of Earth’s only natural satellite. That’s why we’ve included a free supplementary report called Observing The Moon And Planets With Binoculars to help you discover the most prominent craters, mountains, and dark-grey lava seas. You get a tour of the major surface features of the Moon and a full lunar map to help you easily find the features.
And you’ll learn…
• The absolutely best time to see features on the lunar surface, and why the full Moon is just about the worst time to see anything
• A relatively new crater in the southern lunar highlands that sprayed ejected material over the surface of the Moon as far as 2,000 km away
• Where to find the ranges of rugged mountains that tower more than 20,000 feet over the lunar surface
• An east-to-west tour of the main “seas” of the Moon, including maps of the locations of the six Apollo lunar landings of 1969-1972
• Why one side of the Moon is forever hidden from Earth-bound observers

And while binoculars aren’t the best tool for seeing the other planets in our solar system, you’ll also discover how to see the periodic dance of the four largest moons of Jupiter around the giant planet. It’s like seeing a miniature solar system changing night-to-night (even hour-to-hour), right in the field of view of your binoculars.

The Easiest Way To Know The Night Sky

Stargazing For Beginners: A Binocular Tour of the Night Sky is available as an immediate download in e-book PDF format so you can start learning your way around the night sky right away. It’s illustrated with more than twenty maps and images to help you find your way around the sky each month of the year. And it’s specially formatted to read on a computer screen, or to print out to read later at your convenience.

But we want to make it as easy as possible for you to learn the night sky. That’s why, when you download your copy of Stargazing For Beginners, you also get a special bonus set of star maps you can print out separately to bring out under the stars with your binoculars as you learn the sky. You can even have them laminated, if you like, to keep them from wrinkling in the damp night air.

And to save you the trouble of juggling maps, a book, and binoculars, you also get a free audio version of Stargazing For Beginners. Organized into five MP3 files, it can be loaded onto your iPod or favorite MP3 player so you can follow along in real time while you look at the sky with binoculars or your unaided eye. Or you can listen to the audio version of the book in your car, on the bus, or while walking to work. It will help you learn the sky that much faster. You will not find a more comprehensive package to help you learn the night sky.

See the Wonders of Deep Space From Your Backyard

You came to this web page because you want to learn the stars and constellations. This is your opportunity. Stargazing For Beginners gives you everything you need to know to identify many dozens of stars in the major constellations of the northern and near-southern hemisphere. With friendly, easy-to-follow tours and custom-made star maps for all four seasons, you will acquire expertise of the night sky that few people will ever enjoy. And you’ll most certainly be ready to acquire a deeper knowledge of astronomy, and to effectively use a telescope to see further into the heavens.
• The complete 93-page book, in PDF format, with the complete tours of the northern and near-southern sky in all four seasons, along with highlights of the brightest and most fascinating sights in the deep sky
• A set of custom star maps, taken from the e-book and enlarged for use outside, in real time, as you learn your way around the sky
• The bonus supplementary report called Observing The Moon And Planets With Binoculars to help you discover surface features of the Moon and get fine views of the bright planets, including Jupiter and its four brightest moons, and…
• An audio version of Stargazing For Beginners to load onto your favorite audio player to listen to at your convenience, or when you’re actually touring the sky with your maps and binoculars.

You need only supply a pair of binoculars, your own restless intellect, and a determination to know more about the universe.

How much will this cost? Look around the big bookstores and you’ll quickly see that a beginner’s book on basic astronomy runs $15 or $20, at a minimum. A set of basic star maps from a specialty website might go for another $10-$15. And most science audio books on a site like Audible.com will set you back $20 or even $30. That’s as much as $65 for the basic information in Stargazing For Beginners and the associated bonuses.

But you won’t pay $65 for this information. You won’t even pay $35. We’re making the entire package available to you for only $27. I’m sure you’ll agree, that’s an extremely affordable price for making a lifelong acquaintance with the wonders of the deep sky.

Simply click here to get started. You’ll be out under the stars on the next clear night, beginning your lifelong journey exploring the fathomless depths of deep space from your backyard.

Brian Ventrudo
Publisher, One-Minute Astronomer

P.S. You can try Stargazing For Beginners at absolutely no risk to you. If, after 60 days, you don’t think it’s the right tool to help you learn your way around the sky, simply ask for a full refund. No questions asked. No hassles. So, with nothing to risk, you can get started right away…

P.P.S. If you’re looking for a guide to the stars of the southern hemisphere, click here…
Probiotics are the good bacteria and yeast that benefit the body, specifically the digestive system. Unlike the germs that cause diseases, these are the good ones that fight on our side against ailments. Probiotics can be found in natural foods as well as from food supplements. It’s not exactly clear how natural probiotics work, however, research shows that when the gut loses the good bacteria, taking probiotics can help replace them. They function to balance the good and bad bacteria to ensure the body works as it should. Several bacteria types are classified as probiotics and each has a different health effect. Most of the probiotics, however, come from two main groups: Lactobacillus is the most common type and is commonly found in yogurt and fermented foods. Various strains of this bacteria group help with diarrhea, and lactose intolerance. Bifidobacterium is present in some dairy foods and helps with irritable bowel syndrome symptoms among other health conditions. Saccharomyces boulardii is a yeast classified under probiotics. It is known to help in the fight against digestive problems including diarrhea. Benefits of Probiotics 1. Weight Loss According to a meta-analysis, it is possible to reduce one’s body mass index by consuming probiotics. Ingesting more than one type of these natural probiotics over a period of 8 weeks results in significant weight loss. This research also showed that probiotics can improve blood sugar control and significantly affect sensitivity to leptin, the hormone that regulates appetite. This is crucial for those fighting with weight problems and type two diabetes. 2. Skin Conditions Some studies have shown that taking probiotics over time helps improve the skin health. With the right bacterial strain, it is possible to control skin problems like rosacea, eczema, and psoriasis. It all goes back to controlling inflammation. Since many skin conditions stem from inflammation, the probiotics can effectively minimize skin problems. 3. Improved Immune Health One can get tired of always being sick if every month you are on a different medication for a new infection. Research shows that the use of probiotics reduces instances of upper respiratory infections. This is because probiotics function to crowd out the infectious bacteria, therefore, lowering the symptoms of the need for medication in case of an infection. A better balance of the two opposite types of bacteria helps to control diseases, therefore, warding off issues of flu and cold. Some types of bacteria promote the manufacture of natural antibodies. These include the IgA-producing cells, natural killer cells, and the T-lymphocytes. The use of probiotics has can also fight off allergies. Those who use probiotics have been shown to suffer fewer episodes of the sniffles. It also reduces pro-inflammatory markers among the users. This may be due to the fact that natural probiotics alter the permeability of intestinal walls, therefore, keeping the pro-inflammatory agents from crossing into the bloodstream. 5. Improved Mental Health It has been reported that those who take probiotics feel happier and less affected by the troubles of life. Other researchers, though not sure of the link between probiotics and moods, suggest that the bacteria lowers the levels of anxiety, stress, and depression. They, however, think there is an effect on the gut-brain axis that signal between the nervous system and the GI tract. 6. Promotes Digestion Probiotics have an incredible benefit to the gut hence its popularity. 
These bacteria are known to restore the natural balance of GI tract bacteria. In case one has an imbalance that may result in sickness, taking probiotics restores health. When the population of bad bacteria is higher than that of the good ones, you will feel sick. It causes several digestive disorders and diseases which may only go away when you use antibiotics. These health conditions may progress to issues such as allergies, mental health disorders, and obesity. With regular use of probiotics, the gut and by extension, the body remains healthy. 7. Prevents and Treats Diarrhea When you include probiotics in your diet, it may prevent diarrhea or reduce its intensity if its already begun. The use of antibiotics can kill good bacteria in your gut. This is where probiotics come in and restore the lost good bacteria and intestinal flora to make your gut healthy again. 8. Oral Health Bacteria affects more than just our digestion. They play a role in our oral health. For this reason, the market is a flush with probiotic lozenges and gums to help maintain healthy teeth and gum. This helps prevent many oral diseases. Probiotics help prevent both mouth and throat infections that can cause painful symptoms. A dosage of probiotics may be what you need to deal with bad breath naturally. This will, however, work only in conjunction with good oral practice like brushing and flossing. 9. Vaginal Health Women have used both oral and vaginal Lactobacilli to help control vaginal infections. Natural probiotics help in treating bacterial vaginosis however, more evidence is required. Those who use probiotics are likely to encounter fewer pregnancy complications due to bacterial vaginosis. 10. Cardiovascular Health Taking probiotics regularly contributes to a lower LDL level. This is achieved by the lactic acid bacteria that break down bile in the gut and prevents its reabsorption, therefore, preventing bile from entering the bloodstream in the form of cholesterol. What Foods Have Probiotics 1. Organic Yogurt This is one of the most popular sources of probiotics in everyday diets. Yogurt is made from milk fermented using lactobacillus and bifidobacteria. As you consume this delicious snack, you are feeding your gut with beneficial bacteria. Yogurt helps reduce diarrhea resulting from antibiotics use and can relieve the symptoms of irritable bowel syndrome. Those with lactose intolerance will find this drink beneficial due to the lactic acid bacteria that helps break down the lactose in milk. The bacterium turns this lactose into lactic acid hence the characteristic sour taste. Remember not all yogurt contains live probiotics as some may get killed during milk processing leaving only a few or no good bacteria. Always choose brands with live bacteria to enjoy these health benefits. This fermented food contains lots of probiotics that can strengthen the gut, help in digestion and prevent digestive problems. Some brands of yogurt contain added probiotics including Lactobacillus acidophilus and Lactobacillus casei that contributes to the significant amount of probiotics in the drink. This, like yogurt, is a fermented milk drink though it is made by adding kefir grains to goat or cow milk creating the delicious and healthy snack. The kefir grains should not be confused with cereal since they are lactic acid bacteria cultures and yeast. There have been many health benefits associated with kefir including protecting against infections and resolving some digestive issues. 
Even though yogurt takes the lead as a popular source of natural probiotics, kefir is a better source. This drink contains several strains of good bacteria and yeast hence its diverse benefits and potency. It is beneficial to those with lactose intolerance and other digestive problems. Kefir has lactobacilli strains hence its benefit to those with lactose intolerance. These lactic acid bacteria digest lactose turning it into lactic acid. Due to its action, it reduces bloating. The bacteria are known to colonize the gut, therefore, healing the digestive tract and the body. This is a finely shredded cabbage which is then fermented by lactobacilli. This makes the European traditional food with significant health benefits apart from its nutritional value. This is often used on sausages or as a side dish. It gets its sour taste from the lactic acid coming from the bacterial digestion of the cabbage. Sauerkraut can be stored for months if the container is airtight. And apart from its probiotic properties, it is a source of fiber, iron, manganese, sodium, and vitamins B, C, and K. this food also provides antioxidants essential for eye health. It is crucial that you choose the unpasteurized type to get the live bacteria. This food has been in use for millennia and is not going away any time soon. It is fermented using lactobacillus which when ingested will bring balance to the gut, therefore, boosting immunity. This improves the population of healthy bacteria in the digestive tract and contributes to overall health. This food is made by fermenting soybean to form a firm patty with a nutty flavor. It originated from Indonesia and has become popular worldwide as a source of high protein meat alternative. When fermented, the soybean loses the phytic acid which prevents iron and zinc absorption. So the fermentation makes it better in addition to being a probiotic food source. The bacteria also produce vitamin B12 which naturally does not exist in soybean. Overall, tempeh is a very nutritious food with probiotics for good health. It is fermented with a yeast starter hence its benefits. The yeast helps fight infections in the GI tract for a healthy body. The results of the fermentation is a meaty, tender piece with neutral flavor hence its popularity as a seasoning item. Apart from its benefits to the gut, tempeh is an excellent source of protein and calcium. Kimchi is a Korean, spicy, fermented side dish. Its main ingredient is cabbage but it can be made from other vegetables. In addition to fermentation, it is flavored using a mix of seasonings including garlic, salt, red pepper, ginger, and scallion. Since it is fermented using Lactobacillus kimchii, and Lactobacillus brevis among other lactic acid bacteria, it is a very good source of this gut healthy probiotic. One does not have to go out of his way to eat this food since it can be used as a side dish very easily. Its cabbage source gives Kimchi high content of minerals and vitamins like B2, B6, and K. Kimchi made from cabbage is high in some vitamins and minerals, including vitamin K, riboflavin (vitamin B2) and iron. This Japanese seasoning is made by fermenting soybean with a fungus known as koji. Other ingredients can also be added as desired, these can include rice, rye, and barley. The resulting soup is commonly used in miso soup. Its salty and is available in brown, red, yellow, and white varieties. It is a good source of protein, fiber, vitamins, phytonutrients, manganese, and copper. 
Other than these nutrients and the probiotic benefit, miso has been associated with a reduced risk of breast cancer among middle-aged Japanese women. Those who regularly consume miso soup have a reduced risk of stroke. This goes to show how beneficial the probiotic is. It is made by fermenting soybean using the fungus Aspergillus oryzae. Miso is a complete protein and is effective at stimulating the GI tract, boosting the immune system and reducing the risks for several cancers and stroke. Kombucha is made by fermenting green or black tea in a colony of good bacteria and yeast. This healthy probiotic tea is consumed in the Asian countries and other parts of the world. The tea has several health benefits due to its nutrients and natural probiotic potential. Since it is fermented, Kombucha contains some amounts of alcohol giving it the carbonation. The tea has probiotic benefits and antioxidant properties with benefits to the immune system. The tea is fermented with lactic acid bacteria hence its increased concentration of lactic acid. This requires that it be taken in moderation to prevent lactic acid build up in the bloodstream. It also contains a yeast known as SCOBY. 8. Green Peas This may seem far-fetched, however, a 2014 study found that green peas contain the potent probiotic Leuconostoc mesenteroides. This probiotic is often associated with low-temperature fermentations. They effectively stimulate the immune system making it stronger. The probiotic also protects the gut’s mucosal barrier, therefore, preventing toxins and harmful bacteria from crossing into the bloodstream. 9. Green Olives The green olives in brine undergo fermentation under lactobacillus hence the lactic acid. This bacteria is naturally present on them and nothing needs adding to make it a delicious probiotic. The major bacterial strains in olives are the Lactobacillus plantarum and Lactobacillus pentosus. The fruits have a high potential for decreasing bloating, especially in those with IBS. The combination of probiotic strains plays a great role in keeping the balance of bacteria in the gut for great health. This, just like Miso and Tempeh is made from fermented soybean. It is a staple in Japanese kitchens and is mixed with rice to make breakfast enjoyable. This food contains the bacterial strain Bacillus Subtilis. The fermentation gives Natto a slimy texture, distinctive smell, and a strong flavor. It is a rich source of protein and vitamin K2. This makes the food beneficial to cardiovascular health in addition to the gut and immune system. Regular consumption of natto is associated with higher bone mineral density due to the high vitamin K2 in natto. It prevents skin problems like eczema, acne, psoriasis all which stem from inflammation. Regular consumption of foods rich in probiotics has been shown to give the body significant benefits. Users have stronger immune systems, reduced skin problems, a quieter and healthier gut and help with lactose intolerance among other benefits. These all depend on the type of natural probiotic in the food. The best options are those with more than one type of probiotic to give various health benefits. Embed This Image On Your Site (copy code below):
What is a tabstrip?

You've probably seen programs that have dialog windows with tabstrip controls. They appear to be a set of cards, each with a tab at the top. Click a tab and bring the attached card to the front of the pile. Here is one example that shows the second tab card clicked and brought to the front:

Let's Make a Tabstrip!

The tabstrip is part of the common control DLL, comctl32.dll. When we want to access the DLL, we must first make a call to initialize it:

'initialize DLL
calldll #comctl32, "InitCommonControls", ret as void

Once we've done that, we use CreateWindowExA to create the control. You may be asking why we use a "CreateWindow" function to create a control. Both windows and controls are created with this function.

We need to establish a struct and some constants for creating and manipulating the control. Liberty BASIC doesn't have true constants. We can mimic them by using variables that we take care not to change within our code. To differentiate them from variables, we type them in all caps.

'constants:
TCIF.TEXT = 1
TCIF.IMAGE = 2
TCS.MULTILINE = 512
TCM.INSERTITEMA = 4871
TCM.GETCURSEL = 4875
TCM.SETCURSEL = 4876

struct TCITEM,_
mask as ulong,_
dwState as ulong,_
dwStateMask as ulong,_
pszText$ as ptr,_
txtMax as long,_
iImage as long,_
lParam as long

We need to get the handle of our window, and then get the instance handle with GetWindowLongA. The instance handle is needed by the CreateWindowExA function.

hwndParent = hwnd(#1) 'retrieve window handle

' Get window instance handle
CallDLL #user32, "GetWindowLongA",_
hwndParent As long,_ 'parent window handle
_GWL_HINSTANCE As long,_ 'flag to retrieve instance handle
hInstance As long 'instance handle

We can now create our tabstrip control. We aren't using an extended style flag in this example. We use a class name of "SysTabControl32". This tells the function that we want to create a tab control. The next argument can be null, since the tab control doesn't have a caption.

The next argument is important. It sets the style flag for the control. The style bits are put together with the "OR" operator. All controls must have the _WS_CHILD style, since controls are children of the parent window. To make the control visible, we must also include the _WS_VISIBLE flag. _WS_CLIPSIBLINGS clips child windows relative to each other; that is, when a particular child window receives a WM_PAINT message, the WS_CLIPSIBLINGS style clips all other overlapping child windows out of the region of the child window to be updated. If WS_CLIPSIBLINGS is not specified and child windows overlap, it is possible, when drawing within the client area of a child window, to draw within the client area of a neighboring child window. We'll also use the style for multiline tab controls.

The following arguments set the location and size of the control. These are relative to the client area of the parent window. We also need the handle and instance handle of the parent window. The argument for the menu is null, because tabstrips don't have a menu. The function returns the handle to the tab control.
' Create control
style = _WS_CHILD or _WS_CLIPSIBLINGS or _WS_VISIBLE _
or TCS.MULTILINE
calldll #user32, "CreateWindowExA",_
0 As long,_ ' extended style
"SysTabControl32" as ptr,_ ' class name
"" as ptr,_ ' title
style as long,_ ' style
10 as long,_ ' left x
10 as long,_ ' top y
370 as long,_ ' width
250 as long,_ ' height
hwndParent as long,_ ' parent hWnd
0 as long,_ ' menu
hInstance as long,_ ' hInstance
"" as ptr,_ ' window creation data - not used
hwndTab as long ' tab control handle

We now have a control in place, but it doesn't have any tabs! We'll have to send messages to the tab control to add tabs, using the SendMessageA function. This requires the TCITEM struct that we created earlier. We fill the struct with information about the tab to be added. The mask member requires bits set that indicate which members of the struct are to be valid in the API call. These bits are for TCIF.TEXT and TCIF.IMAGE. The iImage member is set to -1, since no images will be displayed on the tabs in this demo. The pszText$ member is filled with the desired tab label. The txtMax member is not strictly needed for this function. It would be used to retrieve the tab label, however, so it is placed here for reference.

Once the struct is filled, the tab is added by sending the tab control the message TCM.INSERTITEMA. One argument is the index of the tab being added. Remember that indexes are zero-based, so the first tab has an index of 0, the second tab has an index of 1 and so on.

'set mask and fill struct members:
TCITEM.mask.struct = TCIF.TEXT or TCIF.IMAGE
TCITEM.iImage.struct = -1 'no image
TCITEM.pszText$.struct = "First Tab"+chr$(0)
'TCITEM.txtMax.struct=len("First Tab")+1 'used when retrieving text, not needed here

'add first tab:
calldll #user32, "SendMessageA",_
hwndTab as long,_
TCM.INSERTITEMA as long,_
0 as long,_ 'zero-based, so 0=first tab
TCITEM as struct,_
ret as long

We add additional tabs in exactly the same way. We'll have three tabs in our demo. Here is the way we add the remaining two tabs.

'add second tab:
TCITEM.pszText$.struct = "Second Tab"+chr$(0)
'TCITEM.txtMax.struct=len("Second Tab")+1 'used when retrieving text, not needed here
calldll #user32, "SendMessageA",_
hwndTab as long,_
TCM.INSERTITEMA as long,_
1 as long,_ 'zero-based, so 1=second tab
TCITEM as struct,_
ret as long

'add third tab:
TCITEM.pszText$.struct = "Third Tab"+chr$(0)
'TCITEM.txtMax.struct=len("Third Tab")+1 'used when retrieving text, not needed here
calldll #user32, "SendMessageA",_
hwndTab as long,_
TCM.INSERTITEMA as long,_
2 as long,_ 'zero-based, so 2=third tab
TCITEM as struct,_
ret as long

If you had a look at the control right now, you would notice that the font used for the captions of the tabstrips is rather ugly. That is easily fixed. We can get the default GUI font on the user's machine with a simple call to GetStockObject. This retrieves the handle to the font, which we then use in SendMessageA with a message of _WM_SETFONT to change the font on the captions.

calldll #gdi32, "GetStockObject",_
_DEFAULT_GUI_FONT as long, hFont as long

'set the font to the control:
CallDLL #user32, "SendMessageA",_
hwndTab As long,_ 'tab control handle
_WM_SETFONT As long,_ 'message
hFont As long,_ 'handle of font
1 As long,_ 'repaint flag
ret As long

We need to have some way to know when the user clicks on the tabs so that we can rearrange our tab pages. Liberty BASIC cannot read messages sent from the tab control to the parent window. We can, instead, use a timer to determine which tab has been clicked.
We keep track of the current tab and if the selected tab is different from the current tab, we do our changeover routine. We use SendMessageA with a message of TCM.GETCURSEL and the function returns the ID of the tab that is selected.

timer 300, [checkForTab]

'.............

[checkForTab]
'see if selected tab is the same
'as previously selected tab and
'change controls if tab has changed
timer 0 'turn off timer
'get the current tab ID
calldll #user32, "SendMessageA",_
hwndTab as long,_ 'tab control handle
TCM.GETCURSEL as long,_ 'message to get current selection
0 as long, 0 as long,_ 'always 0's
tabID as long 'returns selected tab ID
if tabID <> oldTab then 'change page displayed
oldTab = tabID 'for next check of selected tab
gosub [clear]
call MoveWindow tab(tabID), 20,40,350,210
end if
print #1, "refresh"
timer 300, [checkForTab] 'reactivate timer
wait

Now that we know how to create and manage the tab control itself, we'll need to know how to handle the other controls that are to appear on the tab pages. One easy way to do this is to include all needed controls in the window, placing the commands before the "open" statement for the window. Then we'll need to move the correct controls onto the window depending upon which tab is selected, and move all of the others off the window. We can do this with the "locate" command, being sure to "refresh" the window after the controls are moved. This is easy to do, but it requires quite a few lines of code to move each single control every time the user selects a tab (a short sketch of this approach appears at the end of this article). We'll use a different method that simulates "container controls" that are available in some other languages. Read about Container Controls below.

A container control holds other controls. Whenever anything happens to the container, the controls contained upon it are affected as well. Move the container and the child controls move with it. Hide the container and the child controls are also hidden. At first I didn't think we had this capability in Liberty BASIC, but then I remembered that we have a window with style "dialog_popup". This style has no titlebar. We can create a dialog_popup window for each tab and use it for that tab's page. Any controls on this window will move with it, so when we move a container window onto the program window, all of its controls move with it. We only need to make one call to move a control for each tab. We don't have to move each and every control used by the program.

Let's set up three dialog_popup windows to act as our three tab pages. We'll put a few controls on each one.

'first page
Statictext #tab1.s1, "First Tab Page!", 145, 75, 180, 30
Button #tab1.b1, "Button 1", [buttonOne], UL, 145, 140, 90, 24
open "" for window_popup as #tab1

'second page
Textbox #tab2.t2, 40, 40, 180, 30
Button #tab2.b2, "Button 2", [buttonTwo], UL, 40, 80, 90, 24
open "" for window_popup as #tab2

'third page
graphicbox #tab3.g, 0, 0, 350, 210
open "" for window_popup as #tab3

We can make a call to SetParent to make our dialog_popup windows children of the main program window. To handle this in a loop, we can get the window handles to these "container" windows and store them in an array.

hTab1=hwnd(#tab1):hTab2=hwnd(#tab2):hTab3=hwnd(#tab3)
dim tab(3) 'hold tab window handles in array
tab(0)=hTab1:tab(1)=hTab2:tab(2)=hTab3
'set popups to be children of main program window
for i = 0 to 2
call SetParent hwndParent,tab(i)
next

Whenever we want to change the page that is displayed, we can access a subroutine that moves all of the container windows offscreen in a loop.
This gives us a blank tab control.

[clear] 'hide all windows
for i = 0 to 2
call MoveWindow tab(i), 3000,3000,350,210
next
return

Once the tab control is clear, we can move the desired container window onto it.

call MoveWindow hTab1, 20,40,350,210

We've wrapped the SetParent and MoveWindow functions in Liberty BASIC functions like so:

Sub SetParent hWnd,hWndChild
CallDLL #user32, "SetParent", hWndChild As Long,_
hWnd As Long, result As Long
End Sub

Sub MoveWindow hWnd,x,y,w,h
CallDLL #user32, "MoveWindow",hWnd As Long,_
x As Long, y As Long,_
w As Long, h As Long,_
1 As Boolean, r As Boolean
End Sub

That is just about all we need to know. There is one "gotcha" though. If we include a graphicbox on one of the container windows, we will generate an error when the program ends. To avoid this, we do a GetParent call to get the parent window of the graphicbox. We'll store this handle in a variable for use later. When the program ends, we use SetParent to give the graphicbox its proper parent window again.

'because of graphicbox, get parent on third tab window for use later
hTab3Parent=GetParent(hTab3)

'........................

[quit]
timer 0
'because of graphicbox, restore parent to third tab window
call SetParent hTab3Parent, hTab3
close #1:close #tab1:close #tab2:close #tab3:end

'........................

Function GetParent(hWnd)
calldll #user32, "GetParent",hWnd as ulong,_
GetParent as ulong
End Function

Look at the whole demo here.
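For comparison, here is roughly what the per-control approach mentioned earlier (moving each control with "locate" and then refreshing the window) might look like. This is only a sketch, not part of the demo above: the control names #1.txtA, #1.btnA, #1.txtB, and #1.btnB are hypothetical, and the coordinates are arbitrary.

'sketch only: per-control page switching with "locate"
'assumes hypothetical controls #1.txtA and #1.btnA belong to page 1,
'and #1.txtB and #1.btnB belong to page 2
print #1.txtA, "locate 20 50 180 30" 'bring page 1 controls into view
print #1.btnA, "locate 20 90 90 24"
print #1.txtB, "locate 3000 3000 180 30" 'park page 2 controls offscreen
print #1.btnB, "locate 3000 3000 90 24"
print #1, "refresh" 'redraw the window so the moves take effect

Every control needs lines like these for every tab change, which is why the container-window method above requires far less code.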
What is Endocannabinoid system? If you’re into natural health, you’ve no doubt heard of the CBD craze. You probably have friends and family on both sides of the argument, and maybe you have questions about the truth behind it. Before you can understand how and why CBD Oil works, you first need to understand the endocannabinoid system and the function it has in your body. The term “endocannabinoid” can be sliced into 2 parts: “Endo” is short for endogenous, meaning “produced from within” and “Cannabinoid” refers to a chemical compound that interacts with receptors throughout the body involved in the release of neurotransmitters. Your body naturally produces cannabinoids and is what the nervous system uses to communicate between nerves and cells throughout the body). These same cannabinoids can be found in hemp plants. So putting the two terms together, the endocannabinoid system refers to a network of these natural, internal cannabinoids in various locations and their involvement with how our brain and nervous system communicates to the rest of the body. Since the first cannabinoid receptor was discovered in 1988, researchers have discovered the endocannabinoid system is neuromodulatory (meaning it controls and regulates the central nervous system and its many functions). As such, it interacts and helps govern all other body systems (i.e. lymphatic system, digestive system, etc). We now know that endocannabinoids and their receptors exist throughout the body, working to regulate virtually all bodily processes with the ultimate goal of achieving homeostasis – aka balance and stability. Cannabinoids work with the human body through the endocannabinoid system (ECS), a group of receptors that work to regulate your health and promote homeostasis. The ECS has two primary receptors, CB1 and CB2. CB1 receptors are mainly for the brain and nervous system. CB2 is mostly for the immune system. Different cannabinoids bind to these receptors to produce a wide array of effects and benefits. Each cannabinoid has their own characteristics, meaning you want as many as possible to maximize the desired result of homeostasis. Purpose & Benefits of the Endocannabinoid System: Homeostasis Life is all about balance and equilibrium. Science has known and studied the concept of homeostasis for over a century, but its exact mechanisms of action have been a little blurry. We know the body has a natural way of regulating body temperature, blood pressure, blood sugar, and more. But what regulates them? What tells the body to start sweating when you get too hot? Research is leaning more and more toward identifying the endocannabinoid system as the governing system that determines the health and function of all the others. So all your body functions involved in homeostasis: - Temperature regulation - Immune function - And more Are directly impacted and even regulated by the endocannabinoid system. It signals and activates these receptors with one purpose in mind – to create and maintain body balance. How the Endocannabinoid System Works To understand how the endocannabinoid system works, we have to look at its 3 primary components: - Endogenous Cannabinoids As a nod to their importance, there are more endocannabinoid receptors in your brain than any other place in the body! The endocannabinoid system runs like a lock and key communication mechanism withreceptors being the locks – in other words, the site in which something is received. In this case, it’s messages. 
There are two main types: CB1 receptors: These are primarily located in the brain, nervous system, and spinal cord. They help govern things like memory, pain level, appetite, and mood. CB2 receptors: These exist in many organ systems (liver, kidney, spleen, etc) but are most prominent in the immune system and the skin. Their primary role is regulating inflammation responses and pain management. If receptors are the locks, think of cannabinoids as the keys. Only certain types of cannabinoids can “fit” or activate certain receptors. Each time a cannabinoid binds to a receptor, it generates a unique response, imparting a set of instructions to the cell (how to feel, what to do, etc). Endocannabinoids are those produced inside the body, and there are two main types: Anandamide: This is known as the “bliss molecule,” and it’s the brain chemical responsible for the runner’s high, boosting feelings of happiness, increasing nerve cell creation, and fighting depression. 2-arachidonoylglycerol (2-AG): This is the most widespread endocannabinoid and is uniquely involved in pain and immune responses as well as appetite management. Then there’s the cleanup crew called enzymes. They work to break down and recycle endocannabinoids so they don’t stick around for longer than needed. Even too much of a good thing can be bad, and that’s the case here. There are two primary types to ensure the two main endocannabinoids are processed properly: - Fatty acid amide hydrolase (FAAH): Breaks down anandamide. - Monoacylglycerol lipase (MAGL): Breaks down 2-AG. The Endocannabinoid System and CBD: A Healing Relationship The endocannabinoid system isn’t limited to our naturally occurring cannabinoids. It interacts with external ones as well. That’s where cannabidiol, or CBD, comes in. CBD is a natural cannabinoid found in hemp plants. Instead of binding to receptor sites like endocannabinoids, CBD impacts them indirectly by suppressing the FAAH enzyme responsible for breaking down anandamide. As a result, you’re left with more of the “bliss molecule” floating around and all of the happy and pain-relieving effects that go with it. Because of this, CBD has been shown to help alleviate symptoms associated with: - Nerve and muscle-related pain - Inflammatory conditions - And more It’s important to note that high quality CBD has virtually no THC – the cannabinoid responsible for “highs” associated with marijuana. CBD is available as pure hemp oil, concentrates, capsules, our Best CBD oil tincture, topical creams, and more. The discovery of the endocannabinoid system and CBD’s effect on it represent a breakthrough in natural medicine. Instead of using synthetic substances that our bodies were never meant to process, cannabidiol provides a safe, natural alternative that our bodies are already programmed to receive. Since cannabidiol does not contain THC, it does not replace nor compete with our body’s own endocannabinoids – it simply helps them work better. Choosing Quality CBD: How to Avoid the Scams For your body to truly take advantage of the Best CBD oil effects, it must be optimized for the endocannabinoid system in several ways. If it’s not, your body can’t absorb it, and you miss out on all it has to offer. As research surfaced on the endocannabinoid system and incredible benefits of cannabidiol, manufacturers started jumping on the CBD bandwagon – and cutting corners in the process. So to help you not waste your hard-earned money, here are the most important characteristics you need to consider. 
Without each of the following being at just the right levels and potencies, your CBD oil won’t be optimized for maximum use: - Active Molecule Count (AMC): True potency is determined by the Active Molecule Count (AMC) – or how much actual cannabidiol the oil contains. Most companies have an AMC in the 2% to 28% range. As you shop, look for CBD oil with verified testing that identifies a high-potency Active Molecule Count (above 50%). - Carrier Oil: Most companies use low-quality MCT oil – a fatty acid derived from coconut oil. This leads to a low absorption rate of just 7%, causing it to be flushed from your body in only 12 hours. - Hemp Grade: Most CBD oil is low grade, originating from China, and is injected with a cheap CBD isolate in an effort to increase potency. The problem with this route is that all the cofactors that work together to form true, high potency CBD oil are missing. When it comes to hemp oil, the sum of its whole is greater than its parts – isolating components is not effective. Check the stats on any CBD supplements you’re considering to see if they stand the test of purity, potency, and bioavailability like our CBD here at Native Nutrition. Our Best CBD oil drops features: - Active Molecule Count of 60% - Third-party lab tested - Super blend of carrier and driver oils in proper ratios to maximize absorption - Absorbed at a rate of 60%, remaining in your system for 2-3 days - Crafted by leading scientists with a potent phytocannabinoid-rich hemp strain for maximum benefit - Patented strain of Whole Plant, Broad Spectrum Hemp Oil with over 80 phytocannabinoids, terpenes, flavonoids, omega fatty acids and other nutrients to help you achieve optimum health and wellbeing. - Made in FDA/GMP-approved facility - Guaranteed to be 100% THC free. The endocannabinoid system is a wellness breakthrough that’s just now being studied extensively. We mentioned two cannabinoid receptors, but researchers are working to identify a third. Who knows what new, exciting benefits are around the corner. One thing is clear – the endocannabinoid system is critical in our body’s ability to maintain homeostasis, and anything we can do to support it should be considered. Just be sure to do your homework and source your CBD oil and Cannabigerol (CBG) oil from a reputable company with research and third-party lab testing to back up their claims.
Most reasonable people can see the benefits of using fully autonomous systems, particularly to help prevent injuries or death, as is the case with advanced driver assistance systems increasingly found in automobiles. When it comes to autonomous systems that are designed to take life rather than preserve it, there is significantly more debate. Currently, the U.S. and other nations do not have any weapons systems that can operate fully autonomously, which is defined in military parlance as selecting, aiming, and firing at a target without a human being "in the loop," or somehow in control of the weapon system. However, a variety of military weapons systems operate semiautonomously, requiring some human control or input to select or choose targets, but relying on pre-programmed algorithms to execute a strike. A good example of this is the Lockheed Martin Long Range Anti-Ship Missile (LRASM) system, slated to enter service in the U.S. defense system within the next two years. The LRASM can be fired from a ship or plane and autonomously travel through the air, avoiding obstacles outside of the target area. Published reports indicate humans choose and program the algorithms to seek out and identify potential targets, thus keeping a human in the loop. While the exact factors that make up the target selection algorithm are classified, it is likely a weighting of elements such as the target's size, location, radar signature, heat profile, or other elements that positively identify the target. Another example of a system with semiautonomous capabilities is Samsung's SGR-A1, a military border sentry robot in development for deployment on the border between North and South Korea. Essentially an unmanned guard tower, the system is designed to assist border guards by scanning the area for those who might try to cross the border. The system is armed with a light machine gun and can dispense tear gas or rubber bullets, and is equipped with cameras, a laser range finder, and a pattern recognition algorithm designed to discern between people and animals. Currently, the system is designed to be operated under human control for target verification, though developers have given it the capability to use its sensors to detect, select, and shoot at targets autonomously. It is this last capability that has watchdogs worried. Systems such as LRASM and the SGR-A1 are now only approved for use with a human approving targets to be killed by the system, but there is considerable concern the U.S. and other world powers are on the fast track to developing machines able to kill people independently. "I think it's pretty clear that military mastery in the 21st century is going to depend heavily on the skillful blending of humans and intelligent machines," says John Arquilla, professor and chair of the Department of Defense Analysis at the U.S. Naval Postgraduate School in Monterey, CA. "It is no surprise that many advanced militaries are investing substantially in this field." Indeed, in late 2014, U.S. Secretary of Defense Ash Carter unveiled the country's so-called "Third Offset" strategy, essentially an attempt to offset the shrinking U.S. military force by incorporating technologies that improve the efficiency and effectiveness of weapons systems. While many of the specific aspects of the strategy are classified, industry observers agree a key tenet is increasing the level of autonomy in weapons systems, which will improve warfighting capability and reduce the number of humans required to operate weapons systems.
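To make the idea of such a weighted target-selection rule concrete, here is a purely illustrative sketch. Every feature name, weight, and threshold below is a hypothetical stand-in; the actual criteria used by systems such as LRASM are classified and certainly far more elaborate.

```python
# Purely illustrative sketch of a weighted target-scoring rule of the kind
# described above. All feature names, weights, and the threshold are
# hypothetical stand-ins; the real criteria are classified.

FEATURE_WEIGHTS = {
    "size_match": 0.30,       # how closely the contact's size matches the expected target class
    "location_match": 0.25,   # whether the contact lies inside the designated engagement area
    "radar_signature": 0.25,  # similarity of the radar return to the expected profile
    "heat_profile": 0.20,     # similarity of the infrared signature to the expected profile
}

ENGAGEMENT_THRESHOLD = 0.85   # hypothetical minimum confidence before a strike is recommended


def target_score(features):
    """Combine normalized feature scores (each in [0, 1]) into one confidence value."""
    return sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0) for name in FEATURE_WEIGHTS)


def recommend_engagement(features):
    """Return True only if the weighted confidence clears the threshold."""
    return target_score(features) >= ENGAGEMENT_THRESHOLD


if __name__ == "__main__":
    contact = {"size_match": 0.9, "location_match": 1.0,
               "radar_signature": 0.8, "heat_profile": 0.7}
    print(target_score(contact), recommend_engagement(contact))  # 0.86 True
```

In a semiautonomous system of the kind described above, a contact that clears the threshold would still be passed to a human operator for approval rather than engaged automatically.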
"For over a decade, we were the big guy on the block," explains Major General (Ret.) Robert H. Latiff, an adjunct professor at the Reilly Center for Science, Technology, and Values at the University of Notre Dame, and Values Research Professor and an adjunct professor at George Mason University. "But with the emergence of China, and the reemergence of Russia, both of whom are very, very technically capable, and both of whom have invested fairly significantly in these same technologies, it goes without saying that the DoD (the U.S. Department of Defense) feels like they need to do this just to keep up." Military guidelines published by the U.S. Department of Defense in 2012 do not completely prohibit the development and use of autonomous weapons, but require Pentagon officials to oversee their use. That is why human rights groups such as the Campaign to Stop Killer Robots are actively lobbying the international community to impose a ban on the development and use of autonomous weapons systems. "We see a lot of investment happening in weapons systems with various levels of autonomy in them, and that was the whole reason why we decided at Human Rights Watch back in 2012 to look at this," explains Mary Wareham, advocacy director of the Arms Division of the Human Rights Watch, and coordinator of the Campaign to Stop Killer Robots. "We're seeking a preemptive ban on the development, production, and use of fully autonomous weapons systems in the United States and around the world." "We're focusing quite narrowly on the point at which critical functions of the weapons system become autonomous," Wareham says. "The critical functions that matter to us are the selection and identification of the target, and the use of force." Wareham's main issue is that the fully autonomous weapons systems of the future may rely solely on algorithms to target and kill enemy targets, without a human in the loop to verify the system has made the right decision. Human Rights Watch is pushing to have a negotiated international treaty limiting, restricting, or prohibiting the use of autonomous weapons systems written and ratified within the next two or three years. "If this becomes [a decade-long process], then we're in big trouble," she admits, noting that at present, there are no such treaties in process within the international community. "At the moment, it is just talk," Wareham acknowledges. A key argument of groups such as Human Rights Watch is that these systems, driven by algorithms, may make mistakes in target identification, or may not be able to be recalled once deployed, even if the scenario changes. Others with military experience point out that focusing on the potential for mistakes when using fully autonomous weapons systems ignores the realities of warfighting. "We're seeking a preemptive ban on the development, production, and use of fully autonomous weapons systems in the United States and around the world." "I think one of the problems in the discourse is the objection that a robot might accidentally kill the wrong person, or strike the wrong target," Arquilla says. "The way to address is this is to point out that in a war, there will always be accidents where the innocents are killed. This has been true for millennia, it is true now, and it is true with all of the humans killed in the Medecins Sans Frontieres hospital in Afghanistan." Arquilla adds that while the use of artificial intelligence in weapons will not eliminate mistakes, "Autonomous weapons systems will make fewer mistakes. 
They don't get tired, they don't get angry and look for payback, they don't suffer from the motivated and cognitive psychological biases that often lead to error in complex military environments." Furthermore, military experts feel an outright ban would be impossible to enforce due to the secretive nature of most militaries, and likely would not be in the best interest of any military group or nation. Indeed, even with today's military technologies, getting the military or its contractors to discuss the exact algorithms used to acquire, select, and discharge a weapon is difficult, as disclosing this information would put them at a distinct tactical disadvantage. Therefore, even if a ban were to be put in place, devising an inspection system similar to those used for chemical weapons and anti-personnel mines would be extremely complicated. "A ban is typically only as good as the people who abide by it," Latiff says, noting that those who will sign and fully abide by a ban make up "a pretty small fraction of the rest of the world." In practice, he says, "When something becomes illegal, everything just goes underground. It's almost a counterproductive thing." Work on autonomous weapons systems has been going on for years, and experts insist expecting militaries to stop developing new weapons systems that might provide an advantage is foolhardy and unrealistic. As such, "There is absolutely an arms race in autonomous systems underway," Arquilla says. "We see this in both countries that are American allies, and also among potential adversaries. In particular, the Russians have made great progress. So have the British; they are putting together a fighter plane that can do everything a piloted fighter plane can, and can be built to higher performance characteristics, because you don't have a human squeezed by G-forces in the cockpit." Others agree, even if they admit that at present, there are no significant advantages to using fully autonomous weapons versus the semiautonomous systems already in use. "I would imagine that [as autonomous weapons] become more capable, they will be seen to operate more effectively than systems with humans in the loop," says Lieutenant Colonel Michael Saxon, an assistant professor teaching philosophy at the U.S. Military Academy at West Point. "Once you introduce swarms or you have to respond to adversary systems that are autonomous, humans in the loop will create real disadvantages. This, of course, is all predicated on advances in these machines' capabilities." Still, observers suggest that while a ban on autonomous weapons may not be the right course of action, a deliberate approach to developing and incorporating them into the military arsenal is prudent. "I think we're probably closer to the kind of capabilities we're talking about than most people think, and Russia and China are, too," Latiff says. "These are dangerous systems. In the wrong hands, these things could really be awful." A ban on autonomous weapons would likely have little impact on the development of weapons systems in the near future, which still will be overseen by humans, even if the actual decision to select and strike a target is made by an autonomous system. "An autonomous weapon operates without human control, but that does not mean that it is free from human input," Arquilla says. "There are a lot of elements to the chain [of command]; the unfortunate term is the 'kill chain.' There will be people and machines intermixed within the chain."
Whether or not a ban is put into place, the international community is likely to be faced with significant moral and legal questions surrounding the use of autonomous weapons, and whether they will be developed in ways that are consistent with accepted ideas about ethics and war, Saxon says. "There are good arguments about increased moral hazard with autonomous weapons systems, that they make killing too easy," Saxon says. "I think they also have an effect on traditional military virtues that need to be examined. What does it mean to be courageous, for instance, when your machines take the risks and do the killing?" For Latiff's part, while he does not support a ban on autonomous weapons, he would support a non-proliferation treaty allowing militaries to research and test these systems to ensure they can be made as reliable and safe as possible. "At the end of the day, it's kind of like nuclear weapons," Latiff says. "Everybody's going to get them, and the people that don't get them are going to want them. The best we can hope for is that we slow it down."

U.S. Department of Defense 2012 Directive on Autonomous Weapons: http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf
Campaign to Stop Killer Robots: https://www.stopkillerrobots.org/
Video: Scary Future Military Weapons Of War-Full Documentary: https://www.youtube.com/watch?v=DDJHYEdKCBE

This article is thoughtful and well-balanced, but we need to ask, "How do you know whom to target?" From the Guantanamo base prisoners to the targets of autonomous vehicles in Pakistan or Yemen, we depend on intelligence from the ground to choose our victims--and the intelligence is often false. We can't get boots off the ground. We need to embed in and work with the populations we wish to protect. Remote warfare--unless it is unleashed completely from ethics, responsibility, and long-term consequences--is a fantasy.

Thanks for reading the article. Your point is valid. I don't think anyone believes that autonomous weapons will eliminate traditional military activity. The importance of getting boots on the ground, as well as winning hearts and minds, likely will continue to be relevant for many years to come.

The following letter was published in the Letters to the Editor in the March 2017 CACM (http://cacm.acm.org/magazines/2017/3/213824). "Can We Trust Autonomous Weapons?" as Keith Kirkpatrick asked at the top of his news story (Dec. 2016). Autonomous weapons already exist on the battlefield (we call them land mines and IEDs), and, despite the 1997 Ottawa Mine Ban Treaty, we see no decrease in their use.
Moreover, the decision as to whether to use them is unlikely to be left to those who adhere to the ACM Code of Ethics. The Washington Naval Treaty of 1922 was concluded between nation-states, entities that could be dealt with in historically recognized ways, including sanctions, demarches, and wars. An international treaty between these same entities regarding autonomous weapons would have no effect on groups like ISIS, Al-Qaida, Hezbollah, the Taliban, or Boko Haram. Let us not be naïve . . . They have access to the technology, knowledge, and materials to create autonomous weapons, along with the willingness to use them. When they do, the civilized nations of the world will have to decide whether to respond in kind (defensive systems with sub-second response times) or permit their armed forces to be outclassed on the battlefield. I suspect the decision will seem obvious to them at the time.
Joseph M. Saur, Virginia Beach, VA
Previously, we have shown that the earliest form of ecclesiology in Christianity has its origins in ancient Jewish practice. Though Jews, other than the Essenes, did not use the terminology of the monepiscopacy, their hierarchy was for all intents and purposes the same. There was a High Priest who ruled as a president of the Jerusalem Sanhedrin and was only able to operate with the majority consent of that body. Lesser Sanhedrins wielded similar ecclesiastical authority in their respective regions. These bodies were led by a Nasi and their "elders." Local synagogues were led by "rulers" (who were sometimes referred to as "elders"), but also priests. It appears the priests did not have a ruling function, however, but strictly a religious one unless they were also a "ruler." The preceding organization appears very similar to a metropolia in function, with two key differences. First, Judaism's Great Sanhedrin was centered in Jerusalem, while the Church lacked such a center after the Apostles had left the city. Instead, the Apostles' successors only functioned as a Great Sanhedrin when they came to a consensus (as opposed to a majority vote, as the Sanhedrin operated), something that did not happen with everyone in the same room until the Council of Nicea I. Second, on the local level synagogues sometimes appeared to have as many as several rulers. Levitical priests and sometimes lay elders, who may have not been rulers in their own right, did not have ecclesiastical authority but they led worship. While we have indications that the early Church also functioned with several rulers in even a small parish in a fashion similar to a lesser Sanhedrin (see Phil 1:1 and page xi of this link), there is simply no analogue for the two-tiered leadership of the early synagogue. In this article, we continue to look at the workings of early Jewish ecclesiastical authorities in order to determine how Christian ecclesiology developed from Judaism.

The Workings of Local Synagogues.

We have very little written evidence about how a local synagogue worked, but history has bequeathed us two relevant passages:

And then some priest who is present, or some one of the elders, reads the sacred laws to them, and interprets each of them separately till eventide; and then when separate they depart, having gained some skill in the sacred laws, and having made great advances towards piety (Philo, Hypothetica, 7:13).

Theodotus, son of Vettanos, a priest and an archisynagogos ("ruler"), son of an archisynagogos, grandson of an archisynagogos, built the synagogue for the reading of Torah and for teaching the commandments; furthermore, the hostel, and the rooms, and the water installation for lodging needy strangers. Its foundation stone was laid by his ancestors, the elders, and Simonides (Theodotus Inscription).

Philo presents a synagogue hierarchy of "some priest," who is not necessarily above or below elders [i.e. rulers of the synagogue], and finally "they" [i.e. laity]. The Theodotus Inscription has a hierarchy of ruler > elders > then presumably laity. We can also see that rulers can also be priests, just as in Christianity a bishop is also a priest. From the preceding passages, we do not see any explicit role for the Sanhedrin. Rather, in a worship context the following was the hierarchy: priest with ruling capacity or ruler > elder > layman. Apparently, the attendance of a priest was either not common, as "some priest" presided and not the same one, or there were many priests and they took turns in leading worship.
In our previous article, we were able to infer from the Scriptures that Sanhedrin members were given precedence.

Jewish "Ecclesiology" After the Destruction of the Temple.

With the destruction of the Temple, local synagogue hierarchy simplified and the once Great Sanhedrin became a committee of advisers. Unilateral authority was held by the Nasi/Apostle, and he was essentially the honorary President of the Sanhedrin. The Nasi's role as president of the Sanhedrin appears to have been that of a figurehead, with no real interaction with the body (see The Zugot and The Great Bet Din). The Nasi was for all intents and purposes the "Patriarch" of Judaism, and he was able to replace local rulers (archisynagogos) without any checks or balances. The Sanhedrin was increased from 71 members to 72, which would have made it a body unable to break deadlocks. This was probably not important, as the institution lost the power to adjudicate more and more things. Further, there was no longer a High Priest, so the Sanhedrin really was no longer the same. The first Nasi was a Pharisee, Gamaliel II. For the first time, the Jewish people were led religiously by a layman, and this order of things would persist in Roman times. From this we may surmise a diminished role for Levites ecclesiastically, though they still had a function in local worship. Because of the preceding, Jewish observers have concluded that for all intents and purposes the Sanhedrin system had ceased to exist:

When the Mishnah was compiled, towards the end of the second century CE, the Sanhedrin was already a thing of the more or less distant past. As an institution it does not seem to have survived the destruction of the Second Temple; it may even have been falling into decay for some time before that event (Introduction, Page XI).

We do not have a lot of written evidence pertaining to post-Temple ecclesiology. However, we have some idea of the Nasi's function and how local Jewish worship evolved in the writings of Saint Epiphanius:

Since [Josephus the Patriarch was] very severe as an apostle [the Jewish name for the Patriarch, see Panarion 30:4,2] should be—as I said, this is their name for the rank—and indeed was a reformer, he was always intent on what would make for the establishment of good order and purged and demoted many of the appointed synagogue-heads, priests, elders and "azanites" (meaning their kind of deacons or assistants), many were angry with him (Panarion 30:11,4).

As we can see in the preceding, the Patriarch had the ability to demote synagogue rulers and priests from their roles, as well as elders and deacons. There is no indication he relied upon any authority emanating from the Great Sanhedrin. By the time the preceding was written, in the late fourth century, it is also possible that Judaism had more "offices." For example, the role of "priests" and "azanites" is not clear, but without literal sacrifices the Jewish liturgy probably became more theatrical and required more participants. The parallels with Christianity in the above passage are clear. One Jewish source recognizes this. It appears that the Nasi was the Patriarch and synagogue-rulers were akin to the local city/regional bishops. This state of affairs was probably already prevalent before the destruction of the Temple. This is corroborated by the New Testament: the Greek indicates that Crispus alone was the sole synagogue ruler in Corinth. It appears he changed his name to Sosthenes, and the definite article precedes his title as the synagogue ruler.
Corinth surely would have been the seat of Achaia, where there must have been several synagogues. In the above passage, priests were a hereditary office and they headed worship in some local synagogues, elders would head worship in synagogues which did not have priests (i.e. the "rulers" we see in the Scriptures), and "azanites" were Jewish deacons. Hence, it appears that without Temple worship, the aforementioned tension between elders and priests had dissipated, as there was no longer a real role for priests in the Pharisaical, rabbinic mode of worship post-Temple. Priests were simply local synagogue worship leaders with Levitical blood, and elders were the same but composed of men from other tribes. They both would be under the leadership of "synagogue-heads," which essentially took the place of the lesser Sanhedrins. Sources are scanty, so it cannot be known for certain how quickly Judaism evolved into what we see Saint Epiphanius writing of. However, the following is worthy of consideration. Without the Temple, any Levitical claims to leadership evaporated, and so the Sadducee party completely disappeared. Even before the destruction of the Temple, diaspora-Judaism was already evolving in this direction. The final catalyst for this change in Judaism was more direct Roman meddling. We do not know if lesser Sanhedrins continued functioning without the Great Sanhedrin being an intact institution. Thanks to Epiphanius, we know that the Nasi would have appointed his own functionaries to serve at the local level, and a multiplicity of de jure local leaders would have been superfluous in such a system. So, it appears that Judaism was able to adjust to not having a vibrant Sanhedrin system, as it was probably already evolving in this direction anyway. All it required was a catalyst to crystallize these changes into a more streamlined, less Levitical system.

Possible Ramifications on Christianity.

As we discussed before, Saint Ignatius was already positing a monarchic episcopacy by the early second century. In light of the preceding, this probably was not an invention of Saint Ignatius himself (a thesis which is strange anyway, given that he wrote to so many differing localities taking it for granted). Nor was this something imposed by the Roman state, as Christianity at this time was persecuted by the Romans. Rather, the most likely reason is that Christians were emulating what was happening within Judaism. As we discussed in the previous article, diaspora Judaism was already evolving. It was a Jerusalem-focused religion revolving around a hereditary priesthood, but it was becoming a religion where the faith of individual Jews, regardless of tribe, made worship divorced from the hereditary priesthood outside of Jerusalem possible. Local worship, which did not revolve around a sacrifice, was increasingly led by non-priests holding celebratory roles. Christianity did not have a hereditary priesthood, so it inherited the Jewish ecclesiology without the dynamics of Levitical control at the macro or micro level. The preceding explains why, during Apostolic times, both Judaism and Christianity had a hierarchy with functionaries that served identical purposes within the hierarchy:
1. Local synagogues/churches with as many as several rulers (Acts 8:15 and Phil 1:1).
2. A Great Sanhedrin/college of Apostles.
3. Lesser Sanhedrins/local Archbishops with lesser rulers/bishops below them (Titus 1:5).
But Christians, unlike the Jews, had:
1. Local congregations led directly by bishops and/or elders who exercised priestly functions without being hereditary priests. Jews were led simply by rulers whose priestly status was unimportant.
2. Deacons employed as worship helpers as well as organizers for the congregations' needs, while Judaism either used another elder for this function or did not develop the need for azanites liturgically until after the Temple.
3. A Christian equivalent to the Sanhedrin that did not operate based upon majority vote, nor were its synods given to exact numbers (i.e. 23, 70, etcetera) to correspond to the Sanhedrins of their times.
What may account for these differences? It is clear that the Christian system was:
1. More strictly Scriptural, in that it maintains the superior position of priests within its hierarchy.
2. Consistent with the hierarchy of the Essenes ("Bishop" > Priests > Seniors > Juniors), who unlike mainstream Judaism maintained Levitical leadership and did not have the sort of theological inconsistencies that existed in diaspora-Judaism's ecclesiology.
3. Rejecting of the Oral Law, with its formulations that justified the necessity of odd-numbered councils to avoid votes split down the middle, and instead followed the spirit of the written Law. A verse like Ex 23:2 ("you shall not follow a crowd to do evil") may have been interpreted to mean that a simple tyranny of the majority was not sufficient for the nation of Christians. Rather, ecclesiastical decisions must please "the apostles and elders, with the whole church" (Acts 15:22).
Hence, after Apostolic times/the destruction of the Temple, Judaism appears to have returned to a hierarchy with less intersecting roles, due to the priesthood no longer having a significant role. This is the exact opposite of what was true in Biblical times, where the laity had no significant religious role. Nevertheless, with the increasing role of the Nasi and the dissolution of a functioning Sanhedrin, Christianity (which intellectually was still a Jewish movement up until Christians' expulsion from the synagogues around the time of the Temple's destruction) probably followed suit. The preceding is significant because, if the lesser Sanhedrins were devolving into either mostly ceremonial or even non-existent bodies, churches may not have maintained their former proportions of local leaders. Why? Culturally, as Christianity was still very Jewish, it would have felt strange not to act like fellow Jews. Hence, the proportion of Bishops in Christian churches appears to have decreased to the point that Ignatius takes for granted that local churches have one Bishop. As we discussed before, this was probably a proportion that existed in some corners of Christendom already, such as Corinth (though Corinth and the entire isthmus of Achaia probably had one Archbishop like Sosthenes and several Bishops below him, as indicated by 1 Clem 44). Nevertheless, ever since the first century, we have seen an increase in the magnification of the monepiscopacy to the point where the largest Orthodox country in the world has about half the Bishops that ancient Tunisia had in the 5th century. Furthermore, no singular parish has its own Bishop as it did in the past. My point in bringing this up is not to say this is bad or good, but to show that the monepiscopacy is in fact consistent with Jewish ecclesiology, even if the proportion of Bishops per parish has continued to decrease, consistent with the decrease of local rulers after the destruction of the Temple.
The hierarchy, and its rationale, remains the same. The bishop reflects Christ, and his submission to a greater bishop reveals the same sacramental reality. Likewise, the priest submits to the bishop for the same reason, the deacon to the priest, and the laity to the deacon. What we have to realize is that the proportions can always swing back in the opposite direction, to the point where singular parishes may have several bishops. This would, in fact, not contradict the hierarchy as Saint Ignatius presents it, especially because ancient parishes with multiple Bishops would have still been accountable to Apostles or Archbishops like Sosthenes, Timothy, or Titus.

Concluding thoughts.

From the preceding, we can see that the strongest parallel between Judaism and Christianity is the hierarchy. Neither sect seems to have had completely autonomous local places of worship; rather, both had a hierarchical organization where local bodies of believers were accountable to regional and then central authorities. Interestingly, a Papal Supremacist view, with a single Bishop in a central location filling this role alone, does not suffice to replace the work of the Sanhedrin in Jerusalem. The Jerusalem Nasi/High Priest could not operate unilaterally as Pope Victor I did. After all, the High Priest and the Sadducee party had Paul arrested but not excommunicated, as the latter required at minimum a majority vote. This is something that the Pharisee party did not consent to. Only the conciliar view of the Orthodox within the monepiscopal system, in fact, preserves the hierarchy while at the same time preserving the system of consensus that governed Judaism.
The history of journalism, inclusively defined, encompasses the history of news and news media, including, among other things, the history of print, broadcast, and computer technology; of news work, news routines, and news workers; and of news organizations, including newspapers and other media outlets as well as wire services and feature syndicates. Defined more narrowly, the history of journalism refers to the emergence of a set of values and explanations that discipline, regulate, and justify news practices. Journalisms are socially constructed, and appear in different guises at different times in different national cultures in reference to different media. The history of journalism examines their construction in national and international settings, as well as anticipating their future prospects.

Journalism As Historical Construct

Commentators on political discourse began to apply the term "journalism" to some of the content of newspapers in the early nineteenth century. By the end of the nineteenth century, journalism came to refer to a specific kind of reportage in the various national cultures of the modern west. A form of the word "journalist" first appears describing the highly opinionated and politicized newspaper writers of post-revolutionary France. The word then appeared in English news reports but continued to refer to French essayists. It was subsequently applied to English and US essayists, but continued to refer to opinion writing until the second half of the century. Then it began to be applied to news-gathering practices, which were becoming increasingly routinized. This capsule history of the word underscores the fact that journalisms tend to exist within national systems, even though the history of journalism is really an international one. Printed newspapers first appeared in Europe at the beginning of the seventeenth century. They were a late feature of the so-called printing revolution, the long set of transformations that scholars like Elizabeth Eisenstein (1980) argue the invention of the printing press inaugurated and intensified. Among other things, these scholars assert that the ability of the printing press to mass produce ephemera helped standardize vernacular languages and create national publics. Benedict Anderson (1991) argues that a particular variant, print capitalism, was essential to the rise of modern nationalism. Early newspapers responded to religious and economic concerns. Most governments, anxious to keep public affairs out of the hands of ordinary people, created systems of censorship and tried to suppress political news. But practicalities made this difficult. Recurring periods of intra-elite conflict produced breakdowns in censorship systems, as when, for instance, the English Printing Act lapsed in 1695 because dueling parties in Parliament were unable to reach agreement on appropriate measures (Siebert 1952). The division of Europe into many warring jurisdictions meant that neighboring countries could host publications: the first English-language newspapers appeared in Amsterdam, and Swiss publishers readily served readers in France (Darnton 1979). As censorship systems failed, the various nations of western Europe and North America developed what Jürgen Habermas (1989) has described as a bourgeois public sphere.
In this formulation, such a public sphere appeared as a space between civil society and the state and worked both as a buffer zone, preventing state interference in private life, and as a steering mechanism, allowing citizens to deliberate in an uncoerced manner to form public opinion, which would then guide the governing process. Scholars disagree on most aspects of Habermas's formulation, but it seems clear that by the beginning of the nineteenth century in the modern west the major function of the press was its involvement in governance. The newspaper became a key part of a system for representing public opinion. At first, opinion pieces in newspapers maintained a careful decorum, including the use of pseudonyms and the maintenance of an appearance of personal disinterest, meant to give the impression of rational deliberation. The most famous example of this was the publication of the Federalist Papers, co-authored by James Madison, Alexander Hamilton, and John Jay but published pseudonymously over the name of Publius, arguing in favor of the ratification of the US Constitution in an apparently neutral and dispassionate manner. This decorum was always somewhat deceptive, and in some political situations, including the factional politics of the early United States, wore thin quickly. Shortly into the nineteenth century, a frankly partisan model of newspaper politics prevailed in western Europe and North America. It was this style of newspapering that occasioned the first use of the word journalism. Although it is hard to generalize across national media systems, journalism seems to have a shared history in gross terms in the modern west. In most national histories, there was first a transition from opinion to factual observation, followed by a split between correspondence and reporting, followed by the emergence of a professional journalism centered on objective expert reportage. And, in most countries, this history was complicated by the emergence of pictorial journalism, followed by broadcasting. What distinguishes these national histories, however, is the different experiences with censorship and other forms of media regulation, as well as the differing states of political development. In the late nineteenth and the twentieth centuries, the west exported its models of journalism to other regions of the world. The shift in the meaning of journalism from opinion to fact came about in the context of the emergence of a mass daily press. This shift centered on the British Isles and North America. The United States was an early leader in newspaper circulation because it avoided censorship and taxes on knowledge, as well as because of positive Federal postal policies and a national commitment to create a media system that would allow for the representation of a dispersed and diverse citizenry as a unified public. By the 1820s, the United States had a partisan press system with a high popular readership. In the 1830s, cheap daily newspapers, or penny papers, began to circulate in urban centers; at the same time, the content of all newspapers shifted toward the sorts of event-oriented news that one associates with the modern concept of journalism. In Great Britain the growth of a popular press was delayed by the various stamp taxes on newspapers, which were finally repealed in 1851. The adoption of new production and transmission technologies furthered the growth of news audiences.
Beginning as early as the 1810s but taking off in the 1830s, printing presses adopted first steam power, then rotary cylinder plates, followed by stereotyping, and finally linotype typesetting in the 1880s. And, beginning in the late 1840s, the telegraph enhanced the commoditization of news and the growth of wire services and press agencies. Jean Chalaby (2001) has argued that it was only at the moment of industrialization that the figure of the journalist emerged in something like its modern form. In earlier newspaper systems, news gathering had been done by correspondents, a term for letter-writers who dispatched reports from distant places. Correspondents were often amateurs, sometimes paid, sometimes under contract to a single news medium but often also contributing to many. Correspondents’ reports had a personal voice to them, though they were often written over a pseudonym or a set of initials. Readers expected that a correspondent’s observations would be inflected by strongly expressed attitudes. As newspapers became more commoditized – that is, as they began to think of themselves and their content more as items sold to consumers than as interventions in public life – they began to hire reporters (Baldasty 1991). Reporters were meant to faithfully record facts: to transcribe speeches, to present minutes of meetings, to compile shipping lists and current prices, to relate police court proceedings. Early penny papers, for instance, often emphasized crime news. In a typical late-nineteenth century newspaper, reporters worked for city editors whose aim was to make the flow of news copy rational and predictable – and often sensational. In most western countries, the late nineteenth century saw controversies over yellow journalism. Journalism historians often trace this term to Richard Felton Outcault’s cartoon strip “The Yellow Kid,” which appeared in both Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal, the two most famous of the US yellow dailies. A more international pedigree for the term may reside in the quick discoloration of the cheap pulp newsprint these papers were printed on, or might refer even further back to the color of the paper bindings of earlier sensational cheap books devoted to crime and adventure. In some languages, crime novels are still referred to as yellows. Industrialized news practices came into conflict with the established norms and values for public communication. Industrializing newspapers adopted a set of norms that distinguished between their high-value or sacred mission and their more profane work of earning success in a competitive marketplace. In most western nations, the sacred mission of the press was to create an informed public that could contribute to its own governance in a constitutional state – usually but not always democratic in some measure. But the profane work of the news seemed to produce a misinformed public whose tastes and intellect had been affected by sensationalism. And the competitive marketplace seemed to favor greedy and increasingly monopolistic industrialists with a political agenda of their own. The modern notion of journalism mediates between the sacred and profane work of the press and applies to an occupational structure that merges the work of the correspondent and the reporter. In Anglo-American history, the key term in this journalism has been objectivity. 
Objective journalists are expert professionals, who are always aware of their own subjectivity – like the correspondent – but police it, separating their own values from impersonal reports. Michael Schudson (1978) has described this form of objectivity as arising from a dialectic of naïve empiricism and radical subjectivism. A similar dialectic is evident in the rise of pictorial journalism. Raw material for illustrations came from a variety of empiricist techniques, including photography and sketch artistry, which seemed to promise fidelity to an objective reality. Master engravers then turned these observations into lucid and often interpretive visual reports. A tribe of indicators can trace the rise of professional journalism in the west. Canons of journalism ethics began to appear at the beginning of the twentieth century, along with professional associations and schools of journalism. The forms of professional journalism – the byline, the inverted pyramid form and summary lead (which counterintuitively tells stories from end to beginning rather than from beginning to end), and the habit of balancing and sourcing – became familiar around the same time. The excesses of World War I intensified the drive for professionalization inside and outside of the news industry. Professionalization coincided with the invention of journalism history as a scholarly activity. There had been a tradition of anecdotal and autobiographical histories of printers and other newspaper entrepreneurs since the beginning of the nineteenth century, but professional journalism education called for histories that emphasized the progressive development of standards, tied in with a genealogy of press autonomy from public and private power. By the 1920s an alternative model of professionalism had appeared, first in the Soviet Union, then in other anti-capitalist states. In the most important cases – the Soviet Union and the People’s Republic of China – the creation of a statist professional journalism followed a long history of attempts by bourgeois journalism to overcome government censorship. When revolutions inspired by Marxist-Leninist philosophies overthrew authoritarian states in Russia and China, they adopted some notions of bourgeois professionalism, wedded them to vanguardism, and institutionalized the resulting construct in state monopoly institutions. Among the adopted elements were a claim of independence for journalists, a rhetorical commitment to serving ordinary people, and an adversarial mission mutated into the concept of self-criticism. A bright line separated journalists, trained and certified as professionals, from ordinary citizens. Or at least this was the ideology or theory of communist journalism. In practice, the journalism of communist societies rarely achieved the professional autonomy called for in theory. At best, such journalism achieved some standing as an agency of independent but loyal criticism. At worst, it functioned as a fully dependent propaganda wing of the party and state, a role in no way justified by Marxist philosophy. Such practice led critics to compare communist journalism with fascist totalitarian media systems in Hitler’s Germany or Mussolini’s Italy, regimes that dictated advancing the interests of the state as the role for journalism, with only incidental regard to accuracy and completeness. In the capitalist countries and elsewhere in the world, other alternative forms of journalism had appeared. 
Usually alternative journalisms were tied to a group within the larger society, whether based on some aspect of identity (gender, race, ethnicity, class) or on the advocacy of a particular position. General interest news media tended to look down upon these alternative journalisms in the same way that they looked down upon sensationalism – as nonprofessional and potentially pernicious. Globally, the twentieth century saw the rise of broadcast journalism. In some countries, particularly in North America, broadcast media, although state-licensed, were privately owned; in others, there were monopolistic national broadcast authorities. In either case, broadcasting seemed to intensify the process of professionalization. The paradigm case might be the BBC, with its high degree of independence and autonomy. But in other cases (such as Italy) state broadcasting was allotted along party lines. By the end of World War II, the modern notion of journalism had taken root in most of the world. The United Nations Universal Declaration of Human Rights and the report of the MacBride Commission enshrined freedom of the press as an international value, though these formulations were subject to varying interpretations. Many observers questioned the relationship between journalism and the supposed free market that had become a staple of western and especially US formulations. Forms of news considered distinctive to the Anglo-American tradition continued to spread in the late twentieth century. Investigative journalism spread to Latin America, for instance (Waisbord 2000). The retreat of state-supported broadcast authorities in Europe brought the introduction of more commercial television news programming. The collapse of the Soviet bloc sparked a wave of commercial media ventures in eastern Europe, often alongside a revitalization of partisan journalism. Meanwhile, within the west, the end of the twentieth century saw the erosion of what Dan Hallin (1994) has called the “high modernism of journalism.” The decline of the Cold War as a news frame, the rise of ethnic and racial diversity within and among countries, the feminist movement, and the renewed philosophical questioning of the value of objectivity undermined the credibility of journalism as an institution. The same trends occurred in the media environment itself, with the rise of the 24-hour television news service, of new so-called personal media like talk radio and the blogosphere, of the tabloid form and a hybrid journalism, especially in the Scandinavian countries, and of a new form of partisan media power associated with broadcast entrepreneurs like Silvio Berlusconi and Rupert Murdoch. With the erosion of the high modern moment came calls to rethink the role of the press as an institution within the governing process, on the one hand, and calls for a new citizen journalism or public journalism on the other. News practices are always in flux, and the journalisms that explain and govern them must therefore be continually reinvented. Journalisms come a beat after the news revolutions they regulate. - Anderson, B. (1991). Imagined communities: Reflections on the origin and spread of nationalism, rev. edn. London: Verso. - Baldasty, G. J. (1991). The commercialization of news in the nineteenth century. Madison: University of Wisconsin Press. - Barnhurst, K. G., & Nerone, J. (2001). The form of news: A history. New York: Guilford. - Chalaby, J. (2001). The invention of journalism. London: Palgrave Macmillan. - Darnton, R. (1979). 
The business of enlightenment: The publishing history of the "Encyclopédie," 1775–1800. Cambridge, MA: Harvard University Press. - Eisenstein, E. L. (1980). The printing press as an agent of change, rev. edn., 2 vols. Cambridge: Cambridge University Press. - Habermas, J. (1989). Structural transformation of the public sphere. Cambridge, MA: MIT Press. - Hallin, D. (1994). We keep America on top of the world: Television journalism and the public sphere. New York: Routledge. - Hallin, D., & Mancini, P. (2004). Comparing media systems. Cambridge: Cambridge University Press. - Rantanen, T. (2002). The global and the national: Media and communications in post-communist Russia. London: Rowman and Littlefield. - Schudson, M. (1978). Discovering the news: A social history of the American newspaper. New York: Basic Books. - Siebert, F. S. (1952). Freedom of the press in England, 1476–1776. Urbana, IL: University of Illinois Press. - Waisbord, S. (2000). Watchdog journalism in South America. New York: Columbia University Press.
How Important Are Production Networks to the U.S. Economy?
- As manufacturing grows more sophisticated, industries become more interconnected through production networks.
- By analyzing input-output data, economists can measure the independence and interdependence of U.S. industries in the production of their goods and services.
- Studying production networks can reveal how industries' output growth and job growth are increasingly correlated.

The structure of modern industrial production is highly complicated. As the manufacturing process becomes more sophisticated, firms and sectors are increasingly interconnected with each other through production networks. As a result of these production networks, an economic downturn in one industry (referred to as an industry-specific shock) will be felt by all its industry partners. New research on production networks suggests industry-specific shocks actually account for at least half of the volatility in aggregate growth. See Atalay. An industry's final output can be sold directly to consumers or passed down to another industry as an intermediate input for more production. One can view the production network as a river flowing from raw materials down to the final consumer. When an industry is closer to final consumers, we call it a downstream industry; when its production is closer to raw materials, it's an upstream industry. Downstream industries are also referred to as buyer industries because they tend to buy more products from a broad swath of upstream industries, while upstream industries are referred to as supplier industries because they mainly supply materials to other industries. Industries can be both downstream and upstream relative to one another. For example, automobile production is a downstream industry for steel manufacturers but an upstream industry for a law firm that purchases vehicles so that its lawyers can meet with clients. This article aims to outline the production network structure of the U.S. economy by identifying the key industries that are central suppliers and buyers and by exploring the importance of the automotive industry to the U.S. economy during the 2007-09 Great Recession.

Models of Production Networks

Figure 1 displays three simplified theoretical models of production networks that illustrate the importance of one industry to the overall network. In the first case, all industries operate independently. They produce output with workers and physical capital but do not use inputs from other industries or sell their output to other industries as an input. All output goes directly to final household consumption. The second case is a network that resembles an O-ring. All industries sell to a single downstream industry and purchase from a single upstream industry. In contrast to the first case, if there is a disruption to industry 1's manufacturing process, it would affect not only the downstream buyer (industry 2) but also the supplier (industry 5). This case also illustrates how industries can be both upstream and downstream relative to other industries. The third case is a star-type network. In this case, there is a central hub (industry 3), and the others are peripheral industries. Industry 3 could play a prime role as a buyer (downstream industry) in the economy (e.g., the automobile industry). The auto industry takes various products from other industries, including glass, electronic equipment and steel, then assembles them together to produce a car.
Industry 3 could also play a role as a central supplier (upstream industry), such as the oil industry. In either case, if a negative shock occurs to industry 3, it would be transmitted to the rest of the economy. In contrast, a shock to industry 1 would have a contained impact.

U.S. Input-Output Linkages

Input-output tables produced by the Bureau of Economic Analysis (BEA) allow us to study the actual production network of the U.S. economy. Input-output tables quantify how much each industry buys from other industries. They are used by policymakers, economists and business owners to understand the structure of the U.S. economy. We can construct two measures from the input-output tables to learn about the predominant upstream and downstream industries in the U.S. production network. One measure is the material cost share, which is found by taking the material costs paid to an upstream industry as a ratio of the gross output of the purchasing industry. The material cost share helps identify which industries are important suppliers, or upstream industries, to several other industries. For example, for each $100 of output generated by the petroleum refining industry, around $50 is from a commodity purchase from the oil and gas extraction industry. In contrast, less than $5 flows from the petroleum refining industry to the oil and gas extraction industry for each $100 created by the extraction industry. Analyzing the material cost shares for the U.S. economy reveals that some industries appear to be important suppliers, or upstream industries, for others. For example, many industries rely on the "other services" industry; this industry includes legal services, computer systems design and related services, management of companies and enterprises, food services and drinking places, etc. Other noteworthy upstream or supplier industries are wholesale and retail, F.I.R.E. (finance, insurance and real estate), primary metals, and fabricated metals. Another measure we can construct from the input-output tables is the output share. This measure takes the output purchased by a downstream industry from an upstream industry and divides it by the upstream industry's total output. The output share gives information on which industries are predominant purchasers, or downstream industries. For example, if industry A produces $100 and industry B purchases $50 from industry A, the output share measure is 0.5 from A to B. In the case of the earlier example, the output share from the oil and gas extraction industry to the petroleum refining industry is 0.82, meaning that the petroleum refining industry purchases $82 of each $100 produced by the oil and gas extraction industry. Fewer industries stand out as predominant buyers than those that stand out as predominant suppliers. The industries that stand out as large buyers are construction, motor vehicles (auto industry), other services and government. Of course, here we have ignored the main buyer of the economy—households. We do not consider them within the input-output framework since they mainly provide labor to the economy. The measures constructed from the input-output tables help us draw a few conclusions about U.S. production networks. First, industries tend to rely heavily on outputs from firms within the same industry. Second, there are a few dominant upstream (supplier) industries that stand out, while the downstream output share (purchasing) appears to be more evenly spread across many industries.
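To make these two measures concrete, here is a minimal sketch (not from the article) that computes them for a toy two-industry table. The dollar figures simply echo the rounded oil-and-gas/petroleum-refining example above; the implied gross output of $164 for refining is an assumption chosen so that both ratios line up, and real BEA use tables cover dozens of industries.

```python
# Toy input-output table: intermediate_use[supplier][buyer] = value of the
# supplier's output purchased by the buyer as an intermediate input.
intermediate_use = {
    "oil_gas_extraction": {"oil_gas_extraction": 0, "petroleum_refining": 82},
    "petroleum_refining": {"oil_gas_extraction": 5, "petroleum_refining": 0},
}
gross_output = {"oil_gas_extraction": 100, "petroleum_refining": 164}  # assumed for illustration


def material_cost_share(supplier, buyer):
    """Purchases from `supplier` as a share of the buyer's gross output."""
    return intermediate_use[supplier][buyer] / gross_output[buyer]


def output_share(supplier, buyer):
    """Purchases by `buyer` as a share of the supplier's gross output."""
    return intermediate_use[supplier][buyer] / gross_output[supplier]


print(material_cost_share("oil_gas_extraction", "petroleum_refining"))  # 0.5
print(output_share("oil_gas_extraction", "petroleum_refining"))         # 0.82
```

Extending this to the full BEA table is just a matter of looping the same two ratios over every supplier-buyer pair.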
Measures of Interdependence and Independence The previous section focused on industry-to-industry flow, looking at the entire web of the production network. In this section, we quantify an industry’s degree of integration with the rest of the economy by using two aggregated summary measures. The first measure is called “in-degree” and is calculated as the ratio of an industry’s total material costs over total final output (or total revenue). A high in-degree value implies that the industry is more reliant on using intermediate inputs for production. In the left panel of Figure 2, we plot a histogram of in-degrees for the U.S. production network, which is divided into 71 industries. The distribution of in-degrees is a bell curve, centered on the mean of 0.44. It implies that, on average, 44 percent of an industry’s revenue is used to pay for the inputs purchased from upstream industries. The range of in-degree distribution is small. The industry at the 75th percentile has an in-degree value 1.7 times larger than the industry at the 25th percentile. The apparel, leather and allied products industry has the highest in-degree value, at 0.75, followed by the motor vehicles, bodies and trailers, and parts industry. Next, we evaluate an industry’s importance as an intermediate input supplier for the whole economy by using a measure called “out-degree.” An industry’s out-degree is calculated by determining the output share of downstream purchasers’ material inputs that come from that industry, and then taking a sum over all the downstream industries. For example, if industry A sells its outputs only to three other industries and these three industries use only the materials from industry A, then industry A’s out-degree value is 3. A higher out-degree means that an industry has many downstream purchasers that are highly dependent on material inputs from it. We plot the distribution of out-degrees in the right panel of Figure 2. We see that many industries’ out-degree values are centered on 1, with a few outliers in the right tail of distribution. The outliers are mostly service-based industries, like professional, scientific and technical services, real estate, and management. The outliers are not surprising for out-degrees, as all industries have to employ certain services to operate. For example, every industry requires lawyers to assist with the legality of business operations. The range of the out-degree distribution is much wider than that of the in-degrees. The ratio of the 75th percentile industry to 25th percentile industry is 6.4. These numbers suggest that the distribution of upstream suppliers (out-degrees) is more dispersed than the distribution of downstream buyers (in-degrees). This section tells us that a lot of industries in the U.S. rely on intermediate goods for production; however, on the supply side, there are several industries that are smaller suppliers and a few industries that are dominant suppliers. Comovement of Linked Industries So far, we have looked at how industries are connected to the supply chain from a stationary perspective. Another useful perspective is to understand how the degree of connection in the input-output network determines the dynamics of industry output and employment. One would expect that if two industries are closely connected in the input-output network, there should be a strong comovement in the industries’ output and employment. We examine two measures for industry output—gross output and value added. 
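Before turning to those output measures, the toy table from the previous sketch can also show how the in-degree and out-degree summaries are computed, following the definitions just given (the numbers remain invented and purely illustrative):

```python
# Continuation of the toy example above (invented numbers, three industries).
material_inputs = use.sum(axis=0)              # each buyer's total intermediate purchases

# In-degree: total material costs as a share of the industry's own output.
in_degree = material_inputs / gross_output

# Out-degree: for each supplier, sum over buyers of the share of that buyer's
# material inputs it provides (the "three buyers using only A" example sums to 3).
out_degree = (use / material_inputs).sum(axis=1)

for name, d_in, d_out in zip(industries, in_degree, out_degree):
    print(f"{name:22s} in-degree {d_in:.2f}   out-degree {d_out:.2f}")
```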
The BEA defines gross output of an industry as the market value of that industry’s production in terms of goods and services. The BEA’s definition of gross output can be found at https://www.bea.gov/help/faq/183. Value added is the way the BEA measures gross domestic product (GDP). It’s a measure of the amount of output from an industry that could be attributed to only the labor and physical capital used to process the intermediate inputs during production. The value added of an industry is also the contribution of a private industry to overall GDP. A simple way to think of value added is that it’s the difference between an industry’s gross output and the cost of its intermediate inputs. The BEA’s definition of industry value added can be found at https://www.bea.gov/help/faq/184. As an additional measure of industry dynamics, we look at industry payroll employment growth as well. For more details on payroll employment and its survey source (Current Employment Statistics survey), see www.bls.gov/web/empsit/cesfaq.htm. To measure the closeness of two industries, we calculate the share of intermediate materials by taking the amount of materials exchanged between two industries and then dividing it by the total output from the two industries. This calculation is essentially an output-weighted average of the material shares between the two industries. For example, the material share from oil and gas extraction to petroleum refining is 0.5, and the reverse from petroleum refining to oil and gas extraction is 0.05. After weighting the material shares by each industry’s output, the closeness measure is then 0.275. Then we organize each industry pair into quintiles based on the intermediate material share. Next, we find the correlation of each industry pair’s gross output, value added and employment growth over time, and finally take the average of the correlation coefficient for each group of industry pairs. The resulting data are presented in Figure 3. Essentially, we have five industry groups that are organized from least connected (first quintile) to most connected (fifth quintile) and a correlation coefficient for each industry group that shows, on average, how the industries in each group move together. From the graphs, we see that the correlation between industries’ output and employment growth increases as the linkage between industries becomes stronger. This pattern follows our observation that economic activity of one industry likely passes through to its related industries. Case Study: The Auto Industry One of the key industries in the U.S. economy is the motor vehicles, bodies and trailers, and parts manufacturing industry, which we’ll refer to as the auto industry. The importance of the auto industry to the U.S. economy was brought to the forefront of policy discussions during the 2007-09 recession. A combination of high fuel costs, a product concentration on fuel-inefficient SUVs and the onset of a recession left U.S. automakers Chrysler and General Motors (GM) asking the government for help in 2008 as the two companies faced the prospect of bankruptcy. The third U.S. automobile manufacturer, Ford, did not need a bailout but still advocated for the government to bail out its competitors. The following quote is from the congressional testimony of Ford’s then-CEO Alan Mulally in 2008: “Should one of the other domestic companies declare bankruptcy, the effect on Ford’s production operations would be felt within days—if not hours. 
Suppliers could not get financing and would stop shipments to customers.” (See Carvalho, p. 24.) Mulally was referring to the highly interconnected nature of the auto industry. If Chrysler or GM were to go out of business, the upstream industries Ford relies on for inputs would also fail, leading to a complete disruption of Ford’s production. Terms like “too big to fail” surfaced during the 2007-09 crisis to describe the phenomenon of these large, interconnected firms. A firm that is too big to fail is one that is a key hub in the U.S. production network; its failure would be felt throughout the economy. We can quantify the size of the auto industry in the U.S. production network by using the metrics we’ve already explored. The industry has one of the highest in-degree values, meaning it relies heavily on intermediate inputs for production. With the exception of apparel and leather production, the auto industry has the highest in-degree value for the U.S. economy, with 75 percent of output going to pay for intermediate materials. The auto industry purchases a large amount of inputs from the other services, wholesale and retail, metals manufacturing, and nonelectrical machinery manufacturing industries. The auto industry doesn’t have a large out-degree relative to the rest of the economy, likely because most of the industry’s finished products go directly to consumers. However, there are industries in the manufacturing sector—such as metals, textiles, and rubber/plastics—that purchase inputs from the auto industry. These downstream industries would also be affected by a negative shock to the auto industry. Using the 2007-09 recession as an example of a negative shock, Figure 4 shows the interconnected nature of the production network surrounding the auto industry. While the 2007-09 recession was not necessarily a shock to the auto industry alone, the auto industry was one of the hardest hit by the recession. A deadly combination of decreased consumer demand for vehicles and tighter lending practices that made it hard for consumers to get financing hurt automakers worldwide. High gas prices leading into the recession had already decreased demand for larger vehicles, making the downturn especially severe for U.S. automakers. If the 2007-09 recession had been felt equally across the economy, all industries would have moved similarly. Instead, comparing output patterns for the auto industry with those of industries closely and not closely tied to it, we see that not all industries were affected to the same degree. The black lines in Figure 4 show the year-over-year growth rate of gross output, value added and employment for the auto industry from 2003 to 2013. The gray bar highlights the time period when the U.S. economy was in recession. The blue and orange lines show the average growth rates for the auto industry’s 10 most and least related industries based on material cost share. In each graph, the black line drops during the 2007-09 recession and rebounds immediately following it. The blue line, which represents the auto industry’s top 10 suppliers, also shows a sharp drop followed by a resurgence in the gross output and employment growth graphs, and the same pattern—but slightly softer—in the value added graph. The orange line, which represents the growth rates of the 10 industries least related to the auto industry, does not share the same degree of comovement as the black and blue lines. These graphs highlight the importance of the network structure of the U.S. economy.
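A compact sketch of the comovement exercise, again on the toy table, may help fix the mechanics. The growth series below are random draws, so the printed correlations illustrate the computation rather than the article's finding; in the article's version, the pairs from the 71-industry tables are grouped into quintiles of closeness and the correlations are averaged within each quintile.

```python
# Hedged sketch of the comovement check, reusing the toy `use`, `gross_output`
# and `industries` objects from the earlier snippets.  Growth rates are random.
import numpy as np

def closeness(i, j):
    # Output-weighted average of the two bilateral material cost shares:
    # (use[i, j] + use[j, i]) / (gross_output[i] + gross_output[j]).
    # With equal gross outputs this reduces to the simple average, which is how
    # the article's (0.50 + 0.05) / 2 = 0.275 refining example works out.
    return (use[i, j] + use[j, i]) / (gross_output[i] + gross_output[j])

rng = np.random.default_rng(0)
growth = rng.normal(0.02, 0.03, size=(len(industries), 10))   # fake yearly growth rates

pairs = sorted(
    ((i, j) for i in range(len(industries)) for j in range(i + 1, len(industries))),
    key=lambda p: closeness(*p),
)   # least connected pair first, most connected pair last

for i, j in pairs:
    corr = np.corrcoef(growth[i], growth[j])[0, 1]
    print(f"{industries[i]} / {industries[j]}: "
          f"closeness {closeness(i, j):.3f}, growth correlation {corr:+.2f}")
```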
The auto industry is a central hub for many upstream suppliers, and any shock to the auto industry will be felt far beyond the industry itself. However, industries that are relatively isolated from the auto industry won’t experience as much turmoil. In this article, we explored U.S. production networks. Production networks remain a complex subject, and quantifying the impact of one industry on the whole economy will require further theoretical and empirical work. We showed that the U.S. economy can be characterized as a centralized economy, in which a number of key industries buy and supply most of the materials in the economy. While many industries are large buyers, there are fewer large suppliers. Some of the central supplier and buyer industries, like wholesale trade, also tend to be large in terms of economic output. These linkages are important for understanding industry dynamics. As industries become increasingly dependent on each other, their output growth and employment growth become increasingly correlated. This dependency holds for both input and output relationships.
References
Atalay, Enghin. How Important Are Sectoral Shocks? American Economic Journal: Macroeconomics, October 2017, Vol. 9, No. 4, pp. 254-280. See https://doi.org/10.1257/mac.20160353.
Carvalho, Vasco M. From Micro to Macro via Production Networks. Journal of Economic Perspectives, Fall 2014, Vol. 28, No. 4, pp. 23-48. See https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.28.4.23.
While most of us have found it challenging to practice physical distancing for just a few months during the COVID-19 outbreak, Jessie was no stranger to isolation. The female Indo-Pacific bottlenose dolphin spent roughly 38 years inside a concrete tank, separated from her parents and siblings, before passing away in 2016. Jessie was captured in 1978 when she was about six years old. To abduct the dolphin, poachers chased her through the waters of Penghu, an archipelago of roughly 90 islands off the southwestern coast of Taiwan, where they set up a series of traps and enclosures. She was eventually transported to Ocean Park, a marine-life theme park on the south side of Hong Kong Island, where she would be trained to do circus tricks, give rides and interact with visitors on a regular basis. Jessie lived the rest of her life in captivity until she died at the age of 44, following severe renal dysfunction and an abdominal infection. According to Change For Animals Foundation, a UK-based organisation dedicated to improving the lives of animals worldwide, there are at least 2,360 cetaceans (aquatic mammals) in captivity worldwide, ranging from dolphins to belugas and orcas. The group estimates that at least 5,000 have died in captivity since the 1950s. With the welfare of these wild animals in mind, animal advocate Rachel Carbary launched Empty the Tanks Worldwide in 2013. The campaign coordinates global anti-captivity protests on the second Saturday in May each year. During its inaugural event, Empty the Tanks protests took place at 21 locations with captive cetaceans across 12 countries. By 2019, the event had expanded to include participants at over 71 locations across 22 countries. As part of the Empty the Tanks campaign every year, protesters in Hong Kong gather at the entrance of Ocean Park, where they hold up a 3-metre-long model dolphin splattered with red paint, as well as large banners that read: “Release the Dolphins” and “No Performance; No Captivity”. “During protests, we have often been told by visitors of Ocean Park that they would only focus on the non-animal facilities [like roller coasters and rides] and that they don’t plan to visit the dolphin show and other animal exhibitions,” says Wong. As of June 2019, the local amusement park was home to 63 marine mammals, including Indo-Pacific bottlenose dolphins, California sea lions and spotted seals, as well as over 100 sharks and rays. In 2016, the park released a statement saying that captive animals like Jessie play an important role in “conservation and educational messaging.” Some attractions provide opportunities for guests to engage directly with dolphins, with the aim of instilling a sense of appreciation for nature and wildlife. Dolphins are also included in the park’s biosonar (echolocation) research as well as its artificial breeding programme. An Ocean Park spokesperson told Ariana via email that, as of 23 January 2020, there were a total of 22 dolphins at the park; 16 of them were born in captivity, while six were adopted from the wild and arrived at the park in either 1987 or 1997. The spokesperson declined to answer any questions about the survival rate of dolphins in the breeding programme. “Ocean Park conducts dolphin artificial insemination based on the needs including: to maintain biodiversity, to use frozen semen from those who have passed away [a] long time ago and to use frozen semen which is exchanged from oversea facilities, etc.
All these can help to maintain and to facilitate the medical, husbandry technique as well as scientific research developments,” wrote the spokesperson. Animal advocates and conservationists disagree with this position. Viena Mak, a committee member of the Hong Kong Dolphin Conservation Society, stresses that captivity is a form of abuse, one that harms not only the individual animal but also the entire marine ecosystem. “Wild dolphins are often hunted in big numbers, and the younger ones are always the targets because they are easier to train. Their removal will greatly impact the population of dolphins in the wild,” she says, referring to the notorious Taiji dolphin hunts in Japan, which allow fishermen to kill or catch more than 1,000 dolphins and whales each hunting season. For instance, during the six-month-long hunting season between early September 2019 and the end of February 2020, the animal welfare charity Dolphin Project estimated that 560 dolphins were slaughtered, while 180 were taken captive. According to Mak, many of these captive animals get sick or die due to harsh conditions in transit to theme parks around the region. Even if they manage to survive the journey, what awaits is no paradise. “Dolphins [and whales] are very sociable animals and they swim up to 100 miles a day in the wild. Once trapped in tanks, they are forced to live with other unfamiliar cetaceans in an extremely boring environment, which often results in depression and bullying,” says Mak. Several media outlets and whistleblowers have reported suspicious injuries and abnormal behaviour among the marine mammals at Ocean Park in recent years. In 2013, a video posted on YouTube showed Pinky, a then 14-year-old female Indo-Pacific bottlenose dolphin, slamming herself against a pool wall. A press release from the park explained that “breaching (jumping) is a play behaviour generally seen in both wild and captive dolphins. On rare occasions, the dolphin might land close to the wall or the edge of the pool.” However, Mak argues that it is more likely a self-destructive behaviour developed due to stress from performances, noises and confinement. On 8 February 2020, local reports emerged about a 14-year-old dolphin named Ginsan, who had suffered a broken jaw the previous year. Ocean Park disclosed the incident after being questioned by the media and claimed that the dolphin injured himself while chasing other dolphins in the tank. Mak, however, believes that the tank’s limited space is to blame, as it makes dolphins more vulnerable to injury. In relation to its treatment of dolphins, an Ocean Park spokesperson said: “The Park uses positive reinforcement when interacting with animals. Our animal care, husbandry and enrichment are all based on positive reinforcement and positive humane practices certified by American Humane Association (an organisation committed to ensuring the safety, welfare and well-being of animals).” “We are committed to providing the best care for all of the animals under our care from cradle to grave. With the increasing average life expectancy of its resident dolphins, the Park has specially arranged aged dolphins to engage in enrichment activities that provide psychological challenges and physical exercise, as well as foster a voluntary, stress-free experience for the dolphins during routine and other health monitoring procedures.
Furthermore, aged dolphins are not involved in conservation breeding at the Park.” At the beginning of 2020, Ocean Park published its Strategic Repositioning Plan, which mentioned that “Ocean Park will steer away from conventional animal shows and will focus its animal exhibits and displays on environmental protection, marine conservation and education.” Wong welcomes the decision but argues it’s not enough. He has publicly urged the theme park to cancel all exhibitions and interactive activities that include captive aquatic mammals, to ensure the animals are “free from exploitation.” Mak agrees. “The best thing the resort can do is let the dolphins and other captive animals retire,” she says, suggesting that Ocean Park relocate the animals to a permanent sanctuary that replicates their natural habitat. “It may take years to plan and to find the right place. They need to start doing it now; otherwise, the goal of emptying the tanks will be even harder to reach.” Beyond Hong Kong, animal advocates around the world are increasingly worried about the expanding marine park industry in mainland China. According to a 2019 report by China Cetacean Alliance, a coalition of international animal protection and conservation organisations, there were at least 80 ocean-themed parks operating in China as of April 2019 – more than double the number of parks in 2015 – with another 27 under construction. The report also estimated that these parks house about 1,000 cetaceans, including at least 13 different species. Of these, bottlenose dolphins and beluga whales are the most common. Mak says that limited transparency within the industry in China has created greater challenges for conservationists, making it difficult to keep track of the number of captive cetaceans and their wellbeing. Moreover, a lack of legal protections for animal welfare also makes captive cetaceans more vulnerable. “Chinese laws and regulations lack a legal definition of ‘animal welfare,’” states the China Cetacean Alliance report. “Specific animal welfare concepts within the laws and regulations relevant to the ocean theme park industry are therefore lacking, and facilities flout the regulations regardless. It is clear that cetaceans in captivity in China remain without proper protection from conditions that cause suffering.” As an example, Mak recalls seeing dolphins in a tiny tank inside a shopping mall in Guangzhou in 2016. The tank was so small and shallow, she says, that the dolphins could not swim vertically in the water without exposing part of their torso. Mak says that some parks with captive dolphins or whales in China maintain that first-hand interaction with these animals is important for education and conservation. She points to Chimelong International Ocean Tourist Resort in Zhuhai, a city in southern Guangdong province, as an example. The theme park houses a wide variety of marine animals such as orcas, belugas and whale sharks, and frequently organises family tours and learning programmes for children, in order to provide the participants with “comprehensive knowledge about animal conservation.” Ariana reached out for comment but did not hear back. “They are simply sending the wrong message – that it’s okay to deprive these animals of their freedom just to satisfy human needs – to the younger generation,” Mak stresses.
“Besides, these captive animals can’t possibly educate the visitors about nature, because they have lost their natural instincts and abilities due to their captivity.”
Get Involved: #SelfiesForCetaceans
Due to concerns surrounding COVID-19, Empty the Tanks Worldwide 2020 invites everyone to advocate on behalf of captive aquatic mammals by posting pictures and videos to social media using the hashtag #SelfiesForCetaceans on 9 May. Keep it simple or get creative with posters, stuffed animals or costumes. Photos will be shared on Empty the Tanks’ and the Dolphin Project’s social media channels to raise awareness about captive animals and support marine conservation. The three most creative submissions will receive a prize from the campaign.
History has more than once demonstrated what a great influence a single human mind can have on all humanity. When Henry Dunant, a young man from Switzerland, decided to organise the first group of volunteers to help wounded soldiers after the Battle of Solferino in Italy in 1859, he did not know he was laying the foundations of the world’s largest humanitarian organisation. The Red Cross was born in the heart of Swiss businessman Henry Dunant (1828–1910) on 24 June 1859, when he was an eyewitness to one of the bloodiest battles of the century, the Battle of Solferino in Italy. A meeting of the Geneva Society for Public Welfare decided to organise an international conference in Geneva that was attended by experts from 16 countries. The conference adopted the red cross on a white background (the inverse of the Swiss flag) as a distinguishing sign. Its purpose was to identify and protect those who would take care of wounded soldiers. The Red Cross as an institution had entered the world. The conclusions of the international conference recommended the establishment of national volunteer societies. These became known as the national red cross and red crescent societies. In 1986 the Red Cross adopted a new name, the International Red Cross and Red Crescent Movement, though it is sometimes still known as the International Red Cross. At present the worldwide International Red Cross and Red Crescent Movement includes 185 national societies, 200 million volunteers and 275,000 employees. In the Kingdom of Hungary prior to 1918, it operated through the Hungarian Red Cross Society. The Red Cross already existed in Žilina before 1886, when it helped townspeople whose houses had been damaged in a great fire. The Czechoslovak Red Cross was founded in 1919. Its founder was Dr. Alica Masaryková. It was headed by a general staff with a division in Slovakia based in Martin and Bratislava. Local organisations functioned mainly in towns. In 1928 the division comprised 127 branches in 80 districts, including Žilina. The Czechoslovak Red Cross had been present in Žilina since 1919. In that year a mission of the English Red Cross led by Lady Muriel Paget purchased the manor house in Bytčica and established a sanatorium for children called the Žilina Children’s Clinic, with an opening ceremony at 2 p.m. on 25 February 1921. Muriel Paget had met President Masaryk in Siberia during World War One and spent a week travelling with him, which sparked her interest in the Czechoslovaks. She came to visit the newly established Czechoslovak Republic in February 1919, and when she learned about the poor condition of the children, she started to help them. She organised not just collections of clothing in England but also sewing for Czechoslovak children. She also raised funds for her work from a variety of sources, and in March she returned to Slovakia with a mission. She established a convalescent home in Košice, a children’s home in Modra, food dispensaries for children in Žilina, Vrútky, Uzhhorod and Ružomberok, and three kitchens in Turzovka and the surrounding mountains. When there was an outbreak of typhus in Slovakia, she established an isolation hospital for infectious diseases in Turzovka, where a nurse in the English mission, M. Callum, became ill and died. In 1920 Muriel Paget concentrated her activities in the area of Čadca and Žilina, where she earned her place in local history by establishing the children’s hospital in Bytčica. The hospital was established in a former manor house purchased by the Czechoslovak Red Cross.
Its operating costs for its first year were covered mainly by donations, and it was managed for a year by the League of Red Cross Societies in Geneva. Children’s clinics were established in 16 locations with high levels of poverty in northern Slovakia (Žilina, Čadca, Turzovka, Makov, Veľká Bytča, Rajec, Považská Bystrica, Mariková, Fačkov, Dolný Kubín, Zázrivá, Námestovo, Tvrdošín, Spišská Nová Ves, Prešov and Kremnica) with the aim of advising mothers on how to look after infants and providing them with medicines, clothes, soap and care. In a year, the clinics treated 7,116 children and 1,032 mothers in 22,326 treatments and 12,616 medical visits. The activity of the children’s clinics was promoted through “Baby Week” celebrations, which Lady Paget organised all over Slovakia in August 1921. The Czechoslovak Red Cross took over the children’s clinics on 1 January 1922, noting the exemplary work of the English mission. In 1920 and 1921 Lady Paget was also busy in the Baltic countries, and she moved there in February 1922 after three years of work in Slovakia. At the time, one newspaper wrote: “Her departure was accompanied by warm expressions of thanks for her pioneering social work in Slovakia and she left behind a great deal of work and personal memories of an energetic organiser who found the meaning of life in working to help others.” In the Czech National Library in Prague there is a transcript of a letter from an unknown sender with the initials A. M. Č. giving a detailed description of the opening of the children’s hospital in Bytčica. It may interest you to read the full text: The invited guests, representatives of various voluntary organisations, are arriving by train from Košice, Lučenec and Bratislava. Their destination: Žilina. There are a lot of women in uniform; these are the ladies from Lady Muriel Paget’s mission. Our group from Martin (Mrs Štekláčová, Miss Fabriciusová, Dr. Kofránek and myself) are on our way too. With us is Rev. Ruppeldt, that devoted friend of that friend of Slovakia, Scotus Viator. All of us are heading for the train ready to set off for Rajec. We get into the carriage. We are the first of “our set” but only for a few minutes; soon a whole crowd comes after us; all familiar faces that we know from Red Cross work. We saw Miss Hudcová, who is so devoted to the people that when the typhus broke out last year, she visited Turzovka and the surrounding countryside and worked so hard that she ended up having to spend a few weeks in the hospital there. Then there was Mrs Lacková, on her way back from a Red Cross meeting in Prague, Dr. Kreitz, Dr. Švamberk and others. We hurried along, and we were soon in Bytčica. What did we find there? Gone was the Kubelíks’ palace, and in its place was a safe haven for Slovak children, who get such a poor upbringing in the hovels of our impoverished people. We were welcomed by an English nurse speaking in English, but we could all understand her. Those who didn’t understand the words from her mouth understood the language of her heart. And then it struck us that our quarrelsome, limelight-hogging politicians, constantly boosting their “ego”, would do better to dedicate themselves to such very serious work with people, work that would cultivate more love for the nation, raise more offspring to build the state, and enhance the general happiness. Oh no! There is no wish for that. Let somebody else do that! Who? How about Lady Muriel Paget, who has already shown that she has room in her heart for our children.
But to return to Bytčica. The front room soon, very soon, filled up with the arriving guests. We looked around for our Alica Masaryková, but she wasn’t there! She was deputised by Minister Procházka (editor’s note: Alica Masaryková did not come because she was with her father the president, who was then seriously ill with a temperature of 37.2°C). When everyone had arrived, we went upstairs to the reception room. The room seemed too small for everyone but “there can never be too much of a good thing” and we all squeezed in. We were quiet; the opening ceremony was starting. I was watching someone who was getting ready to speak. Who? I had never seen him before. It was the head of Trenčín county, Mr Bellai (Note – actually Dr. Kállai). He welcomed Minister Procházka, Lady Paget and all of us. We were pleasantly surprised that the county head cares so much for our offspring, that he loves what the Hungarians hated and is trying to preserve as many children as possible in Slovakia. When it was the minister’s turn to speak, he told us what his “medical” heart dictated to him. He also promised to save as many of our little ones as possible. He also spoke for Dr. Alice Masaryková, who was originally planning to open the hospital but could not come because she had to be by the side of her sick father, who is not just hers but a father to all of us. We want to him to live for many, many more years… Lady Paget’s speech was beautifully interpreted into Slovak by Mrs Paulínyčka. – But what more could Lady Paget really tell us? The hospital itself spoke for her. What we might not have felt when we saw the hospital or heard Lady Paget’s words, we felt when the nurses appeared carrying a 3-year-old boy with a bunch of lily-of-the-valley in his hand which he presented to the mother of the hospital. That moved many hearts. Lady P. was followed by the director general of the medical department of the League of Red Cross Societies, Dr C. E. A. Winslow, who had come from Geneva. He also spoke in English and he had our Rev. Ruppeldt as his “right hand” (Note: Ruppeldt was the Lutheran pastor in Žilina at this time and was able to speak English). What was this man like? He was a typical American, brought up to be a good brother to every nation of the world. In the end, Minister Procházka closed the speeches by going to the door and opening it – to the sight of pure snow-white walls. We went forward a few steps and saw a whole row of beds. The hospital currently only has capacity for 30 children. We only saw 4 children but another 15 had been released earlier. One of the saddest things we saw was a child dying of consumption. They looked like they were just 8-9 months old, but they were in reality already 3 years old. We all went through the whole hospital and talked about how important it is and we also discussed volunteer work and collections to save those who cannot save themselves. Who donates to the Red Cross just like that? Who? The rich are more careful with their money than the poor: Those with full bellies do not trust the starving… I must emphasise that the Red Cross will take over management of the hospital after a year. Until then, it is planned that Lady Paget and the League of Red Cross Societies will finance it. One wish I have is that Slovak women should be as willing to make sacrifices for their own nation as English women are for others. When will we be grown up enough for that? The Red Cross “drive” is coming up. How many of you and us will take part? 
How many thousands or millions of crowns will we collect? The children’s hospital established by the English mission in Bytčica in an extension of the manor house. Other written sources from 1920 inform us that the building for the children’s hospital was purchased by the Czechoslovak Red Cross and that the League of Red Cross Societies set it up through Lady Paget’s mission. At that time the building had no lighting, no connection to the sewers, water mains or other infrastructure, which was still in preparation. Proceedings were commenced with the Ministry of Health for the sale of the hospital to the state administration. After long negotiations, the Czechoslovak Red Cross definitively bought the manor house for 1,250,000 crowns in 1921 and the children’s ward of the State Hospital in Žilina was established there under the leadership of Dr Ivan Hálek in 1923. There was interest in setting up a headquarters for the Žilina branch of the Czechoslovak Red Cross in the town itself, where various services could usefully be provided. In December 1922 a social and health centre for children was established in the Žilina town school on what is now Zaymusova ulica. It had one consulting room and a waiting room. The staff were a paediatrician, Dr Gross, and a nurse – first Jana Faltýnová, then from 01 February to the end of October Jiřina Hajnová and from 01 November 1923 Božena Šturmová. They were assisted by three volunteers – Mrs Bacherová, Mrs Klimešová and Mrs Kubicová. In 1923 the Žilina branch of the Czechoslovak Red Cross had 415 members compared to the 62 members it had had at its foundation in 1919. The chairman of the branch was Andrej Bacher, the manager of the bank Slovenská banka in Žilina; his deputy was the mayor and businessman Štefan Tvrdý (who died in 1925), the manager was Fedor Ruppeldt, the secretary was Rudolf Franca, a teacher in the grammar school, the treasurer was businessman Jozef Gáal. The branch engaged in various activities including public education through lectures. The branch also organised a festival each spring in the town forest where there was music, a procession and a collection towards the future health centre. The Baby Week festival was combined with an exhibition and promoted awareness of good hygiene habits. The branch also organised nursing courses. It also distributed food and clothes to the poor. In December 1922 they set up an ambulance service with a motorised ambulance. Koloman Thuranský, who had completed a medical course, became the leader of the ambulance service. By 1924 the membership had increased to 659 persons, each of whom paid an annual membership of 2,000 crowns. The premises in the town school were only temporary and were not large enough to allow larger-scale activities. The branch therefore made plans to build two buildings on land donated by the town in the Závaží neighbourhood. According to the construction plans made by Karol Pawer and the instructions from Division headquarters in Bratislava, the plan was to begin by buildings a two-storey social health institute. This was not built, unfortunately. Since the branch had to leave its previous premises in the town school, they first built a one-storey building with a cellar in the rear part of the lot based on a design by Karol Pawer. The building was completed on 17 October 1924. Inside there were two consulting rooms, a treatment room and two waiting rooms. 
In the improved conditions after the construction of the new Red Cross building, the branch’s clinic could focus not only on the treatment of sick children but also on protecting mothers. They examined and treated mainly lung diseases and vision impairment amongst others and the clinic also had an inpatient section. From 1929 the clinic also treated sexually transmitted diseases. The Czechoslovak Red Cross branch built today’s two-storey building in 1927. Unfortunately, the building’s plans have not survived. Construction lasted from 25 March 1927 to 10 November 1927, when the occupancy certificate was issued. The house had a basement containing a caretaker’s flat. This building was also built by Karol Pawer’s construction firm. Another interesting piece of history related to the Red Cross is that the branch in Žilina used to run a cinema called BIO HUMANITAS in the theatre of the Catholic House from 1927. Although a licence was issued for showing films in this building in 1927, the branch had been showing films at other locations since 1925. The Catholic House and the National Theatre were built in 1925–1926 on what is now Hurbanova ulica by the Catholic Circle Association, which created the Cooperative for the Construction of the Catholic House for this purpose under the leadership of the rector of Žilina, Tomáš Ružička. The neoclassical building was based on a design by the Czech architect Stanislav Koníček, who worked in Žilina from 1923. The design already anticipated that the theatre with a stage and space for musicians could be used to show films. The theatre was able to seat 530 spectators. Seating was on folding seats and there was linoleum on the tiled floor. The theatre had its own heating from four American stoves. There was electric lighting powered from the public grid with batteries as a back-up. There were four exits from the theatre and there were stairs leading up to a gallery where there was gallery and balcony seating for 113 spectators. The cinema had separate projection booths and film storeroom. The cinema was managed by Emanuel Salaquarda and Aladár Šterk was the projectionist. In 1932 the cinema offered 282 seats for 6 crowns a ticket, 120 for 5 crowns, 80 for 4 crowns, 80 for 3 crowns and 80 for 2.50 crowns. This made a total of 642 seats. From 1932 onwards the cinema donated a flat rate of 1,000 crowns to charity every year. The cinema was renovated, and a sound system was purchased in 1934. At that time it had 6 employees. The cinema ended 1931 with a loss of 13,766 crowns and in 1932 there was a loss of 20,749 crowns. As a result of the financial losses, the local branch of the Czechoslovak Red Cross in Žilina asked to be excused from contributions for charitable purposes, but the cinema was placed in administration and subject to bailiff enforcement for its debts. The Žilina branch of the Slovak Red Cross is currently based in the building at Moyzesova 38. Its activities include the recruitment of new blood donors to expand the blood donor base, and the provision of first-aid demonstrations and training. It has accreditation for first-aid courses and a care course. Every year, it awards the Jánsky and Kňazovický medals for unpaid blood donors. An old people’s home – the St Lazarus Home – occupies the ground floor of the building. The Slovak Red Cross in Žilina provides humanitarian and crisis assistance to individuals and families during natural disasters and difficult situations in life. Source: Mgr. 
Peter Štanský and Milan Novák. It can be visited (exterior) during a guided tour of TIO Žilina. Position of the monument on the map: C4
Some would say hyphens are going the way of dinosaurs, disappearing in favor of compound words. But they are not extinct just yet, so knowing when to use one and when to skip it is important. The difference between “high-quality” and “high quality” is determined by the location of the noun that the phrase should modify. High-quality indicates a compound adjective where the word “high” modifies the word quality rather than the noun that follows. The only time you don’t need a hyphen is when a noun does not follow the phrase. Sometimes, removing the hyphen can cause confusion for readers in understanding what a compound word or phrase is modifying. Read on to learn more about hyphens, common words requiring hyphens, and what the phrases “high-quality” and “high quality” mean. Understanding Hyphens: Why and When You Should Use Them A hyphen is a punctuation mark that you can use to join words or parts of words together (source). We should not confuse this with a dash, though both look quite similar. You should only use a dash to separate full statements or thoughts, and you should add a space on both sides (source). Conversely, you should not separate a hyphen by a space on either side. You can think of it this way — the purpose of a dash is to separate ideas, while the purpose of a hyphen is to join ideas or words together. The most common and important reason for you to use a hyphen in your writing is to avoid confusion for your reader in understanding what an adjective or adjectival phrase should modify. Let’s take a look at an example: 1. I went to the car dealership to meet the antique car salesman. 2. I went to the car dealership to meet the antique-car salesman. Reading the above sentences, you may be thinking they are exactly the same. But, if you read the first sentence more closely, your reader may wonder whether the car is antique (meaning old), or perhaps it is the salesman himself who is antique. Using a hyphen in between antique and car shows that, rather than modifying salesman, the two words antique and car become a compound adjective, antique modifying the car rather than the salesman. This is one simple example of how using a hyphen, while seemingly insignificant, is important for clarity. There are other uses for hyphens as well, including for prefixes, particular parts of words like “multi” and “self,” and certain phrases, including GPA or “grade-point average.” When Words Require Hyphens There is a multitude of spelling nuances in English, and hyphens are just one example. The rules surrounding hyphens are certainly a bit complicated and can be down-right confusing. While they are not incredibly common to most words, and you’ll not see them all that often, hyphens are important where and when required. It is also true that some words that used to have a hyphen no longer do so, though. Some examples of words that you do not need to write with a hyphen include email (e-mail), living room (living-room), bus driver (bus-driver), or nowadays (now-a-days). Again, the central reason for using a hyphen is simply to avoid confusion for your reader. When confusion is no longer likely to occur — such as in words that have become increasingly common, like “email” — a hyphen is unnecessary. Categories and Examples: Proper Spelling with Hyphens In the table below, you’ll find a list of common words and categories that require hyphens. However, the list is not exhaustive. 
The rules surrounding hyphens are in a state of fluctuation — meaning that there is not 100 percent agreement across all authorities and editing styles, so you may see some words combined, with a hyphen, or spaced (source). When it comes to adjectives and adjectival phrases, it is best to remember that if a noun follows a two (or more) part adjective, you’ll likely need a hyphen (source). That is probably the easiest rule for you to remember, and it also applies to the phrase “high-quality.” It all comes down to whether a noun follows the phrase — we’ll look at more examples of this a bit later. For all other categories, you’ll find that, as you become more fluent and familiar with English grammar, you’ll begin to recognize particular words where hyphens are common. There are no hard and fast rules when it comes to some of the categories below, such as prefixes. Some words with prefixes require a hyphen, while others do not. When in doubt, just remember that a dictionary is your best bet for double-checking.
Table with Examples
Type: Two-Part Adjectives or Longer Adjectival Phrases that Precede a Noun
• I worked at the family-owned business for 10 years.
• The blue-eyed little girl was friendly and sweet.
• I work out-of-state since I live in New Jersey, but my office is in New York.
• My credit card bills are all up-to-date.
Type: Prefixes (Note: not all prefixes require a hyphen)
• She was a non-English speaking presenter at the Academy.
• He and his ex-wife co-parent amicably.
• I have a pre-existing medical condition.
Type: Numbers or Units of Measure We Use Adjectivally (e.g., 12-inch)
• You’ll need a 12-inch ruler for the art project.
• I took a 3-week intensive course covering English grammar.
Type: Words Containing “Self” (e.g., self-employed)
• My father is self-employed.
• Her daughter is very self-sufficient for a 10-year-old.
Type: “Semi” When Connected to a Word That Begins with an “I” (e.g., semi-intelligent)
• Some animals are considered semi-intelligent creatures.
• Celebrities are sometimes semi-influential.
Type: Line and Word Breaks (no example sentence). This refers to times when you are writing, and one word does not fit but, rather, bleeds onto the next line of text.
Type: Words That Include “All” or “Half,” and “Wide” When We Use Them with a Proper Noun (e.g., all-encompassing)
• The change was all-encompassing and affected everyone in the company.
• I gave a half-hearted hug to my estranged Uncle.
• The University-wide policy has been in effect for one year.
A Few More Tips to Remember for Hyphen Usage
Another instance not mentioned above where you’ll find writers using a hyphen with two-part adjectives is when there is an understanding that something is between two things, such as with nationalities and borders between countries. For example, you’ll use a hyphen with “India-Pakistan” or “Anglo-Saxon” (source). Additionally, it is very rare to find proper nouns connected with the word “wide.” University-wide is the most common in that regard. And finally, one more “rule” worth mentioning here pertains to compound words or modifiers with a common base word. With these phrases, you can remove the base word to avoid repetition, but you should retain the hyphen (source). You may see this with numbers or units of measure. Here’s an example: 1. Sprinting practice took place in 1-, 2-, and 3-part intervals.
Again, it would be nearly impossible for you to remember all of these examples, including those not listed above, so be sure to consult a dictionary when you need to be certain as to whether you should add a hyphen or not. A good reference work is The Oxford New English Dictionary, which you can easily find on Amazon. Another good tip is to get yourself a copy of Dryers English, a style guide, as it will help you become more accustomed to some of these spelling and grammar nuances. You can also find this work on Amazon. When You Should Avoid Hyphens Just as there are times to use hyphens, there are also times to avoid them. Earlier, we said that two-part adjectival phrases often require a hyphen if they precede a noun. The only time you will not add a hyphen, in this case, is when the first part of that phrase is an adverb and ends in an “ly.” Here is an example: 1. I had a ridiculously small lunch, so I was starving by the time I left work today. Even though “ridiculously small” is a compound and it does precede a noun, you do not need to add a hyphen given that ridiculously ends in “ly.” If you are wondering why this is the case, the easiest way to explain it is that if we took out the word “small,” you would read the sentence as “I had a ridiculously lunch, so I was starving by the time I left work today.” That is grammatically incorrect. Therefore, there’s really no confusion for your reader — he or she will know that both the words ridiculously and small modify the noun, lunch. Another instance where you can avoid the use of a hyphen is when the adjectival phrase or compound does not precede a noun but, rather, follows it. Let’s take a look at this more in detail with the phrases “high-quality” and “high quality.” Understanding Meaning: High-Quality versus High Quality The meaning of high quality, both with and without a dash, is simply that someone deems something to be “very good or well-made” (source). You’ll often see examples in discussions about “high-quality education” or perhaps “high-quality products or services.” As we stated earlier, the tricky part is in understanding what, precisely, is high quality. High quality falls into the category of a compound adjective, so the rule you want to remember is that if the phrase precedes a noun, you will need a hyphen. If it does not precede a noun, you do not need a hyphen, despite its being a compound adjective. Let’s look at two sentences below — one where you’ll see “high-quality” and the second where no hyphen is necessary. - I went to Penn State University because I knew I would receive a high-quality education. - The education I received at Penn State University was high quality. Both of these sentences are communicating the same idea. However, in the first sentence, the phrase “high-quality” precedes a noun (education), whereas, in the second sentence, the adjectival compound high quality follows the noun. Here is another example: 1. The restaurant served beautiful meals containing high-quality ingredients. 2. The meals the restaurant served contained ingredients that were high quality. Again, here you can see that in the second sentence, high quality follows the noun. There is no confusion for your reader about what is high quality — you can easily infer that the writer is speaking of the ingredients. 
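For readers who like to see a rule spelled out mechanically, here is a deliberately crude Python sketch of the "hyphenate before a noun" logic discussed above. The tiny noun list and the "-ly" shortcut are simplifications of the real rules, and a genuine checker would need a dictionary or a part-of-speech tagger rather than this toy word list:

```python
# Toy illustration only: a crude check of the "hyphenate before a noun" rule
# for a known compound such as "high quality".  The noun list is an assumed,
# made-up stand-in; real usage needs a dictionary or part-of-speech tagger.
import re

NOUNS = {"education", "ingredients", "products", "service", "lunch"}  # toy list

def suggest_hyphen(text: str, first: str, second: str) -> str:
    pattern = rf"\b{first}\s+{second}\s+(\w+)"
    match = re.search(pattern, text, flags=re.IGNORECASE)
    # Hyphenate only if a noun follows, and skip "-ly" adverbs per the exception above.
    if match and match.group(1).lower() in NOUNS and not first.endswith("ly"):
        return f'Consider "{first}-{second}" before the noun "{match.group(1)}".'
    return "No hyphen needed here."

print(suggest_hyphen("I received a high quality education.", "high", "quality"))
print(suggest_hyphen("The education I received was high quality.", "high", "quality"))
print(suggest_hyphen("I had a ridiculously small lunch.", "ridiculously", "small"))
```

The three sample sentences mirror the cases in this article: compound before a noun (hyphen suggested), compound after the noun (no hyphen), and an "-ly" adverb (no hyphen).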
When a Hyphen (or Lack of) Can Change Your Intended Meaning Often, you can assume that the confusion your reader may experience if you forget the hyphen will not be a problem — he or she will likely easily figure out what you are trying to say. But there are indeed situations where this can become tricky. Take a look at another example below — this one with a different phrase than “high-quality or high quality.” 1. The high school students were arrested for breaking and entering. 2. The high-school students were arrested for breaking and entering. These two sentences look and sound the same, but they are certainly ambiguous. In the first sentence, you cannot be certain what the word “high” is modifying. You could read the first sentence and assume that the school students themselves were intoxicated. However, in the second sentence, adding the hyphen communicates clearly that the students were not at all intoxicated but, rather, they were high-school-age students or teenagers. With phrases like high-quality versus high quality, the ambiguity is less severe. Still, it is essential since you always want to ensure clarity when communicating, whether in speaking or writing. This article was written for strategiesforparents.com. If you’d like to learn more about hyphens and these types of phrases in English, take a look at our article on “real time” or “real-time.” English grammar is probably one of the most complicated things you will need to wrap your mind around as you learn the language. Still, you will be surprised by how quickly some of these confusing questions become easy answers. When it comes to hyphens, just try to remember one rule: if your reader would be confused without it, add it. If there is likely no confusion that will result from omitting the hyphen, it’s okay to let it go. For the most part, trust your instincts and, when in doubt, consult your dictionary to determine the best spelling.
teeth (redirected from congenital teeth enamel deficiency; related term: congenital enamel hypoplasia)
Plural of tooth. (American Heritage Dictionary of the English Language, 5th ed.)
Collins English Dictionary:
1. (Dentistry) the plural of tooth
2. the most violent part: the teeth of the gale.
3. the power to produce a desired effect: that law has no teeth.
4. by the skin of one's teeth: see skin
5. get one's teeth into: to become engrossed in
6. in the teeth of: in direct opposition to; against: in the teeth of violent criticism he went ahead with his plan.
7. show one's teeth: to threaten, esp. in a defensive manner
8. to the teeth: to the greatest possible degree: armed to the teeth.
Random House Kernerman Webster's College Dictionary: n., pl. teeth.
1. (in most vertebrates) one of the hard bodies or processes usu. attached in a row to each jaw, serving for the prehension and mastication of food, as weapons of attack or defense, etc., and in mammals typically composed chiefly of dentin surrounding a sensitive pulp and covered on the crown with enamel.
2. (in invertebrates) any of various similar or analogous processes occurring in the mouth or alimentary canal, or on a shell.
3. any projection resembling a tooth.
4. one of the projections of a comb, rake, saw, etc.
5. a. any of the uniform projections on a gear or rack by which it drives or is driven by a gear, rack, or worm. b. any of the uniform projections on a sprocket by which it drives or is driven by a chain.
6. Bot. any small, toothlike marginal lobe.
7. a sharp, distressing, or destructive attribute or agency.
8. taste, relish, or liking.
9. teeth, effective power, esp. to enforce or accomplish something: to put teeth into a law.
10. a roughened surface, as on a sharpening stone, grinding wheel, or drawing paper.
v.t. 11. to furnish with teeth. v.i. 12. to interlock, as cogwheels.
Idioms: 1. in the teeth of: straight into, against, or in defiance of. 2. long in the tooth: noticeably old; elderly. 3. set one's teeth: to become resolute; prepare for difficulty. 4. show one's teeth: to become menacing; reveal one's hostility. 5. to the teeth: to the fullest extent; fully; entirely: armed to the teeth.
[before 900; Middle English; Old English tōth, c. Old Frisian tōth, Old Saxon tand, Old High German zan(t), Old Norse tǫnn; akin to Gothic tunthus, Latin dēns, Greek odoús, Skt dánta]
Related terms (-Ologies & -Isms):
- the condition of having teeth without roots attached to the alveolar ridge of the jaws, as in certain animals. — acrodont, adj.
- the habit of purposelessly grinding one's teeth, especially during sleep. Also called bruxomania.
- the condition of being decayed or carious, especially with regard to teeth.
- the shedding of teeth.
- the production or cutting of teeth; teething. Also called odontogeny.
- the branch of dentistry concerned with diseases of the dental pulp and removal of the dental pulp, the nerve and other tissue of the pulp cavity; root canal therapy. Also endodontology. — endodontist, n.
- the branch of dentistry concerned with the extraction of teeth. — exodontist, n.
- a condition of the teeth in which they become loose, especially the molars.
- dentition. — odontogenic, adj.
- a treatise describing or giving the history of teeth. — odontographic, adj.
- 1. the science that studies teeth and their surrounding tissues, especially the prevention and cure of their diseases. 2. dentistry. Also called dentology. — odontologist, n. — odontological, adj.
- an abnormal fear of teeth, especially of animal teeth.
- the branch of dentistry that studies the prevention and correction of irregular teeth. — orthodontist, n. — orthodontic, adj.
- the branch of dentistry that studies and treats disease of the bone, connecting tissue, and gum surrounding a tooth. — periodontist, n. — periodontic, adj.
- preventive dentistry. — prophylactodontist, n. — prophylactodontic, adj.
- the branch of dentistry concerned with the replacement of missing teeth with dentures, bridges, etc. — prosthodontist, n.
- a shrinking or wasting away of the gums.
Similes (Similes Dictionary):
- Beautiful teeth, like china plates —Rosellen Brown
- Big teeth … like chunks of solidified milk —Frank Swinnerton
- Front teeth showed like those of a squirrel —George Ade
- (When she opened her mouth) gaps like broken window panes could be seen in her teeth —Sholem Asch
- Her front teeth overlapped each other like dealt cards —Alice McDermott
- His teeth looked like a picket fence in a slum neighborhood —Stephen King
- His [false] teeth moved slightly, like the keyboards of a piano —Pamela Hansford Johnson
- His teeth stood out like scored corks set in a jagged row —Sterling Hayden
- Lower teeth crooked, as if some giant had taken his face and squeezed them loose from his jaw —Larry McMurtry
- My teeth felt like they had little sweaters on them —Anon
- Sharp-worn teeth like slivers of rock —Ella Leffland
- The shiny new false teeth gave him the peculiar look of someone who smiles for a living —Andrew Kaplan
- Small pointed teeth, like a squirrel's —Willa Cather
- Teeth all awry and at all angles like an old fence —George Garrett
- Teeth, as yellow as old ivory —Frank Swinnerton
- Teeth … big and even as piano keys —Helen Hudson
- Teeth … channelled and stained like the teeth of an old horse —R. Wright Campbell
- Teeth … chattering like castanets —Maurice Edelman
- Teeth clatter like ice cubes in a blender —Ira Wood
- Teeth clicking like dice —T. Coraghessan Boyle
- Teeth like cream —Willa Cather
- Teeth like a row of alabaster Britannicas —Joe Coomer
- Teeth like pearls —Robert Browning
- Teeth like piano keys —Elizabeth Spencer
- Teeth like white mosaics shone —Herbert Read
- Teeth … tapping together like typewriter keys —Cornell Woolrich
- White teeth, the kind that look like cheap dentures even when they are not —Eric Ambler
Anatomy (Dictionary of Unfamiliar Words, Diagram Group): Up to 32 bone-like structures in the jaws. Different types (incisors, canines, premolars, molars) are specialized to pierce, tear, crush, and/or grind food.
Thesaurus (WordNet): Noun 1. teeth - the kind and number and arrangement of teeth (collectively) in a person or animal
- primary dentition - dentition of deciduous teeth
- secondary dentition - dentition of permanent teeth
- tooth - hard bonelike structures in the jaws of vertebrates; used for biting and chewing or for attack and defense
- mouth, oral cavity, oral fissure, rima oris - the opening through which food is taken in and vocalizations emerge: "he stuffed his mouth with candy"
- set - a group of things of the same kind that belong together and are so used: "a set of books"; "a set of golf clubs"; "a set of teeth"
Spanish (English-Spanish Medical Dictionary): teeth n., pl. dientes; deciduous teeth → dientes de leche o primera dentición; permanent teeth → dientes permanentes; secondary teeth → dientes secundarios; wisdom teeth → cordales (pop. muelas del juicio).
<urn:uuid:12e8b1cc-2e60-414a-bfb6-ca74446c3575>
CC-MAIN-2021-43
https://www.thefreedictionary.com/congenital+teeth+enamel+deficiency
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00110.warc.gz
en
0.869953
2,118
2.65625
3
If there was a rule book of life, there would be one particular page that was highlighted, underlined, and titled as most important. It would be the one which told you that you need to master effective goal setting and have an aim in mind before you get on with the process. While there may not be an actual rule book of life, we do have this helpful goal setting guide to offer. Yes, goal setting is important. In fact, it's more important than achieving the goal itself. This is because it is the sense of direction that is needed for you to fulfill any task in life. You don't have to feel overwhelmed if this sounds new to you, as all the following information has you covered. Today, you'll find out all about the importance of goal setting, types of goals, and tips to define realistic goals for yourself!
What Are Goals?
To kick off our goal setting guide, you need to first recognize what goals are and how they are different from objectives, dreams, and expectations. A goal is essentially your aim for the long-term future. It is the bigger umbrella, the main focus. Objectives, on the other hand, fall under the umbrella of goals. They are the stepping stones that help you achieve your goals. For example, you may decide you want to learn a new language. Your goal is to be fluent in the new language. Everything you do to achieve this goal, such as the daily tasks and monthly learning aims, are the objectives. Similarly, your expectations, visions, and dreams are not your goals. If you wish to learn a new language someday, that is your dream. If you see yourself fluently speaking multiple foreign languages, that is your vision. If you think you're capable of learning a new language, that is your expectation. However, if you aim to fulfill these visions, dreams, and expectations practically, that is your goal.
Why Is Goal Setting Important?
Why should you bother with goal setting at all? Wouldn't it be more convenient to just get on with your daily objectives, follow a dream or vision, and let life take you wherever? While that road can feel exciting and spontaneous, if you actually want to tick off things from your list of goals to achieve, learning how to set goals is necessary. Being committed to a goal puts your brain to work in one specific direction. Believe it or not, by having a defined goal, your brain does its magic unconsciously, 24/7, with full efficiency, to achieve the desired results. Goal setting is important to shift your focus, boost your motivation, and give you a sense of direction. Without formally defining a particular aim that you want to reach, you won't be able to keep your objectives in line. Hence, this one tiny step can end up saving you a lot of hassle and time while also encouraging your productivity.
Types of Goals
Before we move on to the technique of setting effective goals, we need to first take a look at the types of goals covered in this goal setting guide. These categories will not just help you brainstorm new ones for yourself, but they will also guide you to list them down in the right way. One of the two broad categories of goals is based on time. These goals define how far in the future you want to achieve them. There are certain smaller goals that you can easily achieve in a day or two. In fact, some of these daily goals can be recurring, too. For example, you may want to run for an hour every morning. Now, these daily goals can also serve as objectives for a long-term goal. You may be running every day because, in the long-term, you want to increase your stamina.
Daily goals are highly effective for people who want to improve their mental wellbeing, time management skills, and stress management. Next in line are short-term goals. As you would have already guessed, goal setting in this area is aimed at the near future. The great thing about these is that they are generally easier to achieve. This is because short-term goals are set for the foreseeable future. You are aware of the circumstances and have a general idea of how much the situation can change. Just like daily goals, short-term goals may also serve as objectives for a long-term goal. Your short-term goal may be to lose 5 pounds in one month. That could be a goal in itself, or maybe it is just one objective to fulfill your goal to adopt a healthy lifestyle in the next two years. Another example of a short-term goal is to fulfill the checklist for promotion within the next 6 months. Or, you may want to reduce your screen time within the coming week. Lastly, we have long-term goals that are meant to be completed over a longer stretch of time. Whatever you want to achieve in a later stage of life is a long-term goal. An insurance plan, for example, is a long-term goal. Some long-term goals don't have any time frame at all. They are goals that you want to accomplish at some point in your life. So, something like traveling the whole world is a lifelong goal with no specific time constraint at all. There's one thing about long-term goals that isn't great. They are the hardest to keep up with since you're not seeing any huge achievements regularly. This may take a toll on your motivation. To tackle this problem, it is best to divide a long-term goal into various short-term and daily objectives so that you're always tracking the progress you're making. Moving forward, you can also start goal setting based on the results you want to achieve instead of the time period. Like most people, you will likely want to succeed and excel in your career. Anything that has to do with this intention, regardless of the time frame, is a career goal. These are usually measurable goals, such as receiving a promotion within two years, finding a job at a certain company within the next six months, etc. You can learn more about how to set successful career goals here. The past few years have all been about emphasizing your personal health. So, when it comes to goals, how can we forget the ones that have to do with our personal gains? From health to finances to relationships, everything that brings you happiness and composure as a person is a personal goal. It's important that these are realistic and attainable goals for your life. Whether you want to get rid of your debt, quit smoking, start a side hustle, have children, or travel the world, all of these goals are personal and very important to have on your list.
How to Set Goals
The best way to guarantee the fulfillment of goals is to set them the right way.
1. Use SMART Goals
Every goal you define has to be SMART. SMART stands for Specific, Measurable, Achievable, Relevant, and Time-bound. In summary, your specific goals should be very well defined. They shouldn't be generic or broad, and every detail should be clarified as you're goal setting. If you want to start running, how often do you want to do it? How long will each session be? For how long will you continue this habit? There has to be a connection between your goals and beliefs or you'll never be able to achieve the results you want. Most importantly, do not be unrealistic.
You cannot learn to fly, and forcing yourself to try is only going to demotivate and stress you out.
2. Prioritize Your Goals
As you're looking into how to write goals for the next month or year, it's likely you'll come up with more than one. In this case, it's important to prioritize which are the most important or the ones that have the tightest deadline. This is going to be subjective, as only you know which goals will have the most impact on your life.
3. Think of Those Around You
As you're working on goal setting, keep your loved ones in mind. You may have a partner, children, or employees that depend on you, and you should take them into consideration with your goals. For example, if you set a goal to travel to 10 different countries in the next two years, how will this affect your children? If you want to lose 30 pounds this year, is there something your partner can do to support you? They will need to be made aware of this before you set off on your weight loss journey.
4. Take Action
Setting goals is the first step, but in order to be successful, you have to follow this with action. If you set goals but never act on them, they become dreams. Create an action plan laying out the steps you need to take each day or week in order to achieve your big and small goals. You can also check out Lifehack's free guide: The Dreamers' Guide for Taking Action and Making Goals Happen. This helpful guide will push you to take action on your goals, so check it out today!
5. Don't Forget the Bigger Picture
Most people refer to the big picture as their vision. Whether it is the long-term result or the connection of the goal with your desire, keep it in mind to keep yourself from getting distracted. You can learn more about creating a vision for your life here. I also recommend watching this video to learn 7 strategies to set goals effectively:
How to Reach Your Goals
You can ensure your progress by following some foolproof tactics. The use of relevant, helpful tools can also keep you on the right track. One rookie mistake that most people make is that they work on too many goals simultaneously. Create an action plan and focus on one thing at a time. Divide your goal into smaller, easily achievable tasks. Taking it one step at a time makes it much easier. However, do not break them down too much. For example, for long-term goals, you should go for weekly checkpoints instead of daily ones. Also, keep track of your progress. This will keep you motivated to work harder. With so many categories of goals and so many aims, it is almost impossible to remember, let alone work on, all of them. Luckily, numerous goal tracker apps will help you keep track of your goals, as well as your plan to achieve every single one. Have at least one installed on your smartphone so that your plan is always within reach.
The Bottom Line
In conclusion, following a goal setting guide is not rocket science. All it takes is strong willpower along with the knowledge that you've learned so far. Try out the tactics and goal setting tips mentioned above to set successful goals so that you can achieve the life that you want!
Featured photo credit: Danielle MacInnes via unsplash.com
<urn:uuid:d11b865d-8915-41d9-970b-86d3897273f9>
CC-MAIN-2021-43
https://thirteen.space/7-best-goal-planners-to-get-in-2021/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588216.48/warc/CC-MAIN-20211027150823-20211027180823-00270.warc.gz
en
0.960286
2,297
2.546875
3
This week I'm "at" the RNA Society's Annual Meeting where I "gave" my first ever official oral presentation. Conferences like these are usually big, crowded events where scientists from around the world jam into packed rooms to hear talks and squeeze through tangles of people to get a look at the hundreds of research posters tacked up on corkboards spaced out around exhibit halls. Not exactly social distancing… So this year, in order to protect all of our coworkers, and everyone else we could possibly accidentally infect, the conference was held online, with posters and talks hosted on a web platform. This layout may be unusual, but RNA researchers are used to protecting our coworkers – our molecular coworkers, that is – because RNA, despite being really powerful, is really delicate and has to be treated with extreme TLC. A bit about why, and then a few notes about the conference. First off, what is RNA? It's a nucleic acid – like the nucleic acid you're probably more familiar with, DNA, it's a chain of NUCLEOTIDES, which have a generic "sugar-phosphate" part that allows them to link together, connected to a unique nitrogenous base or "base" part (C, G, A, and T (in DNA) or U (in RNA)), which allows for specific base-pairing between strands (C to G and A to T or U). These bases of your RNA and DNA really are bases (as in things that can accept a proton (H⁺)) – thankfully only weak ones! If a solution is acidic, meaning there are lots of protons around (low pH), A or G bases can get protonated. Once protonated, the base no longer needs its sugar to satisfy its needs, so it can break that relationship off, which can lead it to leave the sugar-phosphate backbone it's attached to -> DEPURINATION. This leaves you with a sort of "hangman"-like situation where you're left with something like S_METHING. Your cell knows that something should be there, but it has to guess what that missing letter is, and it might get it wrong. Is that supposed to be "sOmething" or "sAmething" – which are NOT the "same thing"! Thankfully, your cells maintain a pH of about 7.4, which is too high for them to protonate much. In science-y terms, the nucleobases have a pKa that's substantially lower than the cellular pH. The pKa tells you the pH at which 1/2 of the groups will be protonated (and 1/2 deprotonated). So a low pKa means that you have to swamp them with lots of protons (make conditions quite acidic) to get them to take one. BUT when you're working in a test tube you have to make sure that pH is "safe" too! pH is a measure of how many protons (H⁺) there are in a solution. The more protons, the lower the pH (it's an inverse log scale) and the more acidic the solution. And the fewer the number of protons, the higher the pH & the more basic. Unlike most covalent bonds, which are strong, many bonds to hydrogen are somewhat "looser" – the H⁺ can come and go depending on the pH. More here: http://bit.ly/2CcLQX7 We usually think of nucleic acids as having a negatively charged backbone because, under physiological (typically bodily) conditions, the phosphate groups are negatively charged because they have "extra" electrons, the negatively-charged counterparts to protons, which atoms share pairs of in covalent bonds. But the phosphates are only negatively charged because they're in their deprotonated form. If they grab onto a proton (protonate) they'll become neutral because the positive charge of the proton will cancel out their negative charge.
But they'll only do this if there are TONS of free protons around, because the phosphate groups are happy being negative – they have something called resonance stabilization – they kinda play "hot potato" with the "extra" electrons, evenly distributing that charge. So you have to get things super acidic before you have to worry about this. So normally you have a negatively charged backbone & NEUTRAL nucleobases. BUT the nucleobases can also give and take protons. When they do, it disrupts the base pairing – this denatures double stranded DNA or RNA – it removes its "natural" shape and separates the strands. But the strands remain strand-y and unbroken and, unlike with the depurination, the nucleobase stays attached to the sugar. So, even though we don't have to worry about the backbone protonating until we reach crazy-low pHs, milder acidic conditions can still cause problems – both with disrupted base-pairing and increased depurination -> low pH is no good. But you don't want to go too far the other way either (too few protons (too high a pH)) or you'll pull off the protons that need to be there for the bases to bind each other. And in RNA you have an additional, very important one to worry about. The main difference between RNA & DNA is the thing that makes them "R" or "D" -> RNA has a RIBOSE sugar and DNA has a DEOXYribose sugar -> RNA has an "extra" oxygen. Usually this oxygen is bonded to a hydrogen to give you a hydroxyl (OH) group, but if protons are scarce (the pH is too high), an OH⁻ can pull that H⁺ off… That leaves you with an O⁻ on the hunt for some + charge. And it doesn't have to look far. That phosphate group might be negative overall, but the Phosphorus (P) at its center doesn't get to participate in that electron hot potato fun, so that P is actually slightly positive. So the O⁻ attacks it. But then that phosphate would have too many bonds, so it kicks off one of its old oxygens, the one connected to the nucleotide below it -> the chain breaks. Initially you get a 2',3'-cyclic monophosphate derivative (the sugar's "legs" kinda playing with each other criss cross applesauce style), but this can then react with water to form a mix of 2' and 3' monophosphate derivatives -> basically the sugar takes back a proton. But it can do this on either "leg" so you get 2 products. Note that neither of these are where that phosphate was before. In "normal" nucleotides that phosphate is at the 5' position (like the "left arm") but now it's at one of the legs – the left one or the right one. It's never on the "right arm" because that's where the nitrogenous base goes. So, when working with RNA or DNA, pH is one thing that researchers have to be careful to monitor. But with RNA, there's another really important thing to worry about – RNA can get "chewed up" by enzymes called RiboNucleases (RNases), which are basically everywhere because they offer a sort of "generic" protection from RNA viruses that try to get near us 🔹 we secrete them in our tears, saliva, mucus, & sweat 🔹 bacteria & fungi also secrete RNases to protect themselves. But RNases DON'T protect us from experimental failure! So we have to be super careful when working w/RNA. We use precautions including special cleaners or just good ole ethanol to keep our work area RNase-free. Our lab even has a designated RNase-free room (which doubles as the hot room, where we do radioactive ("hot") work) and an RNase-free microcentrifuge in our main lab.
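To put rough numbers on the pKa argument above, here's a minimal sketch using the Henderson-Hasselbalch relationship. It's my own illustration (not from the original post), and the pKa of ~3.5 assumed for protonation of adenine's N1 nitrogen is an approximate textbook value used only for the example; the exact number varies by base and context.

```python
def fraction_protonated(pH: float, pKa: float) -> float:
    """Fraction of a basic site carrying an extra H+ at a given pH.

    For a base B + H+ <-> BH+, Henderson-Hasselbalch gives
    pH = pKa + log10([B]/[BH+]), so the protonated fraction is
    1 / (1 + 10**(pH - pKa)).
    """
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Assumed (approximate) pKa for protonation of adenine's N1 nitrogen.
ADENINE_N1_PKA = 3.5

for pH in (7.4, 5.0, 3.5, 2.0):
    frac = fraction_protonated(pH, ADENINE_N1_PKA)
    print(f"pH {pH}: ~{frac:.2%} of adenines protonated")

# At the cellular pH of ~7.4 essentially none of the bases are protonated
# (so depurination is rare); drop the pH toward the pKa and protonation,
# and with it the risk of depurination, climbs steeply.
```

Running it shows roughly 0.01% protonation at pH 7.4 versus ~50% at pH 3.5, which is why mildly acidic buffers are already risky even though the backbone itself stays deprotonated until far lower pH.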
So -> At low pH you have to worry about depurination of RNA & DNA, as well as disrupted base pairing. At high pH you have to worry about disrupted base pairing of DNA & RNA as well as hydrolysis of the backbone of RNA. So, now that I've told you about some of the steps RNA scientists take all the time to protect our RNA friends, some more notes about the conference. Firstly, I want to give a huge shoutout of gratitude to the International Union of Biochemistry and Molecular Biology (IUBMB) and the RNA Society for providing me funding to attend. Even though I didn't end up getting to travel to Vancouver and meet a bunch of awesome people in person, it's still been a tremendous opportunity and it was an honor (albeit a bit weird and still super nerve-wracking) to be able to give a talk on my work studying a protein-production regulation method called RNAi. But the real highlights of the conference for me were the two IUBMB-sponsored keynote lectures (whereas most talks are short and selected from submitted abstracts, keynotes are long and the speakers are invited based on their record of awesomeness, without them having to "apply"). Dr. Jack Szostak, a professor at Harvard, gave a molecular-mind-boggling talk about how RNA could have evolved from non-life things and given life to things. Speaking of Dr. Szostak – one of my favorite grad school interviews was with him – when he asked if I had any questions, he was probably referring to questions about the school, but I asked him about whether he thought life could evolve on other planets without water… The second keynote lecture was from Dr. Melissa Moore from Moderna (yeah, *that* Moderna). She talked about how mRNAs (the messenger RNAs that serve as copies of gene recipes that get read by the protein-making complexes called ribosomes to make protein) can be used as therapeutics. I was afraid it was going to be all corporate-y, but Dr. Moore was totally "down to earth" and hard core science-y. She actually had a highly successful academic career before joining Moderna, and it showed. She also had an inspiring message about the importance of basic research (research aimed at "just" understanding things rather than putting them to immediate use) and how Moderna couldn't be doing what it's doing if it weren't for all the basic scientists who have done and continue to do what we're doing. Now, more than ever, as we face an international (and biochemistry-related) crisis, I am incredibly grateful to be able to serve as Student Ambassador for the International Union of Biochemistry and Molecular Biology (@theIUBMB), which has helped me recruit translators and share the translated versions around the world. This post was just one in my series of weekly "Bri*fings" – this week a "Bri*fing from RNA2020!" If you want to learn more about all sorts of things: #365DaysOfScience All (with topics listed) 👉 http://bit.ly/2OllAB0
<urn:uuid:f3d459f8-0572-42ca-a21f-afb51afc5577>
CC-MAIN-2021-43
https://thebumblingbiochemist.com/365-days-of-science/rna-tlc-rnas-sensitive-so-treat-it-with-tender-loving-care/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00590.warc.gz
en
0.95783
2,445
2.890625
3
The Book of Ecclesiastes
The book of Ecclesiastes is written by Solomon, the author of Proverbs and Song of Solomon. In this book, Solomon is looking for something that lasts, for what is worthwhile in this life. He ponders how we are surrounded by people living for this world and they are still empty and discontent. This is a book written by human wisdom, not the word of God. It shows that no one can have a happy life on this Earth without God and an eternal perspective.
- "under the sun": emphasizes Solomon's narrow-minded viewpoint throughout most of this book; he is only looking at life on this Earth.
- "this too is meaningless" / "everything is meaningless" / "meaningless": Solomon learns that on this Earth, nothing truly has a point when you don't have an eternal perspective, and you are carrying out your life for yourself.
- "chasing of the wind": cannot be grasped and held onto, just as the wind cannot be captured. Wind temporarily fills your sails, but does not stay forever, just like the temporary joys Solomon finds in his pursuits of meaning.
- pleasure, toil, wisdom, greed
- "no one knows what is coming": we do not know our fates; this we all have in common. We do not know what will come from day to day. Only God knows this.
Solomon is lost and distraught, pondering the cycles of time ("generations come and generations go…the sun rises and the sun sets.." Ecc. 1:3-7) and how "there is nothing new under the sun" (Ecc. 1:9). He questions what man gains from all his toil under the sun, and states that everything is meaningless. He then decides to follow three pursuits to see if he can discover some meaning in life: wisdom, pleasure (hedonism), and things/success (materialism). Through his pursuit of wisdom, Solomon realizes that knowledge alone, as a source of fulfillment, will never fully satisfy. "For with much wisdom comes much sorrow; the more knowledge, the more grief."
- Take away: Solomon has drifted away from the Lord. He questions how anything can be gained if cycles are constantly repeating, and if nothing is gained then what is the point of all of our toil? If there is nothing new, then everything is meaningless. What Solomon fails to recognize is that all of this is occurring under the sun, on this Earth. He is forgetting the eternal perspective, and that God has a time for everything. He learns that wisdom alone is not the answer, for it opens his eyes to more sorrows and griefs in the world.
- Time is the only resource that is never replaced.
Solomon explores the meanings of pleasure and work in life. He denies himself nothing, acquires all of the "delights of the heart of man" (Ecc. 2:8), yet he keeps his wisdom as well. He takes delight in anything he wishes, but states that at the end of the day when he looks at what he has achieved, it still appears meaningless. He then flips to the opposite side of the spectrum and explores the outcomes of hard work and toil. He realizes that wisdom is better than folly, but that the same fate is hanging over all of our heads - death. He then becomes angry and frustrated that man can work his whole life away, constantly in "pain and grief; even at night his mind does not rest" (Ecc. 2:23), and declares this meaningless as well.
- Take away: All of these worldly pleasures will make us happy only for a moment, but stressing constantly and working our life away will not bring us joy either. Solomon finally comes to the realization at the end of chapter 2 that to work and enjoy the fruits of our labor is a gift from God.
Without God, we cannot find lasting enjoyment. Life was meant to be enjoyed, for the glory of God. Solomon looks at how there is a time and place for everything, and questions what we gain at the end of the day. If all of these events are going to happen anyways, what is the point? Solomon begins to realize that God wants us to be happy and "do good" while we live, and that God will be the final judge over the wickedness of this world: "there will be a time for every activity, a time for every deed" (Ecc. 3:17). Solomon states that God tests us, for we all have the same fate at the end.
- All events in life are beautiful in their own time (Ecc. 3:11), and God wants us to view life with an eternal perspective. God is in control of the timing of everything. We all will die at the end of the day, and we do not know what will come after us, but God wants us to enjoy our work while we are here. That is His gift to us - to be able to work, but to also enjoy what we have earned.
Solomon finally looks at other people, and considers the importance of relationships. He sees the oppression around him, and that people can be cruel. He sees that labor spurred by envy of others is pointless. He realizes that working for the sole purpose of ourselves is also pointless. Solomon sees that "two are better than one" (Ecc. 4:9), and that we need to live and work for the benefit of others.
- Working to impress others is pointless, but not working is also unhealthy. We should not stop working, but we should stop working because we want what others have. Our work has meaning in community. We need the help of each other. We are not meant to be alone on this Earth; we are meant to have fellowship with others.
This chapter takes a look at how we are to approach God, and it takes a look at our love for money. Solomon states that we go near to the Lord to listen, and that we are to "not be quick" with our mouths or "hasty" in our hearts (Ecc. 5:2). We are to let our words be few. Solomon speaks of the importance of fulfilling all vows we promise to God, stating that it is better to not make a vow than to make one and not fulfill it. He then explores the human relationship with riches, realizing that those who love money never have enough and those who love wealth are never satisfied (Ecc. 5:10-11). He ponders the dangers of lusting after wealth and possessions, stating that if we toil "for the wind" we gain nothing. We all die the same, with just our bodies.
- When we go before God, we are to listen and speak thoughtfully. We are to humble ourselves before the Lord.
- Our actions speak louder than our words and promises.
- Money is not inherently bad, but our love of it is. Our love of wealth breeds greed, envy, unhappiness, and discontent. We all enter and leave this world with just the skin on our backs, so we should remember not to make money our idol. It is God's gift to us to be able to work and be happy with what we are given. To be able to enjoy our work and our wealth, and to not worry about obtaining more but be content with what we have been given, is God's gift to us.
Ecclesiastes 5:18-20: "Then I realized that it is good and proper for a man to eat and drink, and to find satisfaction in his toilsome labor under the sun during the few days of life God has given him - for this is his lot. Moreover, when God gives any man wealth and possessions, and enables him to enjoy them, to accept his lot and be happy in his work - this is a gift of God.
He seldom reflects on the days of his life, because God keeps him occupied with gladness of heart." In chapter 6, Solomon realizes that man can have all the wealth in the world and still be unhappy if he does not enjoy and give thanks for what he has. "A stillborn child is better off than he" is the dramatic statement Solomon makes about this man (Ecclesiastes 6:3). He notes that if we fail to enjoy our prosperity here on earth, we will still die all the same. Solomon states that we should be content with what we have, even the "little" things, and keep our focus eternal. Be grateful for and take pleasure in the gifts we are given, no matter how big or small. Chapter seven is full of wise thoughts Solomon has about death, sadness, and the progression of life. It talks about how there is a time and purpose for all things in life. The chapter opens up with Solomon speaking of death, and how the day of death is greater than the day of birth because one's character has been established by death. "Sorrow is better than laughter, because a sad face is good for the heart" indicates the wisdom that there is a time, place, and purpose for sadness in our lives and our development (Ecclesiastes 7:3). He speaks again of living in the present moment, enjoying what we currently have versus what we used to have or want to have. Verses 13-14 recognize that God has made our lives to be non-linear, and that He is behind the good and the bad. "When times are good, be happy; but when times are bad, consider: God has made the one as well as the other" (Ecclesiastes 7:14). There is a time, place, and purpose for everything in our lives. God did not design our lifetime to be filled with only successes and joy. God created and allows sadness for a reason. Solomon (who is a king) speaks about being respectful to the king and the authoritative systems. Again, he states how there is a proper time and place for everything. Like Daniel, Joseph, and Jesus, we are to trust in God's timing and plans by being respectful of our human rulers. Solomon then goes on to discuss how a sinner might seem to get away with his actions, but judgement will always come. Even though righteous men can get what the wicked deserve, and the wicked can get what the righteous deserve, God is just. Justice might be slow but it will come (Ecclesiastes 8:11-14). Then Solomon circles back to the importance of enjoying our current lives here on Earth, for we do not know God's plans for us. "No one can comprehend what goes on under the sun" (Ecclesiastes 8:17), so we are to appreciate and enjoy our gifts while we can. Justice will come for everyone, even if it is not on this Earth. Do not be abusive of our gifts, but do enjoy our time here on Earth. We do not and never will be able to understand God's plan for us, no matter how wise we become. "No man knows whether love or hate awaits him. All share a common destiny - the righteous and the wicked, the good and the bad, the clean and the unclean, those who offer sacrifices and those who do not….this is the evil in everything that happens under the sun. The same destiny overtakes all" (Ecclesiastes 9:1-3). Everyone dies. Solomon only sees this fact. He does not see that eternity is different for everyone. Even so, Solomon urges us to live our lives like a celebration, "with a joyful heart". Our days are fleeting, the battle does not always go to the wealthiest, the most brilliant, or the healthiest, and we do not know when our time will come (Ecclesiastes 9:11-12).
Solomon briefly discusses the importance of wisdom over brashness at the end of the chapter. We do not know when our life will end. Celebrate it daily, and do not take it for granted. Just because you are wealthy, you are not exempt from fleeting days and wickedness and sadness. A quiet wisdom is better than loud foolishness. Solomon wisely states that "as dead flies give perfume a bad smell, so a little folly outweighs wisdom and honor" (Ecclesiastes 10:1). This is to say that you can live your life well overall, but little mistakes can ruin you. Kind of a bleak outlook, although not entirely wrong. The rest of the chapter carries on many of the same themes as several before it; folly is madness, wickedness is stupid, skill and wisdom will bring success. Living life only for pleasure will yield no joy; one must work before pleasure. Again, "no one knows what is coming – who can tell him what will happen after him?" (Ecclesiastes 10:14). Some seemingly anachronistic investment advice: "give portions to seven, yes to eight, for you do not know what disaster may come upon the land" (Eccl. 11:2). Diversify your investments, for failure is a part of life and if all your eggs are in one basket, failure can wreck you. Our time too will end / for dust you are and to dust you will return / aging body, hands trembling, eyes dim, our fire flickers out. While you still have time left, do not waste it. "Fear God and keep his commandments…For God will bring every deed into judgement, including every hidden thing, whether it is good or evil" (Ecc. 12:13-14).
<urn:uuid:5a95a881-7177-4d2b-a12d-67ffb14a5d2f>
CC-MAIN-2021-43
https://mountains-and-valleys.com/2021/04/09/the-book-of-ecclesiastes/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00429.warc.gz
en
0.961585
2,907
3.359375
3
Shared electric scooters have taken the world by storm, thanks largely to the massive efforts by companies like Lime and Bird. Such companies have flooded cities around the world with affordable, convenient electric scooters that are solving a critical transportation need. However, a dirty secret of the industry is that these scooters break down at an alarming rate while constantly being cycled out of circulation and replaced. Many scooters don't last 3 months on the road, and some are even replaced monthly. That's not enough time to even recoup their cost, calling into question how sustainable such companies can be if their scooters can't last long enough to be profitable – not to mention the environmental ramifications. But Superpedestrian, a startup straight out of MIT, has built a solution in the form of a smart and self-repairing industrial-grade electric scooter. Could this be the breakthrough needed to turn shared personal electric vehicles into a fully sustainable transportation alternative? I met up with Superpedestrian's CEO Assaf Biderman at their Cambridge, Massachusetts headquarters to learn more about the company's mission and technological developments. As Assaf explained, the issue of urban mobility is one that requires an urgent solution. Urban centers in the US are expected to see nearly a three-fold increase in mobility demands by 2050. According to Assaf: "It's a major issue. Where are you going to put three times the number of cars or people on the road in just three decades? And it's not about one company. It's not about Tesla, or Bird, or Superpedestrian. That problem is not going away and will have to be solved." Assaf believes that without room to expand roads sufficiently, the problem of meeting 3x the urban mobility demand will need to be solved by curtailing the current trend of one person taking up the space of an (often large) car. For years, experts have focused on the concept of carpooling to more efficiently capture road space. However, while carpooling and ride shares such as Uber have helped to a degree, pooling returns are limited. Studies have found that peak efficiency for pooling is just over 2 passengers per car, at which point adding more passengers actually increases congestion due to the extra miles traveled and traffic created to collect them.
[Image caption: Cities just don't have room for triple the number of cars]
So while helpful, pooling doesn't work to solve the issue as a single solution. Instead, Assaf believes that the answer lies in personal vehicles such as electric bicycles and scooters that are close to the size of a human body. Such small vehicles can be used both as complete car replacements on entire journeys and to increase the efficiency of other transportation solutions such as pooling options and mass transportation. While electric bicycle and scooter sharing companies such as Bird and Lime are on their way to addressing this need, they've discovered that this solution presents its own problem. Now you need millions of individual vehicles that cost hundreds of dollars instead of thousands, can be centrally managed, and don't require a large workforce to maintain them. And since shared scooters and e-bikes have many temporary riders instead of a single owner, they need to be able to look after and care for themselves in order to be scalable to a size that actually impacts the transportation issue.
As Assaf explains: "They need to have a level of intelligence in them to take care of themselves on the street that is greater than that of a normal car today. They need to survive things that cars are not required to survive. But the price pressure is intense; they need to cost an order of magnitude less to make. That's something that the car industry doesn't know how to address. There's not an existing technology platform for addressing that particular challenge. And that's why Superpedestrian was born. To solve the key technical challenges that will help scale micro-mobility deployments into massive, massive numbers." So far, no scooter or e-bike used by the major shared micro-mobility companies has been able to check all of these boxes. Or at least until now, as Superpedestrian claims to have succeeded in building the perfect system.
Superpedestrian's next level e-bikes and scooters
While at Superpedestrian's headquarters a few blocks from MIT, I had the chance to take one of the company's new industrial-grade scooters for a test ride. Having ridden just about every electric scooter on the market (a perk of the job!) I can tell you this might be the nicest ride I've ever had on an electric scooter. I tried to play it cool and not act dumbstruck, but this scooter felt like nothing I'd ridden before. And that's coming from someone who has just about seen it all. I was sure that Superpedestrian's electric scooter had dual suspension based on the way it glided over potholes, speed bumps, cobblestones, and brick pavers. As it turns out, the scooter only has front suspension, but the 12″ wheels make the ride so smooth you'd never know. For comparison, those wheels are around 50% larger than the wheels on almost every other electric scooter on the market. The rake angle of the front fork is also steeper, making it a blast to carve around the street while remaining completely stable. The scooter also feels incredibly solid from the moment you step onto it. It made me feel like I was riding a true vehicle instead of an electric toy. That sense helps the rider feel much safer and more in control. The scooter is also designed to be adaptable, allowing it to function in nearly any country and regulatory environment. All of its specs are adjustable in real-time – something I discovered as my scooter suddenly started changing mid-ride. I returned to see Assaf at his keyboard, smiling. Part way through my test ride he remotely granted me extra speed. Then, just as quickly as it was granted, the scooter lord taketh away. Suddenly I was gifted extra features like cruise control. All with the remote click of a button. As Assaf explained: "If you have a simple scooter like those used today and local regulations change, you have to pull back your entire fleet. With our system, you can change everything remotely to comply with new laws." I pressed Assaf for technical specs, but he couldn't reveal too much proprietary info yet. Even so, I can tell you that the direct drive motor chosen by Superpedestrian is definitely leagues above what I've seen on other scooters. With a background designing electric vehicles myself, I can see that the company is using the motor well below its limit, which helps them achieve better efficiency than other scooters that operate right at their design limits, or even past them (resulting in the disposable nature of so many consumer-grade electric scooters).
In fact, Assaf explained how the company's approach to tackling efficiency resulted in creating the perfect scooter designed for long-term fleet use: "We've gotten to the point where we've almost doubled the range of our scooter for the same amount of charge compared to your average scooter today. And consider that nearly half of revenues of some scooter companies are spent on charging. Think of what that does for a fleet operator's bottom line." Those efficiency gains come from a careful selection of hardware and a refined software integration that helps to eke out every bit of range possible for the same amount of stored energy. Superpedestrian writes all of their own software to run each component of the scooter instead of relying on off-the-shelf solutions found en masse in Asia. The scooters operate at a higher voltage than any other scooter I've seen, which results in less wasted energy by operating the motor in a more efficient regime. Each component is developed and optimized in-house, with extreme levels of testing and validation at their Cambridge R&D center. Assaf showed me room after room of custom test equipment, from dynamometers for motor testing to load testers that run long-term riding cycles on vehicles weighed down by simulated passengers in the 95th percentile for both male and female riders. They have salt fog machines for performing intense corrosion testing and insulated test chambers for extreme weather testing. After seeing their extensive test center, I'd wager Superpedestrian does better cold weather testing than Tesla did on the Model 3 before rollout. And to ensure that all of their hardware is manufactured in their worldwide factories to the same high standards to which it is developed, the company even builds their own testing equipment to be used in the factories. Superpedestrian has all the same work going into the development of their electric bicycles as they do for their scooters, though I didn't get a chance to test ride their e-bike this time. Hopefully that one will be coming soon.
Vehicle intelligence is what sets these e-bikes and e-scooters apart
These are impressively designed and robust industrial-grade vehicles, that's for sure. But like Assaf explained, it takes more than just an effective vehicle to solve the problem of managing massive fleets. And that's where Superpedestrian truly shines. With their close location and association with MIT, Superpedestrian has taken advantage of some of the brightest engineers in the industry to design a completely new level of intelligence into their vehicles. Today's electric bicycle and scooter sharing companies rely on a system of simple scooters that require incredibly inefficient and labor-intensive maintenance. Current scooters usually communicate with their fleet operators by reporting back just location and battery charge. When something breaks on scooters like those used by Lime and Bird, it is up to riders to notify the company. Not only does that mean multiple riders might try to ride a broken scooter before anyone lets the company know that there's a problem, but then each scooter requires multiple visits by a technician to locate it, diagnose issues, retrieve it, repair it, then return it to service. The entire system is terribly inefficient and will never scale effectively to truly solve massive transportation needs.
Not only are Superpedestrian’s vehicles designed to higher standards to last much longer without maintenance, but since all machines eventually break, Superpedestrian designed their scooters and e-bikes to handle most problems themselves. All components talk to each other and self monitor. As soon as a problem is encountered, the vehicle’s own central computer is notified and the scooter attempts to isolate the issue. Using a number of self repair programs, the scooter can actually solve and repair by itself many of the common problems that sideline other scooters, such as battery voltage imbalances that are a common issue in electric vehicles. A number of sensors also help prevent damage before it even occurs. According to Assaf: “For example, consider water ingress. If water somehow enters the scooter, the system identifies and reports back where it happened. If it poses a risk to the electronics, which then poses a risk to the rider and the vehicle, it immediately opens the circuit so that there will be no damage to either.” Self-protection and self-repair works for many common issues, but for larger issues that can’t be handled by the scooter itself, the computer reports to the cloud and asks for remote maintenance. That flushes all of the operating systems in the multiple embedded computers and reloads them – another advantage of designing all the hardware and software internally. “Between those two things, self-diagnostic and self-protection as well as remote maintenance, we address over 55% of all technical issues without human intervention. Think of what that means for a massive fleet that you own, where the biggest expense is humans and manual labor.” While the scooter and cloud can handle the majority of issues completely automatically, there are still some problems that can’t be handled remotely. To make those repairs more efficient, the central computer reports to the cloud exactly what the problem is and what the correct solution will be. As Assaf explained it, “It’s like if your immune system could talk directly to your doctor, telling him or her what is wrong and what it needs.” For example, Superpedestrian’s vehicles can remotely inform their operator that they need a new motor controller, or that the battery has reached its end of life. That saves a diagnostic trip, and means that a technician can arrive and perform immediate service on location. With a modular system designed from the ground up to integrate together, a component swap takes just minutes. That means fewer scooters break down, when they do there are fewer trips required by human technicians, and each trip is optimized for the shortest repair time. That layered system is what Superpedestrian believes will solve current problems with scooter sharing services and help meet current as well as rapidly expanding urban-mobility demands. I think Superpedestrian’s work has massive implications for the micro-mobility and shared EV industry. Studies have shown that despite a vocal minority, the vast majority of Americans are loving electric scooter and bicycle sharing services. The largest micro-mobility companies, Bird and Lime, are both reportedly worth over $1B. Major players like Ford, GM, Uber and Lyft are all getting into the game. Every indication is that such vehicles are here to stay, and the industry will only continue to grow. With urban transportation demands growing at such a large rate, these personal electric vehicles can be an incredibly important part of the solution. 
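The article describes this layered fault handling (on-board self-repair first, then a remote fix from the cloud, then a targeted technician dispatch) only in prose. Here is a rough, hypothetical sketch of what that triage logic could look like; the fault codes, severity scale, and function names are invented for illustration and are not Superpedestrian's actual software or API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Resolution(Enum):
    SELF_REPAIRED = auto()       # handled on the vehicle, no human involved
    REMOTE_MAINTENANCE = auto()  # cloud flushes and reloads the embedded firmware
    DISPATCH_TECHNICIAN = auto() # human visit, but with the fix pre-diagnosed

@dataclass
class Fault:
    code: str      # e.g. "BATTERY_IMBALANCE" or "WATER_INGRESS" (hypothetical names)
    severity: int  # 1 (cosmetic) .. 5 (safety critical)
    part: str      # component the onboard computer has isolated

# Hypothetical groupings of fault codes, for illustration only.
ONBOARD_FIXABLE = {"BATTERY_IMBALANCE", "SENSOR_DRIFT"}
REMOTELY_FIXABLE = {"FIRMWARE_HANG", "CONFIG_CORRUPTION"}

def triage(fault: Fault) -> Resolution:
    """Mimic the layered handling described in the article."""
    if fault.code in ONBOARD_FIXABLE and fault.severity < 4:
        return Resolution.SELF_REPAIRED
    if fault.code in REMOTELY_FIXABLE:
        return Resolution.REMOTE_MAINTENANCE
    # Anything else goes to a human, but the report already names the part,
    # so the technician can arrive with the right replacement in hand.
    return Resolution.DISPATCH_TECHNICIAN

print(triage(Fault("BATTERY_IMBALANCE", severity=2, part="battery pack")))
print(triage(Fault("BLOWN_CONTROLLER", severity=5, part="motor controller")))
```

The point of the sketch is simply that each layer strips work away from the next: only the faults that fall through both automated layers ever cost a human trip.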
However, current operations such as those run by Bird and Lime simply aren't sufficiently sustainable or scalable. With disposable scooters and a huge human workforce required to keep those scooters on the road, the current system can't meet future demand. But newer, purpose-built industrial scooters that can manage themselves and vastly reduce the amount of human intervention required to keep them running could very well be a game changer. With Superpedestrian's rapid pace of development, you could be seeing these changes in cities near you sooner than you might think. And if you haven't given shared electric bicycles and scooters a try yet, consider it. You'd be surprised how easily you can get around without a 2-ton vehicle around you, and how much more enjoyable it can be. What do you think about Superpedestrian's approach and the future of micro-mobility in cities? Let us know in the comments below.
<urn:uuid:c371120b-78f4-4d55-b38a-954f8adeb8cf>
CC-MAIN-2021-43
https://electrek.co/2018/12/04/superpedestrian-electric-scooter/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00110.warc.gz
en
0.955296
3,125
2.53125
3
If you've been paying attention to what's happening to the nonhuman life forms with which we share this planet, you've likely heard the term "the Sixth Extinction." If not, look it up. After all, a superb environmental reporter, Elizabeth Kolbert, has already gotten a Pulitzer Prize for writing a book with that title. Whether the sixth mass species extinction of Earth's history is already (or not quite yet) underway may still be debatable, but it's clear enough that something's going on, something that may prove even more devastating than a mass of species extinctions: the full-scale winnowing of vast populations of the planet's invertebrates, vertebrates, and plants. Think of it, to introduce an even broader term, as a wave of "biological annihilation" that includes possible species extinctions on a mass scale, but also massive species die-offs and various kinds of massacres. Someday, such a planetary winnowing may prove to be the most tragic of all the grim stories of human history now playing out on this planet, even if to date it's gotten far less attention than the dangers of climate change. In the end, it may prove more difficult to mitigate than global warming. Decarbonizing the global economy, however hard, won't be harder or more improbable than the kind of wholesale restructuring of modern life and institutions that would prevent species annihilation from continuing. With that in mind, come along with me on a topsy-turvy journey through the animal and plant kingdoms to learn a bit more about the most consequential global challenge of our time.
Insects Are Vanishing
When most of us think of animals that should be saved from annihilation, near the top of any list are likely to be the stars of the animal world: tigers and polar bears, orcas and orangutans, elephants and rhinos, and other similarly charismatic creatures. Few express similar concern or are likely to be willing to offer financial support to "save" insects. The few that are in our visible space and cause us nuisance, we regularly swat, squash, crush, or take out en masse with Roundup. As it happens, though, of the nearly two million known species on this planet, about 70% are insects. And many of them are as foundational to the food chain for land animals as plankton are for marine life. Harvard entomologist (and ant specialist) E.O. Wilson once observed that "if insects were to vanish, the environment would collapse into chaos." In fact, insects are vanishing. Almost exactly a year ago, the first long-term study of the decline of insect populations was reported, sparking concern (though only in professional circles) about a possible "ecological Armageddon." Based on data collected by dozens of amateur entomologists in 63 nature reserves across Germany, a team of scientists concluded that the flying insect population had dropped by a staggering 76% over a 27-year period. At the same time, other studies began to highlight dramatic plunges across Europe in the populations of individual species of bugs, bees, and moths. What could be contributing to such a collapse? It certainly is human-caused, but the factors involved are many and hard to sort out, including habitat degradation and loss, the use of pesticides in farming, industrial agriculture, pollution, climate change, and even, insidiously enough, "light pollution that leads nocturnal insects astray and interrupts their mating." This past October, yet more troubling news arrived.
When American entomologist Bradford Lister first visited El Yunque National Forest in Puerto Rico in 1976, little did he know that a long-term study he was about to embark on would, 40 years later, reveal a "hyperalarming" new reality. In those decades, populations of arthropods, including insects and creepy crawlies like spiders and centipedes, had plunged by an almost unimaginable 98% in El Yunque, the only tropical rainforest within the U.S. National Forest System. Unsurprisingly, insectivores (populations of animals that feed on insects), including birds, lizards, and toads, had experienced similarly dramatic plunges, with some species vanishing entirely from that rainforest. And all of that happened before Hurricane Maria battered El Yunque in the fall of 2017. What had caused such devastation? After eliminating habitat degradation or loss — after all, it was a protected national forest — and pesticide use (which, in Puerto Rico, had fallen by more than 80% since 1969), Lister and his Mexican colleague Andres Garcia came to believe that climate change was the culprit, in part because the average maximum temperature in that rainforest has increased by four degrees Fahrenheit over those same four decades. Even though both scientific studies and anecdotal stories about what might be thought of as a kind of insectocide have, at this point, come only from Europe and North America, many entomologists are convinced that the collapse of insect populations is a worldwide phenomenon. As extreme weather events — fires, floods, hurricanes — begin to occur more frequently globally, "connecting the dots" across the planet has become a staple of climate-change communication to "help the public understand how individual events are part of a larger trend." Now, such thinking has to be transferred to the world of the living so, as in the case of plummeting insect populations and the creatures that feed on them, biological annihilation sinks in. At the same time, what's driving such death spirals in any given place — from pesticides to climate change to habitat loss — may differ, making biological annihilation an even more complex phenomenon than climate change.
The Edge of the Sea
The animal kingdom is composed of two groups: invertebrates, or animals without backbones, and vertebrates, which have them. Insects are invertebrates, as are starfish, anemones, corals, jellyfish, crabs, lobsters, and many more species. In fact, invertebrates make up 97% of the known animal kingdom. In 1955, environmentalist Rachel Carson's book The Edge of the Sea was published, bringing attention for the first time to the extraordinary diversity and density of the invertebrate life that occupies the intertidal zone. Even now, more than half a century later, you've probably never considered that environment — which might be thought of as the edge of the sea (or actually the ocean) — as a forest. And neither did I, not until I read nature writer Tim McNulty's book Olympic National Park: A Natural History some years ago. As he pointed out: "The plant associations of the low tide zone are commonly arranged in multistoried communities, not unlike the layers of an old-growth forest." And in that old-growth forest, the starfish (or sea star) rules as the top predator of the nearshore. In 2013, a starfish die-off — from a "sea-star wasting disease" caused by a virus — was first observed in Washington's Olympic National Park, though it was hardly confined to that nature preserve.
By the end of 2014, as Lynda Mapes reported in the Seattle Times, "more than 20 species of starfish from Alaska to Mexico" had been devastated. At the time, I was living on the Olympic Peninsula and so started writing about and, as a photographer, documenting that die-off (a painful experience after having read Carson's exuberant account of that beautiful creature). The following summer, though, something magical happened. I suddenly saw baby starfish everywhere. Their abundance sparked hope among park employees I spoke with that, if they survived, most of the species would bounce back. Unfortunately, that did not happen. "While younger sea stars took longer to show symptoms, once they did, they died right away," Mapes reported. That die-off was so widespread along the Pacific coast (at many sites, more than 99% of the sea stars died) that scientists considered it "unprecedented in geographic scale." The cause? Consider it the starfish version of a one-two punch: the climate-change-induced warming of the Pacific Ocean put stress on the animals while it made the virus that attacked them more virulent. Think of it as a perfect storm for unleashing such a die-off. It will take years to figure out the true scope of the aftermath, since starfish occupy the top of the food chain at the edge of the ocean and their disappearance will undoubtedly have cascading impacts, not unlike the vanishing of the insects that form the base of the food chain on land. Concurrent with the disappearance of the starfish, another "unprecedented" die-off was happening at the edge of the same waters, along the Pacific coast of the U.S. and Canada. It seemed to be "one of the largest mass die-offs of seabirds ever recorded," Craig Welch wrote in National Geographic in 2015. And many more have been dying ever since, including Cassin's auklets, thick-billed murres, common murres, fork-tailed petrels, short-tailed shearwaters, black-legged kittiwakes, and northern fulmars. That tragedy is still ongoing and its nature is caught in the title of a September article in Audubon magazine: "In Alaska, Starving Seabirds and Empty Colonies Signal a Broken Ecosystem." To fully understand all of this, the dots will again have to be connected across places and species, as well as over time, but the great starfish die-off is an indication that biological annihilation is now an essential part of life at the edge of the sea. The Annihilation of Vertebrates The remaining 3% of the kingdom Animalia is made up of vertebrates. The 62,839 known vertebrate species include fish, amphibians, reptiles, birds, and mammals. The term "biological annihilation" was introduced in 2017 in a seminal paper by scientists Gerardo Ceballos, Paul Ehrlich, and Rodolfo Dirzo, whose research focused on the population declines, as well as extinctions, of vertebrate species. "Our data," they wrote then, "indicate that beyond global species extinctions Earth is experiencing a huge episode of population declines and extirpations." If anything, the 148-page Living Planet Report published this October by the World Wildlife Fund International and the Zoological Society of London only intensified the sense of urgency in their paper. As a comprehensive survey of the health of our planet and the impact of human activity on other species, its key message was grim indeed: between 1970 and 2014, it found, monitored populations of vertebrates had declined in abundance by an average of 60% globally, with particularly pronounced losses in the tropics and in freshwater systems.
South and Central America suffered a dramatic loss of 89% of such vertebrates, while freshwater populations of vertebrates declined by a lesser but still staggering 83% worldwide. The results were based on 16,704 populations of 4,005 vertebrate species, which meant that the study was not claiming a comprehensive census of all vertebrate populations. It should instead be treated as a barometer of trends in monitored populations of them. What could be driving such an annihilatory wave to almost unimaginable levels? The report states that the main causes are "overexploitation of species, agriculture, and land conversion — all driven by runaway human consumption." It does, however, acknowledge that climate change, too, is a "growing threat." When it comes to North America, the report shows that the decline is only 23%. Not so bad, right? Such a statistic could mislead the public into thinking that the U.S. and Canada are in little trouble and yet, in reality, insects and other animals, as well as plants, are dying across North America in surprisingly large numbers. From My Doorstep to the World Across Time My own involvement with biological annihilation started at my doorstep. In March 2006, a couple of days after moving into a rented house in northern New Mexico, I found a dead male house finch, a small songbird, on the porch. It had smashed into one of the building's large glass windows and died. At the same time, I began to note startling numbers of dead piñon, New Mexico's state tree, everywhere in the area. Finding that dead bird and noting those dead trees sparked a desire in me to know what was happening in this new landscape of mine. When you think of an old-growth forest — and here I don't mean the underwater version of one but the real thing — what comes to your mind? Certainly not the desert southwest, right? The trees here don't even grow tall enough for that. An 800-year-old piñon may reach a height of 24 feet, not the 240 feet of a giant Sitka spruce of similar age in the Pacific Northwest. In the last decade, however, scientists have begun to see the piñon-juniper woodlands here as exactly that. I first learned this from a book, Ancient Piñon-Juniper Woodlands: A Natural History of Mesa Verde Country. It turns out that this low-canopy, sparsely vegetated woodland ecosystem supports an incredible diversity of wildlife. In fact, as a state, New Mexico has among the greatest diversity of species in the country. It's second in diversity of native mammals, third in birds, and fourth in overall biodiversity. Take birds. Trailing only California and Arizona, the state harbors 544 species, nearly half of the 1,114 species in the U.S. And consider this not praise for my adopted home, but a preface to a tragedy. Before I could even develop a full appreciation of the piñon-juniper woodland, I came to realize that most of the mature piñon in northern New Mexico had already died. Between 2001 and 2005, a tiny bark beetle known by the name of Ips confusus had killed more than 50 million of them, about 90% of the mature ones in northern New Mexico. This happened thanks to a combination of severe drought and rapid warming, which stressed the trees, while providing a superb environment for beetle populations to explode. And this, it turned out, wasn't in any way an isolated event. Multiple species of bark beetles were by then ravaging forests across the North American West. The black spruce, the white spruce, the ponderosa pine, the lodgepole pine, the whitebark pine, and the piñon were all dying.
In fact, trees are dying all over the world. In 2010, scientists from a number of countries published a study in Forest Ecology and Management that highlights global climate-change-induced forest mortality with data recorded since 1970. In countries ranging from Argentina and Australia to Switzerland and Zimbabwe, Canada and China to South Korea and Sri Lanka, the damage to trees has been significant. In 2010, trying to absorb the larger ecological loss, I wrote: "Hundreds of millions of trees have recently died and many more hundreds of millions will soon be dying. Now think of all the other lives, including birds and animals, that depended on those trees. What happened to them and how do we talk about that which we can't see and will never know?" In fact, in New Mexico, we are finally beginning to find out something about the size and nature of that larger loss. Earlier this year, Los Alamos National Laboratory ornithologist Jeanne Fair and her colleagues released the results of a 10-year bird study on the Pajarito Plateau of New Mexico's Jemez Mountains, where some of the worst piñon die-offs have occurred. The study shows that, between 2003 and 2013, the diversity of birds declined by 45% and bird populations, on average, decreased by a staggering 73%. Consider the irony of that on a plateau whose Spanish name, Pajarito, means "little bird." The piñon die-off that led to the die-off of birds is an example of connecting the dots across species and over time in one place. It's also an example of what writer Rob Nixon calls "slow violence." That "slowness" (even if it's speedy indeed on the grand calendar of biological time) and the need to grasp the annihilatory dangers in our world will mean staying engaged way beyond any normal set of news cycles. It will involve what I think of as long environmentalism. Let's return, then, to that dead finch on my porch. A study published in 2014 pointed out that as many as 988 million birds die each year in the U.S. by crashing into glass windows. Even worse, domestic and feral cats kill up to 2.4 billion birds and 12.3 billion small mammals annually in this country. In Australia and Canada, two other places where such feline slaughters of birds have been studied, the estimated numbers are 365 million and 200 million, respectively — another case of connecting the dots across places and species when it comes to the various forms of biological annihilation underway on this planet. Those avian massacres, one the result of modern architecture and our desire to see the outside from the inside, the other stemming from our urge for non-human companionship, indicate that climate change is but one cause of a planet-wide trend toward biological annihilation. And this is hardly a contemporary story. It has a long history, including, for instance, the mass killing of Arctic whales in the seventeenth century, which generated so much wealth that it helped make the Netherlands into one of the richest nations of that time. In other words, Arctic whaling proved to be an enabler of the Golden Age of the Dutch Republic, the era when Rembrandt and Vermeer made paintings still appreciated today. The large-scale massacre and near extinction of the American bison (or buffalo) in the nineteenth century, to offer a more modern example, paved the way for white settler colonial expansion into the American West, while destroying Native American food security and a way of life. As a U.S. Army colonel put it then, "Kill every buffalo you can!
Every buffalo dead is an Indian gone.” Today, such examples have not only multiplied drastically but are increasingly woven into human life and life on this planet in ways we still hardly notice. These, in turn, are being exacerbated by climate change, the human-induced warming of the world. To mitigate the crisis, to save life itself, would require not merely the replacement of carbon-dirty fossil fuels with renewable forms of energy, but a genuine reevaluation of modern life and its institutions. In other words, to save the starfish, the piñon, the birds, and the insects, and us in the process, has become the most challenging and significant ethical obligation of our increasingly precarious time.
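As a rough way to read the cumulative figures cited in this piece (the 76% drop in flying-insect biomass over 27 years in the German reserves, and the 60% average decline in monitored vertebrate populations between 1970 and 2014), they can be converted into approximate average annual rates. The short sketch below, in Python, assumes a constant year-over-year decline purely for illustration; real populations do not fall that smoothly.

```python
# Back-of-the-envelope arithmetic only: converts the cumulative declines quoted
# above into the average annual rates they imply, assuming (an assumption made
# here, not by the studies) that the decline was the same every year.

def annual_decline(total_decline: float, years: int) -> float:
    """Average yearly decline implied by a cumulative decline over `years` years."""
    remaining = 1.0 - total_decline
    return 1.0 - remaining ** (1.0 / years)

print(f"{annual_decline(0.76, 27):.1%}")  # German flying insects: roughly 5.1% per year
print(f"{annual_decline(0.60, 44):.1%}")  # Living Planet Index, 1970-2014: roughly 2.1% per year
```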
<urn:uuid:2dbc9231-e755-4f8e-8d54-0038310fef9e>
CC-MAIN-2021-43
https://eddierockerz.com/2018/12/13/biological-annihilation-a-planet-in-loss-mode-by-subhankar-banerjee/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.56/warc/CC-MAIN-20211020024111-20211020054111-00311.warc.gz
en
0.954581
4,019
3.1875
3
- Summarize nutritional requirements and dietary recommendations for elderly adults. - Discuss the most important nutrition-related concerns during the senior years. - Discuss the influence of diet on health and wellness in old age. Beginning at age fifty-one, requirements change once again and relate to the nutritional issues and health challenges that older people face. After age sixty, blood pressure rises and the immune system may have more difficulty battling invaders and infections. The skin becomes more wrinkled, and hair turns gray or white or falls out, resulting in thinning hair. Older adults may gradually lose an inch or two in height. Also, short-term memory might not be as keen as it once was (Beverly McMillan, Illustrated Atlas of the Human Body [Sydney, Australia: Weldon Owen, 2008], 260). In addition, many people suffer from serious health conditions, such as cardiovascular disease and cancer. Being either underweight or overweight is also a major concern for the elderly. However, many older adults remain in relatively good health and continue to be active into their golden years. Good nutrition is often the key to maintaining health later in life. In addition, the fitness and nutritional choices made earlier in life set the stage for continued health and happiness. Older Adulthood (Ages Fifty-One and Older): The Golden Years An adult's body changes during old age in many ways, including a decline in hormone production, muscle mass, and strength. Also in the later years, the heart has to work harder because each pump is not as efficient as it used to be. Kidneys are not as effective in excreting metabolic products such as sodium, acid, and potassium, which can alter water balance and increase the risk for over- or underhydration. In addition, immune function decreases and there is lower efficiency in the absorption of vitamins and minerals. Older adults should continue to consume nutrient-dense foods and remain physically active. However, deficiencies are more common after age sixty, primarily due to reduced intake or malabsorption. The loss of mobility among frail, homebound elderly adults also impacts their access to healthy, diverse foods. Due to reductions in lean body mass and metabolic rate, older adults require less energy than younger adults. The energy requirements for people ages fifty-one and over are 1,600 to 2,200 calories for women and 2,000 to 2,800 calories for men, depending on activity level. The decrease in physical activity that is typical of older adults also influences nutritional requirements. The AMDRs for carbohydrates, protein, and fat remain the same from middle age into old age. Older adults should substitute unrefined carbohydrates, such as whole grains and brown rice, for refined ones. Fiber is especially important in preventing constipation and diverticulitis, and may also reduce the risk of colon cancer. Protein should be lean, and healthy fats, such as omega-3 fatty acids, are part of any good diet. An increase in certain micronutrients can help maintain health during this life stage. The recommendations for calcium increase to 1,200 milligrams per day for both men and women to slow bone loss. Also, to help protect bones, vitamin D recommendations increase to 10–15 micrograms per day for men and women. Vitamin B6 recommendations rise to 1.7 milligrams per day for older men and 1.5 milligrams per day for older women to help lower levels of homocysteine and protect against cardiovascular disease.
As adults age, the production of stomach acid can decrease and lead to an overgrowth of bacteria in the small intestine. This can affect the absorption of vitamin B12 and cause a deficiency. As a result, older adults need more B12 than younger adults and require an intake of 2.4 micrograms per day, which helps promote healthy brain functioning. For elderly women, higher iron levels are no longer needed postmenopause and recommendations decrease to 8 milligrams per day. People over age fifty should eat foods rich in all of these micronutrients. Nutritional Concerns for Older Adults Dietary choices can help improve health during this life stage and address some of the nutritional concerns that many older adults face. In addition, there are specific concerns related to nutrition that affect adults in their later years. They include medical problems, such as disability and disease, which can impact diet and activity level. For example, dental problems can lead to difficulties with chewing and swallowing, which in turn can make it hard to maintain a healthy diet. The use of dentures or the preparation of pureed or chopped foods can help solve this problem. There also is a decreased thirst response in the elderly, and the kidneys have a decreased ability to concentrate urine, both of which can lead to dehydration. At about age sixty, taste buds begin to decrease in size and number. As a result, the taste threshold is higher in older adults, meaning that more of the same flavor must be present to detect the taste. Many elderly people lose the ability to distinguish between salty, sour, sweet, and bitter flavors. This can make food seem less appealing and decrease the appetite. An intake of foods high in sugar and sodium can increase due to an inability to discern those tastes. The sense of smell also decreases, which impacts attitudes toward food. Sensory issues may also affect digestion because the taste and smell of food stimulates the secretion of digestive enzymes in the mouth, stomach, and pancreas. A number of gastrointestinal issues can affect food intake and digestion among the elderly. Saliva production decreases with age, which affects chewing, swallowing, and taste. Digestive secretions decline later in life as well, which can lead to atrophic gastritis (inflammation of the lining of the stomach). This interferes with the absorption of some vitamins and minerals. Reduction of the digestive enzyme lactase results in a decreased tolerance for dairy products. Slower gastrointestinal motility can result in more constipation, gas, and bloating, and can also be tied to low fluid intake, decreased physical activity, and a diet low in fiber, fruits, and vegetables. Some older adults have difficulty getting adequate nutrition because of the disorder dysphagia, which impairs the ability to swallow. Any damage to the parts of the brain that control swallowing can result in dysphagia; stroke is therefore a common cause. Dysphagia is also associated with advanced dementia because of overall brain function impairment. To assist older adults suffering from dysphagia, it can be helpful to alter food consistency. For example, solid foods can be pureed, ground, or chopped to allow safer, more successful swallowing. This decreases the risk of aspiration, which occurs when food flows into the respiratory tract and can result in pneumonia. Typically, speech therapists, physicians, and dietitians work together to determine the appropriate diet for dysphagia patients.
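To see the targets for this life stage side by side, the energy and micronutrient recommendations summarized above can be collected into a simple lookup table and compared with an estimated intake, as in the illustrative sketch below. It is not a clinical tool; the field names and the simple range check are assumptions made for the example, while the numbers restate the values given in this section.

```python
# Illustrative only: restates the recommendations for adults ages 51 and over
# given in this section. Field names and the range check are assumptions.
DAILY_TARGETS_51_PLUS = {
    "calcium_mg": 1200,                           # men and women, to slow bone loss
    "vitamin_d_mcg": (10, 15),                    # range given in the text
    "vitamin_b6_mg": {"men": 1.7, "women": 1.5},
    "vitamin_b12_mcg": 2.4,
    "iron_mg_postmenopausal_women": 8,
}

ENERGY_KCAL_51_PLUS = {"women": (1600, 2200), "men": (2000, 2800)}

def energy_in_range(sex: str, kcal: int) -> bool:
    """Check an estimated daily energy intake against the range for ages 51+."""
    low, high = ENERGY_KCAL_51_PLUS[sex]
    return low <= kcal <= high

print(energy_in_range("women", 1800))  # True: within the 1,600-2,200 kcal range
```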
Obesity in Old Age Similar to other life stages, obesity is a concern for the elderly. Adults over age sixty are more likely to be obese than young or middle-aged adults. As explained throughout this chapter, excess body weight has severe consequences. Being overweight or obese increases the risk for potentially fatal conditions that can afflict the elderly. They include cardiovascular disease, which is the leading cause of death in the United States, and Type 2 diabetes, which causes about seventy thousand deaths in the United States annually (Centers for Disease Control, National Center for Health Statistics, "Deaths and Mortality," last updated January 27, 2012, http://www.cdc.gov/nchs/fastats/deaths.htm). Obesity is also a contributing factor for a number of other conditions, including arthritis. For older adults who are overweight or obese, dietary changes to promote weight loss should be combined with an exercise program to protect muscle mass. This is because dieting reduces muscle as well as fat, which can exacerbate the loss of muscle mass due to aging. Although weight loss among the elderly can be beneficial, it is best to be cautious and consult with a health-care professional before beginning a weight-loss program. The Anorexia of Aging In addition to concerns about obesity among senior citizens, being underweight can be a major problem. A condition known as the anorexia of aging is characterized by poor food intake, which results in dangerous weight loss. This major health problem among the elderly leads to a higher risk for immune deficiency, frequent falls, muscle loss, and cognitive deficits. Reduced muscle mass and physical activity mean that older adults need fewer calories per day to maintain a normal weight. It is important for health care providers to examine the causes of anorexia of aging among their patients, which can vary from one individual to another. Understanding why some elderly people eat less as they age can help health-care professionals assess the risk factors associated with this condition. Decreased intake may be due to disability or a lack of motivation to eat. Also, many older adults skip at least one meal each day. As a result, some elderly people are unable to meet even reduced energy needs. Nutritional interventions should focus primarily on a healthy diet. Remedies can include increasing the frequency of meals and adding healthy, high-calorie foods (such as nuts, potatoes, whole-grain pasta, and avocados) to the diet. Liquid supplements between meals may help to improve caloric intake (Morley, J. E., "Anorexia of Aging: Physiologic and Pathologic," Am J Clin Nutr 66 [1997]: 760–73, www.ajcn.org/content/66/4/760.full.pdf). Health care professionals should consider a patient's habits and preferences when developing a nutritional treatment plan. After a plan is in place, patients should be weighed on a weekly basis until they show improvement. Many older people suffer from vision problems, including loss of vision. Age-related macular degeneration is the leading cause of blindness in Americans over age sixty (American Medical Association, Complete Guide to Prevention and Wellness [Hoboken, NJ: John Wiley & Sons, Inc., 2008], 413). This disorder can make food planning and preparation extremely difficult, and people who suffer from it often must depend on caregivers for their meals. Self-feeding also may be difficult if an elderly person cannot see his or her food clearly.
Friends and family members can help older adults with shopping and cooking. Food-assistance programs for older adults (such as Meals on Wheels) can also be helpful. Diet may help to prevent macular degeneration. Consuming colorful fruits and vegetables increases the intake of lutein and zeaxanthin. Several studies have shown that these antioxidants provide protection for the eyes. Lutein and zeaxanthin are found in green, leafy vegetables such as spinach, kale, and collard greens, and also corn, peaches, squash, broccoli, Brussels sprouts, orange juice, and honeydew melon (American Medical Association, Complete Guide to Prevention and Wellness [Hoboken, NJ: John Wiley & Sons, Inc., 2008], 415). Elderly adults who suffer from dementia may experience memory loss, agitation, and delusions. One in eight people over the age of sixty-four and almost half of all people over eighty-five suffer from Alzheimer's, which is the most common form of dementia. These conditions can have serious effects on diet and nutrition as a person increasingly becomes incapable of caring for himself or herself, which includes the ability to buy and prepare food, and to self-feed. Longevity and Nutrition The foods you consume in your younger years influence your health as you age. Good nutrition and regular physical activity can help you live longer and healthier. Conversely, poor nutrition and a lack of exercise can shorten your life and lead to medical problems. The right foods provide numerous benefits at every stage of life. They help an infant grow, an adolescent develop mentally and physically, a young adult achieve his or her physical peak, and an older adult cope with aging. Nutritious foods form the foundation of a healthy life at every age. As adults age, physical changes impact nutrient needs and can result in deficiencies. The daily energy requirements for adults ages fifty-one and over are 1,600 to 2,200 calories for women and 2,000 to 2,800 calories for men, depending on activity level. Older adults are more susceptible to medical problems, such as disability and disease, which can impact appetite, the ability to plan and prepare food, chewing and swallowing, self-feeding, and general nutrient intake. A nutrient-dense, plant-based diet can help prevent or support the healing of a number of disorders that impact the elderly, including macular degeneration and arthritis. - Revisit the predictions you made at the beginning of this chapter about how nutrient needs might change as a healthy young adult matures into old age. Which predictions were correct? Which were incorrect? What have you learned?
<urn:uuid:287c834c-852c-4584-91d2-e62fadd97428>
CC-MAIN-2021-43
https://med.libretexts.org/Courses/Dominican_University/DU_Bio_1550%3A_Nutrition_(LoPresto)_OLD/13%3A_From_Childhood_to_the_Elderly_Years/13.8%3A_Old_Age_and_Nutrition
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00670.warc.gz
en
0.945579
2,686
3.09375
3
Development of Niger Valley peoples (credit: AncientAfricanHistory.com): 1. Ounjougou (9400 BC) - early pottery; 2. Dabous (8000 BC) - largest ancient petroglyphs; 3. Gobero (7550 BC) - oldest graveyard, early aqualithic culture; 4. Dufuna (6500 BC) - second oldest dugout canoe; 5. Tassili N'Ajjer (6000 BC) - early boats, domesticated cattle, horses; 6. Oued Mertoutek (3000 BC) - early writing; 7. Lower Tilemsi Valley (2500 BC) - oldest domesticated millet; 8. Dahr Tichit (2000 BC) - early city; 9. Lejja (2000 BC) - oldest iron smelting; 10. Nok (1500 BC) - early iron smelting, terracotta statues. Niger Valley Civilization. The Niger Valley comprises the 2,597-mile Niger River's watershed and its environs, a vast region spanning the lush Delta region of southern Nigeria, north to the arid highlands of southern Algeria, and west to the lush highlands of Guinea. Contrary to popular belief, this region of West Africa has been populated for tens of thousands of years. The Niger Valley, in particular, lies south of Jebel Irhoud, Morocco, where excavators recently discovered the world's earliest early modern humans (300,000 BC), and north of Iwo Eleru, Nigeria, where 13,000-year-old human remains have been found. And despite widespread ignorance of West African history, there is an abundance of archaeological evidence and written accounts available to help us map the development of this region's advanced ancient and medieval civilizations. Early Niger Valley cultures were the first to practice ceremonial burials, domesticate millet, and smelt iron, and were among the first to write, make pottery, domesticate cattle, and use boats for travel. Their markets and strong trading networks gave rise to rich empires and city-states whose stone, earthen, and walled cities were intellectual hubs of the medieval world. Early Pottery. Archaeologists from the University of Geneva dated ceramic sherds found at the Ravin du Hibou site at Ounjougou, Mali, to 9400 BC, making them among the earliest ceramics in the world (only pottery found in Sudan, Japan, China, and Siberia, generally dated to 10,000-12,000 BC, is thought to be older). Situated in the valley of the Yamé (west of the Inner Niger Delta), the ancient, so-called Ounjougou culture lived where the Dogon people reside today. And similar to their current use of ceramics to boil grains and cereal, archaeologists posit that the ancient pottery was used for the same purposes. The Dabous Giraffes in Niger (8000 BC) (credit: Bradshaw Foundation). World's Largest Ancient Petroglyphs. The Tenere Desert in Niger is home to over 800 ancient rock carvings that depict humans and animals, including the largest ancient petroglyphs in the world. At Dabous, about 150 miles north of Agadez, lie detailed rock carvings of life-size giraffes, the largest of which is 18 feet long (adult giraffes are usually 15-20 feet tall). Excavators date the petroglyphs to 8000 BC, an era when the Tenere Desert was likely greener (as further described below) and more hospitable for both giraffes and humans. Early Aqualithic and Pastoral Civilization; Oldest Graveyard. Farther south at Gobero, also in Niger's Tenere Desert, archaeologists from the University of Chicago uncovered the remains of an advanced civilization that dates to 8000 BC and continuously inhabited the site for nearly 5,000 years (the so-called "Kiffian Culture").
This site offers one of the earliest instances of domesticating cattle and practicing ceremonial burials, with the earliest graveyard in the world. Moreover, numerous ancient harpoons, fish hooks, and pottery with wavy lines, which archaeologists generally associate with prehistoric populations that largely fished, were among the earliest artifacts identified at the site. There were also the 8,250-year-old remains of catfish, tilapia, and hippos, further evidence that the climate was once much more humid than today. Pottery shard with wavy lines (8000 BC) (credit: Paul Sereno). One of many harpoons uncovered at Gobero (8000 BC) (credit: Paul Sereno). The Dufuna Canoe (6500 BC) being hoisted out of the ground; currently housed at the National Museum at Damaturu, Nigeria (credit: Peter Breunig). Second Oldest Known Boat in the World. Heavy fishing and riparian hunting led early Niger Valley people to boating over 8,500 years ago. Among the oldest boats ever discovered anywhere in the world, the so-called Dufuna Canoe (6500 BC) was partially unearthed by a Fulani cattle herdsman digging a well near Dufuna, Nigeria. This site is not far from the Komadugu Gana River, a tributary of Lake Chad, which would have been much larger in that era. Measuring 27.6 feet long, the Dufuna Canoe is nearly three times the size of, and carved in a much more sophisticated manner than, the so-called Pesse canoe, which was found in the Netherlands and is believed to be the only boat older than the Dufuna Canoe. The Dufuna discovery demonstrates ancient Nigerians' early use of advanced technology and possibly maritime trade. Early Depictions of Boats and Domesticated Cattle. Rock art depicting boats and domesticated cattle at Tassili N'Ajjer, Algeria (6000 BC) (credit: Gruban). Depictions of boats also appear around 6000 BC in Saharan rock art at Tassili N'Ajjer in southern Algeria, which hosts roughly 15,000 ancient rock drawings, some dating as far back as 12,000 BC, though most date to 6000 BC. Such boats appear alongside images of people using bows and arrows, suggesting boats may have been used for hunting or early naval warfare. Recent studies have also shown that the Sahara may alternate between 20,000-year wet and dry cycles, so the desert would have been replete with rivers and lakes during the time the rock paintings were drawn. The many depictions of longhorn domesticated cattle, which feed off grass, provide further evidence that this desert was once green. Such rock paintings of humpless, longhorn cattle predate all others outside Somalia (i.e., Laas Geel) and are generally considered by archaeologists to represent the domesticated Bos taurus species. Such depictions demonstrate that domesticated cows were present in Africa as early as or earlier than in Asia. Early Agriculture. West Africa is home to the world's oldest evidence of domesticated millet, attesting to the region's long history of agriculture. Dating back 4,500 years, the cultivated millet was identified by University College London archaeologists at a site in Mali's Lower Tilemsi Valley and is centuries older than examples found elsewhere in Africa and Asia.
Farther west, excavators identified more evidence of domesticated millet dating to 2000 BC at Dahrs Tichit and Walata in southern Mauritania and, hundreds of miles away, in Birimu, Ghana, which reflects the great extent of neolithic farming in West Africa. In addition to millet, early people in this region widely cultivated rice. And recent analyses of rice genomes make it clear that African cultivated rice (Oryza glaberrima) was domesticated independently from the more globally popular Asian rice (Oryza sativa). Researchers also pinpointed the Inner Niger Delta (i.e., the section of the river between Timbuktu and Djenne-Djenno) as the most likely birthplace of African rice at least 2,000 years ago, with the oldest evidence from the ancient city of Djenne-Djenno in Mali. Such findings break long-held misbeliefs that African rice was derived from Asian rice brought to West Africa much later in history. A Nok terracotta figure (195 BC). Oldest Iron Smelting and Early Terracotta & Bronze Sculpture. Ancient Niger Valley civilizations widely practiced advanced metallurgy and sculpting. In Lejja, a town in the Niger Delta region of southern Nigeria, archaeologists have radiocarbon dated iron-smelting furnaces to 2000 BC, making them the oldest in the world. Evidence of iron smelting and terracotta ceramics has also been discovered north of the confluence of the Niger and Benue Rivers in Nigeria, where the so-called Nok culture thrived as early as 1500 BC. In the Taruga valley alone, 13 ancient iron-smelting furnaces, along with ancient iron tools and weaponry, have been identified. And in Taruga, Samun Dukiya, Jos, Sokoto, and Nok in particular, numerous intricate terracotta statues have been uncovered, some dating as far back as 1000 BC--centuries older than any known Greek sculpture. The Nok statues depict men and women wearing ornate costumes and jewelry--some on horseback, others in symbolic and abstract gestures and poses. At some point the popularity of sculpting in clay gave way to iron and bronze. At numerous archaeological sites in southeast Nigeria, such as Igbo-Ukwu, there are hundreds of examples of 1,200-year-old, intricate bronze sculptures made using advanced methods that Europeans did not learn until the 1500s. Terracotta statue of a woman (200 BC) (credit: Siyajkak). Terracotta statue of a man (200 BC). Bronze vessel in the shape of a conical shell, uncovered at Igbo-Ukwu (800) (credit: Ochiwar). Bronze ornament found at Igbo-Ukwu (800) (credit: Ochiwar). Famous Ife heads made of brass and designed in a naturalistic style (1300s) (credit: Trustees of the British Museum). In addition to iron and bronze, sculpting in other metals such as copper, lead, and zinc, and alloys such as brass (copper/zinc), gained popularity during medieval times, especially in the industrial cities of the lower Niger Valley. Large-scale production of metallic sculptures and other objects, like farming and hunting tools and weaponry, fueled the growth and influence of the chief Yoruba city of Ife and the Edo city of Benin. In this part of the Niger Valley, in particular, metalworking was considered to be a ritualistic practice, and blacksmiths were highly valued and honored professionals.
Major cities in the Niger Valley (credit: AncientAfricanHistory.com). Medieval Cities and the Rise of Empires. Other ancient agrarian and industrial Niger Valley towns grew into rich centers of trade as a result of their strategic location between the mineral mines to the south and the salt mines in the Sahara. While some major cities, particularly Kano, Katsina, and Zaria in Hausaland, kept their independence, other major cities were united under the first documented empires in the Niger Valley, most notably Wagadu or Ghana (700-1240), Benin or Edo (1000s-1897), Mali (1235-1670), and Songhai (1464-1591). In fact, pottery from Egypt and from as far away as China was found while excavating the Wagadu capital city of Gao--a testament to its significant trade networks (archaeologists at other sites, such as Yikpabongo in northern Ghana (the modern country, not to be confused with the Ghana Empire), have even discovered evidence of the medieval use of bananas and pine, which are not native to the Niger Valley). The wealth of this region was well recorded throughout the medieval world. Although originally called "Wagadu" by founding Emperor Kaya Maghan (700), the more widely known name, "Ghana", comes from Iraqi scholar Ibrahim al-Fazari (777), who called it the "land of gold." Al-Hasan ibn Ahmad al-Hamdani (893-945 AD), a Yemeni geographer and historian, also described Ghana as having the "richest gold mines on earth." Mansa Musa, emperor of the Mali Empire, on the Catalan Atlas (1375). Al Bakri, a historian and geographer from Cordoba (a former caliphate in Spain), likewise said, "On every donkey-load of salt the King of Ghana levies one golden dinar when it is brought into his country and two dinars when it is sent out." And Mali Emperor Mansa Musa I (1280-1337) was famously depicted with a gold crown and coin in the Catalan Atlas (possibly created by Iberian cartographer Abraham Cresques in 1375) and is widely considered the wealthiest historical figure in the world. Despite their mineral wealth, Niger Valley cities would become even more famous for their huge collections and trade of scholarly books, particularly Timbuktu, which was described by the Malian scholar Mahmud Kati (1468-1552) in his Tarikh al Fettash as a city with "solid institutions, political liberties, purity of morals...courtesy and generosity towards students and scholars" -- making it an international intellectual mecca unparalleled in the medieval world.
Stone ruins of Gao, a capital of the Wagadu empire (900) (credit: Mamadou Cisee, Shoichiro Takezawa). Stone ruins of Kumbi Saleh, the first capital of the Wagadu empire (700) (credit: Serge Robert). Mosque of Tichit, Mauritania (1100) (credit: Ville de Tichitt, Mauritanie). Medieval stone buildings in Tichit (1000), a major Soninke city that was settled by 2000 BC and was part of the Wagadu, Songhai, and Mali empires (credit: Ville de Tichitt, Mauritanie). Gate of the original earthen wall encircling Kano, Nigeria (1000). Gates of the original earthen wall that encircled Zaria, Nigeria (1000). Reconstructed gateway to Gidan Rumfa, the emir's royal residence in Kano (1475). A busy street in front of medieval buildings in Kano, Nigeria (founded in 999). Second Oldest University (outside of the Nile Valley). University of Sankore, with the medieval city of Timbuktu in the background. Timbuktu is also the site of the world's second oldest university (excluding ancient Nile Valley temples), established in 989 as the University/Mosque of Sankore Madrasah. At its height, the university enrolled 25,000 students and housed as many as 700,000 books--more than anywhere else in the medieval world. Contrary to popular belief, the pyramid-like structure pictured above is made of cut stone (not mud brick), covered with a mud stucco that is periodically stripped and renewed--a practice that keeps the interior cool during the day and warm at night. Medieval manuscripts from the Djenne Library (credit: Sophie Sarin). Largest Medieval Libraries; Early Literary Industry. Timbuktu and other Niger Valley cities such as Gao, Kano, and Djenne remained intellectual and literary hubs for centuries, attracting scholars from all over the world. At their height (1200-1700), such cities were widely known for their large collections of mathematical, astronomical, religious, poetic, legal, and administrative texts, including over 700,000 that have come to light in recent years. Timbuktu's literary culture and industry, in particular, are thoroughly described in medieval literature. Mohammed al-Wazzan al-Zayati (aka Leo Africanus), who visited Timbuktu in 1509, wrote that "Many manuscripts...are sold. Such sales are more profitable than any other goods." In the Tarikh al-Sudan (1600), a book that chronicles the city's history, Timbuktu is described as "a refuge of scholarly and righteous folk, a haunt of saints and ascetics, and a meeting place for caravans and boats." Indeed, Mansa Musa I purchased books here and eventually constructed the Great Mosque of Timbuktu in 1326. A majority of the so-called "Timbuktu manuscripts" and other regional books were written in the 1300s-1600s in West African Ajami script, which has been used since at least the 11th century to write certain West African languages, including Kanuri, Hausa, Fulani, Wolof, and Yoruba. Although derived from the Arabic script, the two scripts differ in certain respects.
<urn:uuid:37258230-13de-46c6-877b-3670332796c6>
CC-MAIN-2021-43
https://juniorg8.com/niger-valley-civilization/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00150.warc.gz
en
0.951003
3,743
3.265625
3
It's safe to say we all know diversity and inclusion (D&I) matters. McKinsey research has shown that companies with greater gender diversity are 21% more likely to outperform others and those that are ethnically diverse are 33% more likely to outperform others. Why is this? With an increase in migrant workers pursuing careers overseas, coupled with the jump in demand for skilled workers, diversity has now become a key driver for economic growth. The benefits of a diverse workforce help to increase productivity for businesses wishing to succeed in the global market. And yet, according to SurveyMonkey, 26% of employees don't feel like they belong at their current company. This is an alarmingly high percentage of people. Although company intentions may be good, it often takes significant work to implement a culture that truly embraces and encourages diversity and inclusion. What can you do as a business leader to enact change? Although as a society we arguably have a long way to go, businesses can do their part by committing not only to embracing diversity but also to actively encouraging inclusion within their teams. So why is it important to build a culture that champions diversity and inclusion in the workplace? Let's start with a few definitions first. What is diversity and inclusion (D&I)? Although the two go hand in hand, diversity and inclusion are two separate concepts with equal importance. What is Diversity? Diversity refers to any element that can be used to differentiate groups or individuals. In essence, it's all about empowering people to respect and appreciate their differences such as gender, age, ethnicity, sexual orientation, religion, disability and education. Diversity provides a safe environment for these differences to be celebrated and nurtured. It focuses on understanding what makes us unique and why that is important. This allows us to explore how such valuable dimensions of diversity can be leveraged both in the community and the workplace. Each unique individual within a business provides an enriched and diverse set of experiences, perspectives and ideas. The benefits of diversity can only be reaped once we acknowledge how important these differences are to a person's identity. Once a person feels they're respected and valued regardless of their background, they're far more likely to work productively. All in all, businesses that embrace diversity are better equipped for success. Now, a lot of Australian companies can probably say they're doing reasonably well at diversity; it'd be hard not to with our nation's remarkable multiculturalism. In fact, Australia is one of the most ethnically diverse countries in the world, with the 2016 census showing that 26% of us were born overseas and 49% have at least one parent born overseas. But it's not enough to have a diverse workforce; you have to ensure that these diverse voices are seen and heard across the business. In fact, Deloitte argues diversity without inclusion is not enough. For a business to be successful, the two must be combined. As mentioned earlier, there are a number of rich dimensions of diversity. Many are self-evident while others can be more inherent, such as educational training or personality types like extroverts and introverts. Inclusive workplace cultures are an integral part of making employees feel valued and respected as individuals. While diversity is about appreciating differences, inclusion focuses on encouraging those differences to work collaboratively.
In a business context, inclusion involves a team effort in which various backgrounds and levels of experience are socially and culturally accepted. It’s important for businesses to extend beyond just acceptance, to also embrace and treat such differences equally. Participation in meetings, office layouts, access to information and open invitations to social events are all factors of an inclusive culture. The process of inclusion helps to engage people in their work and make them feel like an important member of an organisation. This kind of culture can help to create higher-performing teams with strong motivation and morale. The great ‘inclusion’ debate The definition of inclusion is often up to individual interpretation. There is significant debate around what this word means (especially in the context of a business), as a person may be included within a group, but not necessarily feel a sense of belonging. Inclusion without belonging can be perceived as tokenism, which can have a negative effect on diversity. Employees from underrepresented communities or backgrounds may feel like a token hire for businesses wishing to enhance their diversity profile. Here minorities often end up feeling as though they’re ‘on the team’ but not necessarily ‘a part of the team’. You’ll find retention quickly becomes an issue as people seek places they’ll feel respected and appreciated. A more holistic definition of inclusion means to treat people equitably and with respect. The term brings about a feeling of connectedness, without fear of embarrassment or rejection. In a nutshell, inclusion is about belonging. What is inclusive leadership? Fostering a diverse and inclusive workplace culture starts with leadership. Each employee wants to feel as though their unique attributes and experiences are valuable to the team, so it is up to leaders to ensure this. Deloitte recently identified six signature traits of an inclusive leader, all of which are interrelated: - Commitment: Inclusive leaders are dedicated to D&I as it aligns with their own personal values. They are committed to articulating their beliefs and feel responsible for change. - Courage: they are humble by nature, not afraid to speak out and encourage others to contribute their unique ideas. - Cognizance of bias: they are aware of their own blind spots and work hard to quash any internal biases. They also work hard to prevent bias within others. - Curiosity: they are open-minded, curious about others, listen without judgement and always seek to understand. - Culturally intelligent: they learn and recognise the importance of other cultures. - Collaboration: they empower people to bring their differences together and work collaboratively. They create a safe space for diversity of thought. Source: Deloitte Review What are the benefits of D&I? There are endless benefits to investing in D&I, so we’ve handpicked our top four to help inspire and motivate change: 1. Diversity of thought and perspective 💭 Different backgrounds bring different perspectives. Each individual has their own unique characteristics with a diverse set of skills and lived experiences. These differences in opinion, values and beliefs are useful for any business discussion. When it comes to making decisions, you want your teams to provide ideas, feedback and suggestions. Diverse perspectives are highly beneficial for innovation as it allows for a greater understanding of how different people think and feel. 
When designing a product or service you want your team to be able to put themselves in the customers’ shoes to provide the best possible solution for a problem. If you have a largely homogenous team that think alike, you lack the ability to experiment and empathise with various customer profiles. All in all, diversity of thought leads to better decision making and design. 2. Increased creativity and innovation 🚀 Exposure to a variety of diverse perspectives and worldviews has been linked to greater creativity. When you put a group of people together who can view the same problem from multiple angles, you have a range of creative ideas to choose from. Diverse teams take a collaborative approach to creativity. Homogenous teams are less likely to experiment and innovate quickly or creatively as opposed to a heterogeneous team of people. This is because diverse ways of thinking provide a safe space for positive conflict to flourish. By bringing various different expertise to the table, ideas can freely bounce off one another to help get those creative juices flowing. This also leads to faster problem-solving. The quicker you can provide new and exciting solutions to the market, the stronger your competitive advantage becomes. While it may feel comfortable to work with people who share the same ideas, values and experiences as you, this can be extremely counterproductive when brainstorming innovative solutions. Conformity has been known to discourage innovative thinking. Enriching your team with representatives of different genders, races, and skills is key for boosting your intellectual potential. 3. Higher employee engagement 💡 This point is definitely one of our favourites. At Employment Hero, we are all about creating an engaging and welcoming workplace for employees. There is no hiding the fact that a more diverse and inclusive workforce leads to higher staff engagement. The link is pretty straightforward – when people feel included, they are also more motivated to work hard. You want your teams to take pride in where they work and be excited about working for a business who values their uniqueness. A sense of belonging is crucial for engagement. Every staff member wants to know they have equal opportunity and access to information and support. Let’s not forget that the more engaged, happy and motivated an employee is, the more likely they are to stay. When businesses commit to diversity and inclusion programs, their employees are 80% more likely to rank their employer as high performing. It’s safe to say that companies with rich D&I initiatives experience greater profits and reduced turnover. 4. Strong brand reputation 🏆 As a business, your brand is everything. It’s how your customers and employees see and perceive you within the market. Companies dedicated to building a more diverse and inclusive workplace are considered to be socially responsible. For customers, this means they can better relate to your brand and develop a sense of loyalty. Prospective candidates are also more likely to join the team if they feel like their unique qualities are valued. This helps to bolster your retention and recruitment efforts. It’s easy to understand why top talent would choose a company which showcases high levels of diversity and inclusivity. Job seekers are more able to see something within your company that resonates with them, making it more enticing to apply. 
A survey conducted by Glassdoor found 67% of job seekers believed a diverse workforce to be one of the most important factors when evaluating companies and job offers. Why is D&I important? As Australia grapples with the economic fallout caused by the COVID-19 pandemic, research highlights the importance of implementing D&I initiatives to bolster business performance. A new report released by the Bankwest Curtin Economics Centre (BCEC) and the Workplace Gender Equality Agency (WGEA) in June this year suggests women in senior leadership can help drive better company profitability. In fact, the report linked stronger female representation in senior leadership to a 6.6% increase in the market value of ASX-listed companies – the equivalent of AUD 104.7 million. However, women currently represent only 17.1% of company CEOs. Not to mention 29.8% of companies have zero female representation on their Board. Report author and BCEC Principal Research Fellow Associate Professor Rebecca Cassells believes that "When businesses are looking to a post COVID-19 world, our research shows that having a female CEO has the potential to help companies navigate through the crisis". McKinsey also conducted several year-long studies on diversity in the workplace. Their latest report, Diversity Matters, focuses on 366 publicly listed companies across multiple industries within Canada, Latin America, the United Kingdom and the United States. The data suggest that companies in the bottom quartile for both gender and ethnic diversity are less likely to achieve above-average financial returns. Source: McKinsey diversity database The numbers and statistics highlight the ongoing D&I work that needs to be done, even as D&I programs continue to gain traction. It's not surprising that businesses with strong diversity initiatives are outperforming those without. The majority of organisations today recognise the opportunity diverse and inclusive leadership provides and represents. This also rings true for your talent pipelines, which involve attracting, mentoring and retaining top talent. With the numerous benefits D&I provides, we believe now is the time to invest. What are other businesses doing? Diversity and inclusion can be a challenge for any business regardless of its size. Inclusion often requires a shift in mindset and culture, especially for those that have previously had a largely homogeneous workforce. If you're new to the concept of D&I, take a look at how other industry experts are tackling the problem. To increase the number of women in STEM-related fields such as engineering, Canva recently partnered with Project F, a new initiative targeting female representation in tech. The company commenced 'Program 50/50' in November last year. The initiative transcends traditional diversity and inclusion measures to help bridge the gender gap in tech-related roles within Canva. "Stronger female representation across all facets of our organisation – from engineering and product management through to operations – the more gender-equal we are helps to ensure we're able to empathise with our community and build a more inclusive product." – Crystal Boysen, Head of People at Canva. At Accenture, diversity training takes place within the organisation. The program is broken into three categories: 1. Diversity Awareness – to help people understand the benefits of a diverse workplace; 2. Diversity Management – to equip leaders with the tools to manage diverse teams; and 3. Professional Development – to enable women, LGBT and ethnically diverse employees to build skills for success.
The wrap up The more diverse and inclusive a company, the more innovative it becomes. New ideas almost always stem from different ways of thinking. The true value of diversity is diverse thought and perspective for strategy and problem-solving. Additionally, let's not forget the power of D&I in establishing a strong brand image. This helps your business acquire customers, attract top talent and retain employees. A level playing field with fewer obstacles helps to mitigate bias and give everyone the same opportunity to thrive within an organisation. D&I work isn't an easy process and it's not something that can be achieved overnight. But the key is getting started. The earlier you can implement workplace initiatives to foster a more diverse and inclusive environment, the better off you will be in the future. The longer your company waits, the harder it will be to implement change. Ultimately, the growth of your business is largely intertwined with the growth of your people, customers and community. If your company seeks ongoing innovation, look no further than a diverse and inclusive workplace. We want to hear from you Are you a proud D&I champion for your business? Get in touch with us at [email protected] and tell us your story. We want to showcase the culture game-changers.
<urn:uuid:452fb1dd-6ff4-4bd6-95d9-fa291a7691c2>
CC-MAIN-2021-43
https://employmenthero.com/blog/what-is-diversity-and-inclusion/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588216.48/warc/CC-MAIN-20211027150823-20211027180823-00270.warc.gz
en
0.953958
2,974
2.828125
3
Virtual private networks (VPNs) are systems that use public networks to carry private information and maintain privacy through the use of a tunneling protocol and security procedures. By using the shared public infrastructure, these virtual private networks are far more cost-effective than the early, truly private networks that companies built using costly private lines and systems. In a VPN, some parts of the network are connected using the Internet (the public infrastructure). Data that travel over the Internet are encrypted, so the entire network is "virtually" private. This allows users to share private information over a public infrastructure. A typical VPN application would be one created by a company with offices in different cities. By setting up a VPN, the company uses the Internet as the connector between the networks in its two offices, effectively merging them into one. Encryption is used on all transmissions within the network that use the Internet link, making it a private network. The public infrastructure that provides the backbone for most VPN systems is the Internet. VPNs can connect remote users and other off-site users (such as vendors or customers) to a larger centralized network. Before the Internet, and the easy availability of high-speed or broadband connections to the Internet, a private network required that a company install proprietary and very expensive communication lines. The expense of such an investment put private networks out of the reach of most mid- to small-size firms. This is no longer the case. This fact, along with the universal appeal of the Internet, has enabled the rapid spread of VPN technology. The result is remote access that is quicker, more secure, and wider in scope. STRUCTURAL OVERVIEW OF VPN SYSTEMS In the most basic terms, a computer network is a group of computers that are connected with cable. Usually, one or more computers act as servers within the group. A network may also be formed with computers that communicate through wireless connections, but the wireless signal must be received and retransmitted by hardware that is located reasonably near both the sending and receiving machines. Companies have long networked computers. Until the advent of the Internet, however, the entire infrastructure of these networks had to be built by the companies themselves. They had to purchase and lay cables to connect their computers. They had to purchase and install boosters or repeaters to augment the signals transmitted through cables when large distances were involved. They had to lease high-capacity, dedicated phone lines in order to connect computers or networks in remote locations. They had to build or lease transmission towers in order to send wireless signals long distances, and they had to purchase and install the systems used to send and receive these signals. Not surprisingly, most companies did not go far beyond networking computers in a single building, since the cost of the infrastructure required for anything larger was prohibitive. With the advent of the Internet and the growth in availability of high-speed broadband communication lines, new technologies were developed to use the Internet as the conduit through which to connect remote computers or networks. A company no longer had to absorb the full cost of building the infrastructure needed for wide area networks (WANs).
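The core idea – that data are scrambled before they cross the public infrastructure and unscrambled only at the other end – can be illustrated with a small sketch. The example below is a hypothetical illustration only: it uses the third-party Python cryptography library and a shared symmetric key to encrypt a single message in application code, whereas real VPN software negotiates keys automatically and encrypts whole streams of network traffic at a lower layer.

```python
# A minimal sketch of "scramble before it crosses the public network".
# Assumes the third-party 'cryptography' package is installed (pip install cryptography).
from cryptography.fernet import Fernet

# Both offices hold the same secret key, agreed on in advance over a trusted channel.
shared_key = Fernet.generate_key()
office_a = Fernet(shared_key)
office_b = Fernet(shared_key)

# Office A encrypts a message before it travels over the public Internet.
payload = b"Quarterly figures - internal use only"
ciphertext = office_a.encrypt(payload)

# Anyone intercepting the traffic in transit sees only unreadable ciphertext.
print("On the wire:", ciphertext[:40], b"...")

# Office B, holding the same key, recovers the original data at the far end.
assert office_b.decrypt(ciphertext) == payload
```

In a real client-to-LAN or LAN-to-LAN VPN the same principle applies, but the encryption and decryption are handled by the tunnel endpoints (the client software and the gateway) rather than by each application.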
The communications protocols that regulate and make the Internet possible are also the basis for the protocols necessary to operate virtual private networks. The underlying collection of protocols is called transmission control protocol/Internet protocol, or TCP/IP for short. The VPN-specific protocols are known collectively as IPSec (Internet Protocol Security). A virtual private network is, basically, a network in which some of its components are connected to one another through the Internet. Software written to use IPSec is used to establish these Internet connections. The connections created in this way are called tunnels; all traffic between the two authenticated computers at either end of a tunnel travels privately across the public Internet. A VPN can be set up to connect single client PCs with a company's local area network (LAN). This sort of VPN is usually called a client-to-LAN VPN. It enables companies whose employees travel extensively or work remotely to equip those employees with computers that use the VPN to access the company network and work on it like any other employee from just about anywhere, as long as they have access to the Internet. Small companies may set up a client-to-LAN VPN through which all the employees access a central server from their home offices. A LAN-to-LAN VPN is one that connects two networks together instead of individual client computers being connected to a single LAN. The mechanisms behind these two types of VPN are the same. A LAN-to-LAN system is useful for connecting a branch office network to a corporate headquarters network, or a warehouse network to a supplier's network. The options are many. THE COST OF VIRTUAL PRIVATE NETWORKS The costs of implementing a virtual private network are reasonable for any company that already has a network and high-speed access to the Internet. The two biggest cost components of a VPN, for those with networks in place, are the software and its set-up, and, in many cases, the need to upgrade the Internet connection service. Because a VPN uses the Internet address of the network server as the access point for those logging on to the system through the Internet, a company must have a static IP address. Internet Service Providers usually charge slightly more for a service that holds the IP address static. The software needed to manage a VPN is commonly sold as a part of many network operating systems. Setting up this software takes networking knowledge but can be done by any competent network administrator or network outsourcing supplier. When a business decides to use an outside provider, it immediately eliminates the costs of purchasing and maintaining the necessary equipment. The most the business will have to do is maintain security measures (usually a firewall) as well as provide the servers that will help authenticate users. Of course, this too can be done by an outside provider for an additional price. Outsourcing also cuts down on the number of employees that would be required to manage and maintain the virtual private network. For a firm that does not already have a computer network with Internet access, the task of setting up a VPN is a much larger undertaking. VIRTUAL PRIVATE NETWORKS AND SECURITY Virtual private network systems are constantly evolving and becoming more secure through four main features: tunneling, authentication, encryption, and access control.
These features work separately, but combine to deliver a higher level of security while at the same time allowing all users (including those from remote locations) to access the VPN more easily. Tunneling creates the connection between a user (either at a remote location or in a separate office) and the main LAN. This connection is called a tunnel and is essentially the circuit-like path that transfers encrypted private information through the Internet. This requires an IP address – an Internet address to which the client PC can direct itself – that acts as a pointer to the company network. Unlike other IP addresses, this one is not open to the public; rather, it is a gateway through which VPN users may enter and, after authenticating and logging on, gain access to the network. To avoid crowded connections, a tunneling feature called "switching" was developed. This feature helps differentiate between direct and remote users to determine which connections should receive the highest priority. The switching can either be programmed directly into the virtual private network or handled by upgraded hardware that recognizes each connection on an individual basis. Incoming callers to the virtual private network are identified and approved for access through features called authentication and access control. These features are usually set up by the IT manager, who enters a user's individual identification code or password into the main server, which cuts down on the chances that the network can be manipulated from outside the company. Authentication also offers the chance to regulate access to the material on the LAN so that users can be provided access to specific information only. Encryption is the security measure that allows information on virtual private networks to be scrambled so that it becomes meaningless to unauthorized users. Encrypted data is eventually unscrambled at the end of the tunnel by a user with the proper authorization. The scrambling is usually applied at a gateway with a private IP address, which encrypts the information before it leaves the LAN or the remote location. Despite these precautions, some companies are still hesitant to transfer highly sensitive and private information over the Internet via a virtual private network and still resort to tried-and-true methods of communication for such data. THE PERFORMANCE OF VIRTUAL PRIVATE NETWORKS The latest wave of virtual private networks features self-contained hardware solutions (whereas previously they were little more than software solutions and upgrades to existing LAN equipment). Because it is self-contained, this VPN hardware does not require an additional connection to a network and therefore cuts down on the load placed on the file server and LAN, which makes everything run a bit more smoothly. These new VPNs are small and easy to set up and use, but still contain all of the necessary security and performance features. In order for a virtual private network to perform properly, the server must have enough bandwidth to accommodate the number of users active at any one time. The number of remote users can also affect a VPN's performance. In addition, new technology that requires more bandwidth is bound to come out from time to time, and this should be planned for in advance to avoid a potential disruption in performance. High volumes of traffic are also known to adversely affect the performance of a virtual private network, as does the overhead of encrypting data.
Since encryption technology is often added on via software, this may cause the network to slow down, hindering performance. A more desirable solution is to incorporate hardware-based encryption that keeps the network running at the proper speed. New technologies are also constantly emerging that help to decide just how sensitive certain material is (and therefore how intensive the encryption needs to be). THE FUTURE OF VIRTUAL PRIVATE NETWORKS As virtual private networks continue to evolve, so does the number of outlets that can host them. Several providers have experimented with running VPNs over cable television networks. This solution offers high bandwidth and low costs, but less security. Other experts see wireless technology as the future of virtual private networks. A new protocol for VPN systems has emerged in recent years and shows promise for enhancing the flexibility of VPNs. The traditional VPN system was based on Internet protocol security. The new protocol is based on Secure Sockets Layer, or SSL. According to an article in Network World, "The biggest difference between SSL VPNs and traditional IP Security VPNs is that the IP Security standard requires installation of client code on the end user's system, while SSL VPNs focus on making applications available through any Web browser." The popularity of VPNs continues to grow and evolve, providing companies of all sizes with a means to leverage the Internet to reduce the costs of communication. Administrator's Guide to TCP/IP. Second Edition. Tech Republic, June 2003. Binsacca, Rich. "Virtual Private Networks." Builder. June 2000. Goldberger, Henry. "The Migration from Frame Relay to IP VPN and VPLS Services." In-Stat Alerts. 2 February 2006. Hayes, Jim. "Managed Data Services." Communicate. July 2000. Schnider, Joel. "SSL VPN Gateways." Network World. 12 January 2004. Winther, Mark. "Avoiding the Challenges of Do-it-Yourself Broadband VPNs." Business Communications Review. February 2006.
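As a rough illustration of the browser-style, certificate-based approach that the SSL option described above relies on, the sketch below opens a TLS-protected connection from a client to a gateway using only Python's standard library. The gateway address is a placeholder, and this is only a sketch of the handshake and certificate check, not of a full SSL VPN product.

```python
import socket
import ssl

# Hypothetical gateway address, for illustration only.
GATEWAY_HOST = "vpn-gateway.example.com"
GATEWAY_PORT = 443

# A default client context verifies the gateway's certificate against the
# system's trusted certificate authorities - the same check a web browser
# performs when it connects to an SSL/TLS-protected site.
context = ssl.create_default_context()

with socket.create_connection((GATEWAY_HOST, GATEWAY_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=GATEWAY_HOST) as tls_sock:
        # Once the handshake completes, anything written to tls_sock is
        # encrypted in transit, and the gateway's identity has been
        # authenticated via its certificate.
        print("Negotiated protocol:", tls_sock.version())
        print("Gateway certificate subject:", tls_sock.getpeercert().get("subject"))
```

The same certificate machinery is what lets an SSL VPN work "through any Web browser": the client side needs nothing more exotic than a TLS implementation, which every modern browser and operating system already ships.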
<urn:uuid:d63619bd-738a-4f65-8de0-fff29fdb505b>
CC-MAIN-2021-43
https://www.inc.com/encyclopedia/virtual-private-networks.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00510.warc.gz
en
0.953191
2,328
3.828125
4
Coventry Canal History The race for supremacy was on in the West Midlands. The businessmen of Coventry wanted to link their city to the local coal fields before Birmingham did the same. Both cities had Acts granted for a new canal and both hired James Brindley as engineer. The Coventry Canal would run from the Grand Trunk Canal (now the Trent & Mersey) at Fradley, past Fazeley, Tamworth, Atherstone and Nuneaton to a basin in the centre of Coventry. Another set of businessmen, this time in Oxford, wanted to build a canal which would link their city to the Coventry Canal and the rest of the existing waterways network. Oxford, of course, is on the River Thames, and together with the Coventry Canal this was the final piece in Brindley's "Grand Cross" jigsaw, which was to link the four great rivers of England: the Mersey, Trent, Severn and Thames. The Oxford Canal was to join the Coventry Canal some way north of Coventry – which seems a little strange when Oxford is a long way south of Coventry. However, this was in the days before embankments and cuttings, and Brindley had to wind his way around the contours of the land. In fact, the Oxford Canal is probably the most convoluted canal in Britain. The money raised for the building of the Coventry Canal ran out before half the line was finished. The only completed part ran from Coventry to Atherstone – a long way short of Fradley on the Trent & Mersey Canal. However, this was well within range of the many coal mines to the north of Coventry. By this time the company had sacked their engineer, James Brindley. For seven years the route stood isolated. Meanwhile, their local rival – the Birmingham Canal – had long since been open and was very successful. The Oxford Canal was open for 63 miles, running from the Coventry Canal at Longford to Banbury. Connecting the two canals was no easy process, as the two companies argued over water losses and the exact meeting place. For a while the two routes actually ran side by side for quite a distance with no connection. Eventually it was agreed to make a junction at Hawkesbury near Exhall. For the owners of the Oxford Canal, it was very important that the Coventry Canal should finish its link into the main canal network at Fradley. Unfortunately, the Coventry Canal was still in no financial state to continue its line, so the two canals were left unfinished, both only half built, connecting to nothing in particular – other than each other. Despite their money problems, the Coventry Canal Company still hoped to "invade" the Black Country coal fields. The Birmingham Canal already ran into the prosperous areas around West Bromwich and Wednesbury but had no outlet to the east. The Coventry Canal Company proposed to build a route from their canal to Wednesbury (just north of Birmingham). They got full backing from both the Trent & Mersey Canal and the Oxford Canal, as both saw it as an opportunity to get the missing link completed between Fradley and Atherstone. At a meeting in Coleshill (situated between the towns of Birmingham and Coventry) the supporters and promoters of the project agreed that the new line would be known as the Birmingham & Fazeley Canal. It would run east from Wednesbury to Fazeley. The Coventry Canal agreed that they would construct a link between their current terminus at Atherstone and the terminus of the new Birmingham & Fazeley. The Trent & Mersey Canal and the new Birmingham & Fazeley Company agreed to complete the original Coventry route and meet at a point halfway between Fazeley and Fradley.
This amazingly friendly, multi-company partnership (which was an incredibly rare event) was known as the Coleshill agreement. However, the Birmingham Canal Company were not about to let anybody sneak into their territory and steal away with their coal! They bitterly opposed the whole scheme, and arguments continued for several years. Parliament was brought into the battle, and eventually the Birmingham Canal won the day. The government gave them permission to buy out the Birmingham & Fazeley Company and build a canal from the centre of Birmingham to Fazeley. They kept up the Coleshill agreement and the final parts of the Coventry Canal were built, though the Coventry Company – perhaps bitter at losing the battle – had to be pushed into completing its short stretch between Atherstone and Fazeley. With renewed hopes of success – thanks to a soon-to-be-completed canal, the completion of the Oxford Canal to the Thames and a link into Birmingham – Coventry Basin was extended, taking on the Y-shaped form which can still be seen today. The Coleshill agreement was completed when the final part of the Coventry Canal was opened. This opened the way for water-borne traffic to travel from Birmingham to London for the first time. It also saw the completion of Brindley's Grand Cross – 18 years after his death. Great optimism came to the Coventry Canal when a new route to London was proposed. Work had just begun on the Grand Junction (or Braunston) Canal, which would connect with the northern part of the Oxford Canal, which in turn connected with the Coventry Canal. However, the good news didn't last long and it soon turned into very grim news indeed. Another new route was under way via Warwick which would completely miss out the Coventry Canal. The Wyrley & Essington Canal opened onto the northern section of the Coventry Canal. This travelled across the northern edge of the Black Country, by-passing the monopolising Birmingham Canal company and opening up new routes to Cannock, Wolverhampton and the River Severn. A new canal opened which linked into the Coventry Canal a little way north of the junction with the Oxford Canal. This was the Ashby Canal, which had originally been planned to reach the Trent & Mersey Canal at Burton, though in the end it got nowhere near. In fact, it didn't even reach Ashby! For around 20 years the Ashby Canal made losses and brought little or no income to the Coventry Canal, which had hoped to make a nice profit out of toll charges. Eventually things got better and the Ashby Canal became one of the successes of the canal era and beyond. The new route to London from Birmingham opened via the Warwick & Birmingham Canal, the Warwick & Napton Canal, 5 miles of the Oxford Canal and the Grand Junction Canal. The loss of traffic had a devastating effect on the Coventry Canal and it took the company many years to recover. Of course, it was not all doom and gloom, as the canal still provided a through route from the Trent & Mersey to London, and some boats preferred the Oxford, Coventry and Fazeley route to the Warwick route. Slowly but surely the Coventry Canal got over its London losses. Along with through traffic from other waterways and an ever-growing Ashby Canal trade, there were nine collieries close to the route and a number of quarries. The canal was helped further when the Oxford company had its canal upgraded, cutting 14 miles off its round-the-hills route by use of high embankments and deep cuttings. The Coventry Canal eventually became one of the most successful ever to be built.
The Midland Railway Company bought the Ashby Canal. Both the Coventry Canal and the Oxford Canal feared that they would find it impossible to cope with the loss in tolls if Ashby Canal traffic switched to the rails. The two companies managed to continually foil the railway company and were so successful at it that the Ashby Canal traffic actually increased, leaving the railway with no chance of getting permission to close it down. During this period the Coventry company, and other companies on the through route from the north to London, bitterly complained about excessive tolls being charged by the Oxford Canal. Meetings were held and the Oxford company was instructed to lower its tolls, but it flatly refused. Despite this, the Coventry Canal continued to increase its profits each year. Because it was in a prosperous coal field and on a useful through route, it managed to stay in business long after others had faltered. In fact, it was still paying a dividend up till 1947, its last year of independent ownership. Even the devastation that took place in Coventry during WW2 did not dent the canal's success. The Coventry Canal passed into government control when the whole canal system was nationalised. During the following decade commercial carrying diminished every year. A minor loss came to the Coventry Canal when the eastern end of the Wyrley & Essington Canal was closed. Although trade from this canal had long since been declining, it cut off the route along the northern edge of the Black Country. Coventry Council began a concerted effort to close down the canal which had for so many years brought them prosperity. They planned to fill in the 5½ miles from Hawkesbury Junction into the city centre. But this was a bad time to try and close ANY canal. The Inland Waterways Association, a group of enthusiasts led by Tom Rolt, was waiting for just such an opportunity to show the world how important Britain's canals were. They staged a rally in Coventry Basin which raised enough support from voters and councillors to stop the closure. The Coventry Canal Society was formed and has done much since to secure the survival of the waterway. The last commercial traffic came to an end. During the years that followed, the route has become very popular with holidaymakers. In complete contrast to the Coventry Council of 1957, the latest council wanted to attract people into the city. This included a complete refurbishment of Coventry Basin. Consultation with the canal society ensured that the new ideas fitted well alongside the old, and in 1995 the basin opened. The basin contains offices, small shop units, car parking space and plenty of room for visiting narrow boats – of which there are many. A canal society's work is never done. Towards the end of the year the Coventry Canal Society successfully prevented the demolition of 32 Sutton Stop, known as Sephton's Cottage, at Hawkesbury Junction. The owners were not only ordered not to demolish the building but also to restore it to reasonable condition! Also at Hawkesbury Junction, it was announced that the canal society was to take over the historic pump house. The building was in serious danger of collapse and would need a lot of work. The hope was to make it safe, rebuild floors, restore the engine room and even install a replica engine. The building could also be turned into a visitor centre. Work is in progress.
While Coventry Council has transformed the canal within the city, not all local councils are quite so helpful towards the canal. Plans were announced by BW and other local councils to develop the area around Hawkesbury Junction. These plans included wiping out a historic wharf and replacing it with new houses. Not surprisingly, the Coventry Canal Society bitterly opposed this, saying that not only is the wharf of historic value but it is still used by trading boats. At the time of writing, BW have refused to back down. Coventry Canal Route The Coventry Canal has remained open and is a popular holiday route. It can be found in all the popular waterways guides, so I don't feel the need to give full details here. I will, however, point to some of my favourite parts of the canal. Possibly the best place on the canal is its northern terminus, where it makes a junction onto the Trent & Mersey. Fradley Junction is a pretty settlement with pub, locks, cottages and a boat yard. However, all of these are situated on the Trent & Mersey while the Coventry quietly heads off towards the south east. Fradley can be found just north of the A38 to the north east of Lichfield. Just south of the A38 is the small village of Huddlesford. Here the Wyrley & Essington Canal used to head west towards Wolverhampton. Up until a few years ago the W&E at this point was in a sorry state, but it is now being restored under the name of the Lichfield Canal. The first few yards of the route are used as private moorings and the cottages which stand at the junction are now occupied by Lichfield Cruising Club. On the minor road south from Huddlesford is the village of Wittington. One item of interest here is a garden which has a lock gate in it. This is a complete folly, as the gate was "acquired" from "somewhere" on the Birmingham Canal Navigations. Hopwas on the A51 is a pretty area, though you'd never guess it from the busy road. Park at one of the two pubs on either side of the road bridge and you will be in for a pleasant surprise. Fazeley Junction is on Watling Street, no longer the A5 as this has been diverted onto a new bypass. The junction is right beside the point where the A4091 crosses Watling Street. The canal which leaves the Coventry Canal here is the Birmingham & Fazeley. This takes boats right into the centre of Birmingham city. There is a junction house with a number of old canal buildings and a massive mill nearby. Also at Fazeley are a number of redeveloped canalside areas and basins. Between Fazeley and Tamworth is a long straight stretch of canal on an embankment which includes an aqueduct over the River Tame. East of here the canal curves around into Tamworth, passes under a railway bridge and then arrives at the two Glascote Locks. Just below the flight is a former wharf which is now a small marina belonging to Tamworth Cruising Club. Above the locks is a junction which leads under a humpbacked towpath bridge into Glascote Basin. East of Tamworth the canal runs alongside houses and gardens and then heads off into the Warwickshire countryside. The only village on this stretch is at Polesworth, though there are a number of road bridges and a railway is never far away. The main lock flight on the canal is at Atherstone. Although there are 11 locks, spread over 2 miles, the flight can be tricky to find by car because the main road (A5) now bypasses the town. Heading east by car you must come off the A5 at the first slip road AFTER the B4116 roundabout. This bends round under the A5.
Take the first right, a sharp turn, onto what was once the original A5. This goes under a very low railway bridge and comes to a canal bridge. There are 6 locks downstream from here though they are well spaced out. Climbing upwards are 5 locks very close together. The flight is always well kept and well worth a visit by any canalcoholic. South east of Atherstone the Coventry Canal enjoys another stretch of countryside, passing the BW yard at Hartshill which dates back to the early days of the waterway. Industry and urbanisation returns at Nuneaton. On a bleak winter day this stretch can look very dreary but it is always interesting. From here to Coventry there are only brief sightings of countryside. The canal has now entered the area which it was built to serve. This was once a rich mining area though only the old slag heaps remain – and many of these are now being removed or landscaped. Near Bedworth is Marston Junction where the Ashby Canal begins. This can be reached via a minor road within a council estate between the B4112 and B4113. Although the junction is of interest, it was not a pretty sight when I was there and it does very little to entice potential visitors onto what is actually a lovely canal. Much more interesting is the junction onto the Oxford Canal. This is Hawkesbury Junction, found 2 miles south of the Ashby junction. Take the minor road east off the B4113 just before the B-road goes under the M6. Hawkesbury Junction is also known as Sutton Stop, named after the stop lock at the junction on the Oxford Canal. There is an old pump house here (under restoration), numerous old canal houses and the popular Greyhound Pub. Once upon a time there was no junction here at all. The Oxford canal used to run south for several miles with only a towpath between it and the Coventry. The junction today is an excellent place for those who like to gongoozle (watch canal boats passing by). At busy times there seems to be boats moving everywhere and in all directions! Not everybody is happy with Hawkesbury Junction today. New housing developments are being built alongside the junction and those who prefer the canal system only to reflect yesteryear don’t like it. Five miles south is the end of the line though the journey into Coventry is not one most people would care to remember. Having said that, I’ll never forget my first boat ride on this section. I fell in the canal at Hawkesbury, it was the week between Christmas and New Year – it was cold! After drying off, changing clothes and getting warmed up I was then spat on by a cheerful youth as we passed under a footbridge. The local authority are doing much to brighten up the canal into the city, and the basin at the end of the line is well worth the effort. By car, Coventry Basin is well sign posted and, heading south towards the city, can be found by turning right (west) off the A444 immediately BEFORE the Coventry inner ring road. There is plenty of room to park in or around the basin area. The basin has 2 short arms forming a Y-shape. Surrounding the water are numerous old buildings as well as some brand new ones. New shops stand alongside, swing bridges cross over the water and an old toll office stands in one corner.
<urn:uuid:1101edee-2aee-49bb-96c6-92bbeedc73bf>
CC-MAIN-2021-43
https://skippy.org.uk/canals-and-waterways/index/coventry-canal/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00470.warc.gz
en
0.974005
3,759
3.484375
3
Matthew Gildersleeve goes to the movies with Jacques Lacan. Jacques Lacan (1901-1981) was a French psychoanalytical philosopher. I would like to apply some of his ideas to Mary Harron's film American Psycho (2000) in order to understand the psychotic behaviour of its protagonist, 'Patrick Bateman'. My hope is that explaining the film in these terms will contribute to a better understanding of psychosis. Specifically, I want to show that we can understand 'Bateman's' psychotic behaviour in Lacanian terms, since his behaviour at the end of this movie demonstrates the lived experience of psychosis, where, as Lacan says, "That which has not seen the light of day in the symbolic appears in the real." All will be revealed. Lacan & Psychosis To understand Lacan's interpretation of psychosis, it is imperative to first grasp his concept of 'foreclosure'. In Lacan, Language, and Philosophy (2009), Russell Grigg explains that foreclosure is an "initial, primary expulsion" of an idea or symbol whose expulsion "constitutes a domain that is external to, in the sense of radically alien or foreign to, the subject and the subject's world. Lacan calls this domain the 'real'." Thus the 'real' in Lacan's sense is not simply what we mean by the everyday use of the term. Rather, it refers to a world that is psychologically separated from a person's own inner world; and foreclosure is the process of psychological separation. These concepts are also fundamental to understanding 'Bateman's' behaviour in American Psycho. I put 'Patrick Bateman' in inverted commas because, as will be explained, 'Patrick Bateman' is not real-ly Patrick Bateman. It is also important to grasp the real in contrast to the Lacanian category of the symbolic, which is that aspect of human experience that involves the production and understanding of the meaning of an experience. When an experience is not meaningfully understood in the symbolic category, it is rejected and "subsists outside of symbolization – that is, as what is 'foreclosed'" in the real. But although the real can be excluded from the symbolic field, it may nevertheless appear in 'the real'. It will do so, for instance, in the form of hallucinations or delusions. As Grigg explains, the "real is capable of intruding into the subject's experience in a way that finds him or her devoid of any means of protection" (ibid). Hence, as Lacan says, "That which has not seen the light of day in the symbolic appears in the real." This is exactly what we find in American Psycho. In even deeper Lacanian terms, the movie demonstrates that the main character in American Psycho creates the imaginary reality of 'Patrick Bateman' through foreclosure of a 'primordial signifier' (symbol) – 'the Name-of-the-Father', which we might think of as the idea of paternal authority. Lacanian scholars commonly agree that the foreclosure of this primordial signifier is the cause of psychosis. This is because this signifier allows a person to overcome the Oedipus complex, since "Its function in the Oedipus complex is to be the vehicle of the law that regulates desire – both the subject's desire and the omnipotent desire of the maternal figure."
This “is an operation in which the Name-of-the-Father is substituted for the mother’s desire, thereby producing a new species of meaning.” Without this new meaning concerning the desire of the mother provided by the signifier of the Name-of-the-Father, “the subject is left prey to… the mother’s unregulated desire, confronted by an obscure enigma… that the subject lacks the means to comprehend” (ibid). The foreclosure of this primordial signifier is therefore catastrophic for the person undergoing it, resulting in psychosis. That awful moment when your lawyer tells you you aren’t a serial killer. © Lions Gate Films 2000 The Real In American Psycho In his article ‘Diagnosing an American Psycho’ (International Review of Psychiatry, 21, 3), Wayne Parry provides a summary of the plot of the movie. The narrative centres around ‘Patrick Bateman’s’ murder of his colleague Paul Allen. As Parry says, “Bateman chooses to kill Allen out of envy. They meet for dinner and afterwards, in Bateman’s apartment, Allen is very drunk and Bateman attacks him with an axe and disposes of the body. He changes Allen’s answerphone message to say that he [Allen] has gone to London and packs a bag to corroborate the supposed trip.” After this, “Bateman continues his murderous spree, often using Allen’s apartment as the site of the murder or a place to keep the bodies.” Yet ‘Bateman’s’ serial killing suddenly unravels towards the end of the film. “When Bateman is caught by a police car having killed an elderly lady, he kills the policemen and blows up the patrol car. Having killed a night porter and a janitor. he phones his lawyer, confessing all his crimes and the events of that night.” However, after his confession of his serial killing to his lawyer, we start to see ‘the real’ intruding on ‘Bateman’s’ psychotic symbolic universe: “The following morning, Bateman goes to Allen’s apartment only to find that it is empty and undecorated. As he checks a closet where he left a few bodies, an estate agent asks him to leave after Bateman questions what had happened there.” This is in fact the first of three crucial moments in this film where we recognise the true nature of the psychosis of ‘Patrick Bateman’. Here the truth that Bateman has been foreclosing cannot be kept excluded: in the earlier parts of the film, ‘Bateman’ had used “Allen’s apartment as the site of the murder or a place to keep the bodies” (‘Diagnosing…’, p.281), but now the apartment is empty. This gives the viewer a clue that ‘Bateman’s’ symbolic universe is not what it appears to be. As Slavoj Žižek puts it, this moment is when “the barrier separating the real from reality… is torn down, when the real overflows reality” (Looking Awry: An Introduction To Jacques Lacan Through Popular Culture, 1992, p.20). There are also two other moments in the film when the real overflows into ‘Bateman’s’ symbolic world. The second of these is even more significant than the first. “Bateman runs into his lawyer in a bar and asks if he got the phone message last night. The lawyer believes that the call was a joke. Bateman tries to convince him that it is true but the lawyer states that he had dinner twice with Paul Allen in London ten days prior, leaving the reality of the events ambiguous” (‘Diagnosing an American Psycho’). It is important to note something else from this scene that was missed by Parry but picked up by André Loiselle in ‘Canadian Horror, American Bodies’, (Brno Studies in English, 39 (2), 2013). 
Loiselle quotes the transcript from the film: Patrick: Don’t you know who I am? I’m not Davis. I’m Patrick Bateman. We talk on the phone all the time. Don’t you recognize me? You’re my lawyer. Now, Carnes, listen. Listen very, very carefully. I killed Paul Allen, and I liked it. I can’t make myself any clearer. Lawyer: But that’s simply not possible. And I don’t find this funny anymore. Patrick: It never was supposed to be. Why isn’t it possible? Lawyer : It’s just not. Patrick: Why not, you stupid bastard? Lawyer : Because I had dinner with Paul Allen… twice in London, just ten days ago. Jacques Lacan © Ironie 2007 This is a crucial moment to retrospectively understand everything in the film up until then. This scene highlights the expulsion and foreclosure of the real in ‘Bateman’s’ psychotic symbolism, since it turns out that not only did ‘Bateman’ not kill Paul Allen, but ‘Bateman’s’ real name is Davis! Unfortunately, what the lawyer, Carnes, is saying to ‘Bateman’ is “radically alien or foreign to the subject and the subject’s world.” It’s alien to Davis (‘Bateman’) because, as Lacan might put it, “the desire of the Other” has been foreclosed from Davis’s psychotic symbolic reality (in this instance, ‘the Other’ is the lawyer, who called him Davis and who told him that Paul Allen is not dead; and so the desire of the Other is what the lawyer believes). Yet although Davis may have excluded a fact from his symbolic universe “it may nevertheless appear in reality.” Thus Lacan’s remark, “That which has not seen the light of day in the symbolic appears in the real.” This is exactly what we find in this scene in American Psycho, when the real intrudes on Davis’s psychosis. The conclusion that Davis lacks the means to comprehend the desire of the Other – what the lawyer is saying – is supported by the final scene of the movie, where after hearing this revelation from Carnes, Davis returns to his friends’ table in confusion. His friends are watching Ronald Reagan give a speech on television, and arguing about whether or not Reagan is lying. One of his friends asks, “Bateman? Come on, what do you think?” This small detail demonstrates that Davis lacks the means to “comprehend the desire of the Other”: with this detail, the viewer can understand that we are now watching events through ‘Bateman’s’ psychotic symbolic universe again. So the Lacanian interpretation of this scene is that Davis lacks the means to comprehend the desire of the Other which appeared in the real as an intrusion to the psychotic symbolic universe in which Davis imagined he was a serial killer called ‘Patrick Bateman’. The other moment in which the viewer sees the way things really are instead of through Davis’s fantasy, is when his secretary is shown to be “leafing through his [Davis’s] diary alone in his office, where she discovers an escalating number of poisonous doodles and designs devoted to the desecration of women’s bodies, much like the various murders he claims to have committed” (from ‘Canadian Horror…’ p.130). With this and the other two moments we have examined, the viewer can see that, as Loiselle says, “This scene clearly establishes the overriding possibility that ‘Bateman’s’ violence has all along been confined to the level of daydream and fantasy.” The viewer can also now recognise that the majority of the film has been shown through this psychotic fantasy. © Matthew Gildersleeve 2016 Matthew Gildersleeve teaches and researches at the University of Queensland in Brisbane.
<urn:uuid:f7f2f31e-6a28-4964-bb1d-a6bffd5ca3dc>
CC-MAIN-2021-43
https://philosophynow.org/issues/113/American_Psycho
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00470.warc.gz
en
0.947127
2,569
2.59375
3
10 Tips for Avoiding Online Security Traps 1. Understand Cybercrime and Malware Malware is malicious software code developed by cybercriminals to infect PCs, networks and mobile devices for the purpose of gaining access to and extracting sensitive data, typically for financial gain. There are more than 200,000 new malware threats created every day, and nearly 70% of data breaches involve malware. The days of malware being created and released by hackers for fun and notoriety are long gone. Today, malware fuels a global multi-billion dollar cybercrime economy. You are their #1 target. Whether you're using a PC at home or at work, you are just a tool for cybercriminals to gain access to the data they want to steal or the systems they want to hijack. To defend yourself and your organization's data, it is important to understand that malware writers are becoming very adept at creating threats that evade detection by traditional security solutions. Don't assume you can let your guard down or behave in a riskier manner because your PC at home or work is defended by antivirus, email security, firewall or other cyber defenses. One wrong click and your PC is infected, and data is at risk. Some malware types – like viruses and Trojans – are tools for breaking into your PC, while others – like worms, spyware and key loggers – are all about snooping through a PC or network looking for particular systems to compromise and data to steal. Many data breaches involve multiple kinds of malware in a staged attack that progresses over time. It's critical to understand that one infected PC may seem like a small problem, but it can lead to big trouble for the organization. Still other malware – like bots or botnets – is all about hijacking PCs to steal computing resources to launch other cyber-attacks. Instead of paying for legitimate IT infrastructure and equipment to start a spam campaign, scammers often secretly use a network of infected PCs around the world to distribute malicious email without users ever knowing. Tip: Don't underestimate how clever cybercriminals have become. Their tricks are extremely effective at luring users to open infected files, click on malicious links, unwittingly share malware with colleagues, and freely divulge sensitive data. They understand how we behave online, and they know exactly what to do to infect us. Knowing the types of tricks and traps they use is the first step to defending yourself and the organization from malware. 2. Be Difficult to Catch Believe it or not, one of the most common ways that cybercriminals gain access to sensitive data is by tricking users into divulging information they ordinarily wouldn't share with anyone. It's called phishing, and it often involves using social engineering tactics to trick users into thinking they have been contacted by a service they know and trust – like a bank, online retailer, airline or social media platform – typically via a fraudulent email requesting that a user disclose sensitive information like passwords, credit card details and even social security numbers. Does this Facebook login screen look real to you? It's not. It was part of a phishing campaign to steal passwords. Social engineering refers to the practice of creating deceptive attacks based on what is known about the targeted user.
For example, cybercriminals scour users’ social media accounts like Facebook and LinkedIn to create phishing emails that look and read real enough to trick users into responding to fraudulent requests to change passwords, confirm payment options or divulge other personal information. Phishing emails and the websites they link to look like the real thing and can be difficult to identify as malicious right away. URLs or web addresses also look legitimate. And since many people re-use the same password, a user’s login credentials for a bank account is often the same one they use to log on to the network at work every day. This enables cybercriminals to access the work network as if they were you. Tip: Always keep in the mind that most of the services you use will never request that you share personal information directly via email. Moreover, the majority of time you are contacted to reset a password or confirm any changes to your account will be initiated by an action you take. In the event you receive an unsolicited email (even if it’s an alarming warning to reset a password), it is best to assume it is malicious. Do not click any links. Contact the service provider or check their website by entering the URL you always use. 3. Resist Your Curiosity Malicious spam remains a major threat to many organizations. These aren’t those annoying marketing emails we’re tired of deleting from our inboxes all day long. Think of malicious spam as a precursor to phishing, employing similar tricks of deception – stealing logos and designs from well-respected brands – to trick users into clicking malicious links or downloading infected files. Malicious spam could even come from an email address spoofed (manipulated) to appear as if it is from someone within your organization. But one click of the mouse to open an infected Word document or PDF, and your PC may be infected. Just about any type of malware can be delivered via malicious spam. Cybercriminals use spam as a “shotgun” tactic to spread their malware as wide as possible. Often these emails are disguised as shipping confirmation notices, alarming notices from banks, tantalizing photos, mortgage scams, fake news alerts and more – anything to raise our curiosity and get us to open an email and click an attachment or link that only leads to trouble. This malicious spam used the CNN logo and the public’s curiosity about Angelina Jolie to deliver malware. Tip: Always be wary of any email you receive that is out of the ordinary or you did not request. Spam can look very real, but avoid the temptation to click without thinking. Also, be aware that just because you’re at work and protected by security solutions, malicious spam can still slip through. Best course of action, if you think it’s spam, delete it. 4. Browse with Care Another favorite trick of cybercriminals is poisoned search results or black hat SEO. This is another way malware writers use our curiosity against us by exploiting high-profile events like a celebrity scandal, new tech gadget or major events like the Olympics, a royal birth, an election or sports championship. Cybercriminals know what people are searching for online and talking about via social media, and they use that against us. While search engines like Google are very good at protecting us from these threats, cybercriminals are quick to stand up entire websites within hours of sensational news breaking, claiming video and pics, but only delivering malware to visitors. 
It may take Google a few hours to identify and remove these sites from its search results, but in that time plenty of users can be infected. Tip: Get your celebrity gossip and news from trusted sites only. Always be careful what you’re searching for and what sites you visit on your lunch hour. Again, don’t assume you’re protected because work has better security than your home PC. Threats – especially newly created threats – can still slip through. 5. Don’t be Exploited Two types of malware known as exploits and Zero-day attacks refer to cybercriminals taking advantage of vulnerabilities in the software products we use every day. These include operating systems like Windows, web browsers like Chrome, Internet Explorer and Firefox, and a wide range of popular applications like Adobe Flash and Reader, Java and Skype. Malware writers invest a lot of time and energy searching for faulty software code they can exploit and use as a backdoor into your PC to deliver malware for any number of malicious purposes. Zero-day attacks are named as they are because at the time of their discovery there is no fix for the vulnerability they are exploiting, leaving software companies scrambling to release updates within a few days, which is plenty of time for cybercriminals to spread malware. Tip: The best defense against malware exploits is to always update software programs to the latest available versions. When a message appears on your screen to update a trusted software application, do it. Chances are good the software developer is correcting an issue that may have serious security implications. If your organization uses an automated patching solution, these updates should be deployed automatically. However, be mindful of Zero-day alerts from IT, which may instruct you to avoid using certain programs when a threat is identified. 6. Watch for Malware in Disguise Cybercriminals know that users are concerned about security and often employ messages and pop-up screens that appear to be legit programs on your PC requesting updates. Clicking on these links can lead to downloading malware and installing rogue applications. These rogues may claim to be antivirus products or system cleaning programs. Some even claim to be from the FBI. They look authentic, but they are designed to infect your PC to extort money from you, or to install additional malware on your computer. Tip: If you see a warning claiming your PC is infected, don’t click anything. Contact IT. Don’t take the chance. 7. Back it Up There is a family of malware known as ransomware, and just like the name implies, these malicious programs take your PC hostage. By clicking on the wrong link in an email or by visiting an infected website, your PC can fall victim to malware that demands payment to be removed, or even worse large sums of money to regain access to your files. Hijacking users’ PCs and encrypting files so they are no longer accessible is an increasingly popular tool in the bad guys’ arsenal. Tip: Avoid ransomware by being safe online, but be prepared for the worst and back up all critical files your business or operation can’t do without. And since ransomware is often delivered via malware exploits, keep your system patched and software up to date. 8. Stay Safe While Mobile Malware is no longer limited to just PCs. With the rise of mobile devices and their proliferation in the workplace, malware writers have switched tactics to take advantage of these inviting targets. 
Malicious Android and iOS apps can cause all sorts of headaches – from running up international text charges to stealing personal data and passwords to transmitting infections to other devices, like your PC. This game wasn't even available for Android when this malicious rogue look-alike was making the rounds, frustrating users and redirecting them to unwanted and potentially harmful content. Tip: Don't think that your Android or iOS device is safe from threats. Mobile malware is the fastest-growing segment of malware. When downloading apps, only download from trusted sources (Google Play and Apple's App Store) and only choose apps from trusted developers. Moreover, install a trusted security app onto your mobile device. 9. Don't be a Carrier Just like people can spread the flu or a cold to colleagues, users can spread malware infections to their work PC and network. Two common ways this happens are through file sharing and through shared removable storage devices. Files are often shared with a home PC that may not be as secure as the work machine, or that is used by other family members who do not practice safe online habits. Users may work on an infected document on their home PC and email it to their work computer or upload it to the cloud, where other users may access it and become infected themselves. Moreover, removable storage devices, like USB sticks and external hard drives, are often shared among users. Malware writers know this and create threats that are designed to stealthily move from these devices to PCs. Tip: Only connect your PC to trusted devices and scan all USB drives with your antivirus software before opening any files. Be mindful of who is using a home PC if you are opening work documents on it. Always ask if you completely trust the surfing habits of your 13-year-old son or daughter. 10. Avoid Friendly Threats Security threats on social media continue to grow exponentially. Shortened links are effective tools to hide malicious URLs, and threats tied to compelling images and videos shared on Facebook can spread quickly among friends. Cybercriminals can quickly set up fake accounts and profiles to spread malware, typically employing the same social engineering tactics they've perfected.
<urn:uuid:f1eeb2e0-28b3-4c25-ab19-c8cf028131a8>
CC-MAIN-2021-43
http://meadeky.com/understanding-and-avoiding-malware/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587915.41/warc/CC-MAIN-20211026165817-20211026195817-00310.warc.gz
en
0.92987
2,663
2.8125
3
Plague is a serious, potentially life-threatening infectious disease that is usually transmitted to humans by the bites of rodent fleas. It was one of the scourges of early human history. There are three major forms of the disease: bubonic, septicemic, and pneumonic. Plague has been responsible for three great world pandemics, which caused millions of deaths and significantly altered the course of history. A pandemic is a disease occurring in epidemic form throughout the entire population of a country, a people, or the world. Although the cause of the plague was not identified until the third pandemic in 1894, scientists are virtually certain that the first two pandemics were plague because a number of the survivors wrote about their experiences and described the symptoms. The first great pandemic appeared in AD 542 and lasted for 60 years. It killed millions of citizens, particularly people living along the Mediterranean Sea. This sea was the busiest, coastal trade route at that time and connected what is now southern Europe, northern Africa, and parts of coastal Asia. This pandemic is sometimes referred to as the Plague of Justinian, named for the great emperor of Byzantium who was ruling at the beginning of the outbreak. According to the historian Procopius, this outbreak of plague killed 10,000 people per day at its height just within the city of Constantinople. The second pandemic occurred during the fourteenth century, and was called the Black Death because its main symptom was the appearance of black patches (caused by bleeding) on the skin. It was also a subject found in many European paintings, drawings, plays, and writings of that time. The connections between large active trading ports, rats coming off the ships, and the severe outbreaks of the plague were understood by people at the time. This was the most severe of the three, beginning in the mid-1300s with an origin in central Asia and lasting for 400 years. Between a fourth and a third of the entire European population died within a few years after plague was first introduced. Some smaller villages and towns were completely wiped out. The final pandemic began in northern China, reaching Canton and Hong Kong by 1894. From there, it spread to all continents, killing millions. The great pandemics of the past occurred when wild rodents spread the disease to rats in cities, and then to humans when the rats died. Another route for infection came from rats coming off ships that had traveled from heavily infected areas. Generally, these were busy coastal or inland trade routes. Plague was introduced into the United States during this pandemic and it spread from the West towards the Midwest and became endemic in the Southwest of the United States. About 10-15 Americans living in the southwestern United States contract plague each year during the spring and summer. The last rat-borne epidemic in the United States occurred in Los Angeles in 1924–25. Since then, all plague cases in this country have been sporadic, acquired from wild rodents or their fleas. Plague can also be acquired from ground squirrels and prairie dogs in parts of Arizona, New Mexico, California, Colorado, and Nevada. Around the world, there are between 1,000 and 2,000 cases of plague each year. Recent outbreaks in humans occurred in Africa, South America, and Southeast Asia. Some people and/or animals with bubonic plague go on to develop pneumonia (pneumonic plague). This can spread to others via infected droplets during coughing or sneezing. 
Plague is one of three diseases still subject to international health regulations. These rules require that all confirmed cases be reported to the World Health Organization (WHO) within 24 hours of diagnosis. According to the regulations, passengers on an international voyage who have been to an area where there is an epidemic of pneumonic plague must be placed in isolation for six days before being allowed to leave. While plague is found in several countries, there is little risk to United States travelers within endemic areas (limited locales where a disease is known to be present) if they restrict their travel to urban areas with modern hotel accommodations.

Over the past few years, this infection primarily of antiquity has become a modern issue. This change has occurred because of the concerns about the use of plague as a weapon of biological warfare or terrorism (bioterrorism). Along with anthrax, plague is considered to be a significant risk. In this scenario, the primary manifestation is likely to be pneumonic plague transmitted by clandestine aerosols. It has been reported that during World War II the Japanese dropped "bombs" containing plague-infected fleas in China as a form of biowarfare.

Causes and symptoms

Fleas carry the bacterium Yersinia pestis, formerly known as Pasteurella pestis. The plague bacillus can be stained with Giemsa stain and typically looks like a safety pin under the microscope. When a flea bites an infected rodent, it swallows the plague bacteria. The bacteria are passed on when the fleas, in turn, bite a human. Interestingly, the plague bacterium grows in the gullet of the flea, obstructing it and not allowing the flea to eat. Transmission occurs during abortive feeding, with regurgitation of bacteria into the feeding site. Humans also may become infected if they have a break or cut in the skin and come in direct contact with body fluids or tissues of infected animals.

More than 100 species of fleas have been reported to be naturally infected with plague; in the western United States, the most common source of plague is the golden-mantled ground squirrel flea. Chipmunks and prairie dogs have also been identified as hosts of infected fleas. Since 1924, there have been no documented cases in the United States of human-to-human spread of plague from droplets. All but one of the few pneumonic cases have been associated with handling infected cats. While dogs and cats can become infected, dogs rarely show signs of illness and are not believed to spread disease to humans. However, plague has been spread from infected coyotes (wild dogs) to humans. In parts of central Asia, gerbils have been identified as the source of cases of bubonic plague in humans.

[Figure: Fleas carry the bacterium Yersinia pestis. When a flea bites an infected rodent, it becomes a vector and then passes the plague bacteria when it bites a human. (Illustration by Electronic Illustrators Group.)]

Two to five days after infection, patients experience a sudden fever, chills, seizures, and severe headaches, followed by the appearance of swellings or "buboes" in the armpits, groin, and neck. The most commonly affected sites are the lymph glands near the site of the first infection. As the bacteria multiply in the glands, the lymph node becomes swollen. As the nodes collect fluid, they become extremely tender.
Occasionally, the bacteria will cause an ulcer at the point of the first infection. Bacteria that invade the bloodstream directly (without involving the lymph nodes) cause septicemic plague. (Bubonic plague also can progress to septicemic plague if not treated appropriately.) Septicemic plague that does not involve the lymph glands is particularly dangerous because it can be hard to diagnose the disease. The bacteria usually spread to other sites, including the liver, kidneys, spleen, lungs, and sometimes the eyes, or the lining of the brain. Symptoms include fever, chills, prostration, abdominal pain , shock, and bleeding into the skin and organs. Pneumonic plague may occur as a direct infection (primary) or as a result of untreated bubonic or septicemic plague (secondary). Primary pneumonic plague is caused by inhaling infective drops from another person or animal with pneumonic plague. Symptoms, which appear within one to three days after infection, include a severe, overwhelming pneumonia, with shortness of breath, high fever, and blood in the phlegm. If untreated, half the patients will die; if blood poisoning occurs as an early complication, patients may die even before the buboes appear. Life-threatening complications of plague include shock, high fever, problems with blood clotting, and convulsions. Plague should be suspected if there are painful buboes, fever, exhaustion, and a history of possible exposure to rodents, rabbits, or fleas in the West or Southwest. The patient should be isolated. Chest x rays are taken, as well as blood cultures, antigen testing, and examination of lymph node specimens. Blood cultures should be taken 30 minutes apart, before treatment. A group of German researchers reported in 2004 on a standardized enzyme-linked immunosorbent assay (ELISA) kit for the rapid diagnosis of plague. The test kit was developed by the German military and has a high degree of accuracy as well as speed in identifying the plague bacillus. The kit could be useful in the event of a bioterrorist attack as well as in countries without advanced microbiology laboratories. As soon as plague is suspected, the patient should be isolated, and local and state departments notified. Drug treatment reduces the risk of death to less than 5%. The preferred treatment is streptomycin administered as soon as possible. Alternatives include gentamicin, chloramphenicol, tetracycline, or trimethoprim/sulfamethoxazole. Plague can be treated successfully if it is caught early; the mortality rate for treated disease is 1-15% but 40-60% in untreated cases. Untreated pneumonic plague is almost always fatal, however, and the chances of survival are very low unless specific antibiotic treatment is started within 15-18 hours after symptoms appear. The presence of plague bacteria in a blood smear is a grave sign and indicates septicemic plague. Septicemic plague has a mortality rate of 40% in treated cases and 100% in untreated cases. Anyone who has come in contact with a plague pneumonia victim should be given antibiotics, since untreated pneumonic plague patients can pass on their illness to close contacts throughout the course of the illness. All plague patients should be isolated for 48 hours after antibiotic treatment begins. Pneumonic plague patients should be completely isolated until sputum cultures show no sign of infection. Residents of areas where plague is found should keep rodents out of their homes. Anyone working in a rodent-infested area should wear insect repellent on skin and clothing. 
Pets can be treated with insecticidal dust and kept indoors. Handling sick or dead animals (especially rodents and cats) should be avoided.

Plague vaccines have been used with varying effectiveness since the late nineteenth century. Experts believe that vaccination lowers the chance of infection and the severity of the disease. However, the effectiveness of the vaccine against pneumonic plague is not clearly known. Vaccinations against plague are not required to enter any country. Because immunization requires multiple doses over a 6-10 month period, plague vaccine is not recommended for quick protection during outbreaks. Moreover, its unpleasant side effects make it a poor choice unless there is a substantial long-term risk of infection. The safety of the vaccine for those under age 18 has not been established. Pregnant women should not be vaccinated unless the need for protection is greater than the risk to the unborn child. Even those who receive the vaccine may not be completely protected. The inadequacy of the vaccines available as of the early 2000s explains why it is important to protect against rodents, fleas, and people with plague. A team of researchers in the United Kingdom reported in the summer of 2004 that an injected subunit vaccine is likely to offer the best protection against both bubonic and pneumonic forms of plague.

Key terms
Bioterrorism — The use of disease agents to terrorize or intimidate a civilian population.
Buboes — Smooth, oval, reddened, and very painful swellings in the armpits, groin, or neck that occur as a result of infection with the plague.
Endemic — A disease that occurs naturally in a geographic area or population group.
Epidemic — A disease that occurs throughout part of the population of a country.
Pandemic — A disease that occurs throughout a regional group, the population of a country, or the world.
Septicemia — The medical term for blood poisoning, in which bacteria have invaded the bloodstream and circulate throughout the body.

Resources
Beers, Mark H., MD, and Robert Berkow, MD, editors. "Plague (Bubonic Plague; Pestis; Black Death)." In The Merck Manual of Diagnosis and Therapy. Whitehouse Station, NJ: Merck Research Laboratories, 2004.
Davis, S., M. Begon, L. DeBruyn, et al. "Predictive Thresholds for Plague in Kazakhstan." Science 304 (April 30, 2004): 736-738.
Gani, R., and S. Leach. "Epidemiologic Determinants for Modeling Pneumonic Plague Outbreaks." Emerging Infectious Diseases 10 (April 2004): 608-614.
Splettstoesser, W. D., L. Rahalison, R. Grunow, et al. "Evaluation of a Standardized F1 Capsular Antigen Capture ELISA Test Kit for the Rapid Diagnosis of Plague." FEMS Immunology and Medical Microbiology 41 (June 1, 2004): 149-155.
Titball, R. W., and E. D. Williamson. "Yersinia pestis (Plague) Vaccines." Expert Opinion on Biological Therapy 4 (June 2004): 965-973.
Velendzas, Demetres, MD, and Susan Dufel, MD. "Plague." eMedicine, December 2, 2004. http://www.emedicine.com/EMERG/topic428.htm.
Centers for Disease Control. 1600 Clifton Rd. NE, Atlanta, GA 30333. (800) 311-3435, (404) 639-3311. http://www.cdc.gov.
National Institute of Allergy and Infectious Diseases, Division of Microbiology and Infectious Diseases. Bldg. 31, Rm. 7A-50, 31 Center Drive MSC 2520, Bethesda, MD 20892.
World Health Organization, Division of Emerging and Other Communicable Diseases Surveillance and Control. 1211 Geneva 27, Switzerland. http://www.who.ch/.
Bacterial Diseases (Healthtouch). http://www.healthtouch.com/level1/leaflets/105825/105826.htm.
Bug Bytes. http://www.isumc.edu/bugbytes/.
Centers for Disease Control travel information. http://www.cdc.gov/travel/travel.html.
Infectious Diseases Weblink. http://pages.prodigy.net/pdeziel/.
International Society of Travel Medicine. http://www.istm.org.
Gale Encyclopedia of Medicine. Copyright 2008 The Gale Group, Inc. All rights reserved.
THE IMPACTS OF CORPORATE GLOBALIZATION: HOW THE DUTCH EAST INDIA COMPANY CHANGED THE WORLD Reading time: 14 minutes The ‘Age of Discovery’, a period of European overseas exploration from the 15th to 17th century and considered by some to be the beginnings of globalization, is synonymous with the expansion of global capitalism and the explosion of maritime trade. At the start of the 17th century, the popularity of maritime trade was evident in the creation of global trading companies that attempted to monopolize trade routes and expand to the newly ‘discovered’ areas of the world. One of these companies – the Dutch East India Company – facilitated a global corporate expansion that impacted and fundamentally transformed the local societies they operated in. Four case studies; Batavia, Dutch Formosa, Mauritius and the Dutch Cape Colony – show how the VOC’s global corporate expansion impacted these areas through characteristics of globalisation – namely migration, exchange of flora and fauna, mixing of cultures and language and the linking of economies – that ultimately fundamentally changed their societies. By extrapolating these examples to other parts of the world influenced by the Dutch, it is clear that the VOC did facilitate corporate globalisation and that the effects of this process can still be witnessed today. By Madison Moulton The making of corporate globalisation The travels of European explorers like Christopher Columbus and Vasco da Gama at the end of the 15th century ushered in an age of unprecedented expansion of global trade, known as the ‘Age of Discovery’. This Age was first dominated by the Iberian powers, Spain and Portugal, until the 17th century when the idea of a “free sea” was popularised by Dutch jurist Hugo Grotius. At the same time, extensive improvements in shipping vessels allowed for longer trips with greater cargo. New private trading companies were created to take full advantage of these developments, often operating as “states within states” supported by the governments of their countries. The rivalries and competition between various European powers were played out over the oceans as these chartered companies fought for the monopoly on commodities and trade routes. Some examples of chartered companies included the Muscovy Company, the East India Company, the Dutch East India Company (VOC), the Hudson’s Bay Company and the Royal African Company. There were similarities in structure between these companies and they facilitated the spread of European capitalism across the globe, replacing previous traditions of China or the Islamic World. These companies traded on every continent, with the East India Company and the VOC being the largest in reach and volume of business. Although the East India Company was the first to adopt the company model, just two years later the Dutch followed suit in 1602, creating the VOC. The VOC managed to expand quickly, becoming one of the largest European trading companies. To manage this expansion the company created a managerial structure and consolidated company processes, aiming to ensure continuity and uniformity through all missions. Maintaining control of trade routes and territories was achieved by the company’s own military, a new feature to private companies, that made the VOC a virtually unstoppable force. Using the military force, the VOC took control of major trading areas; a project suggested and facilitated by Jan Pieterszoon Coen after being appointed governor-general. 
Coen’s project began with Fort Jaccatra (renamed Batavia) that became the overseas operational hub in 1619. Their increasing success allowed the VOC to add new trading points by colonising areas, establishing control and using the territories to grow commodities and find labour. After Batavia, the VOC expanded and built trading points at Dutch Formosa (now Taiwan) in 1624, Dutch Mauritius in 1638, and the Cape of Good Hope in 1652. These processes of expansion were the beginnings of the VOC’s contribution to corporate globalisation. Although the definitions of globalisation are contested, and the existence of the phenomenon itself has been questioned, there is some consensus among scholars that globalisation has economic aspects, either “through market expansion or the selling of goods and services”. By this definition the VOC was the epitome of corporate globalisation, attempting to reach many continents and secure trade in all corners of the globe. The VOC is largely characterised by economic or corporate globalisation, but their pursuit of profits also ensured the company not only ‘traded’ commodities, but also language, culture, political beliefs, flora and fauna, and even people through mass migrations. The case studies of Batavia, Dutch Formosa, Dutch Mauritius and the Dutch Cape Colony show how the VOC’s expansion was characteristic of corporate globalisation through the prevalence of the hallmarks of globalisation; migration, homogenization or hybridization of culture, exchange of flora and fauna and the linking of economies. The impacts of corporate globalisation On the 30th of May 1619 the VOC, led by Coen, captured the town of Jaccatra and renamed it Batavia. The Latin “Bata-via” is the name of an old Dutch tribe that carries connotations of the origins of Dutch identity, evoking the idea of a new beginning for the Dutch in Asia. The members of the VOC settled here and set out to make Batavia the capital of VOC operations in Asia. A major part of this process was finding labour and populating the city which led to mass migration, as well as cultural and linguistic hybridization, that characterises their corporate globalisation. The newly established operational hub needed enough workers to populate it, and migration from different parts of Asia fulfilled that role. While Jaccatra was previously a central trading port and had an already diverse population, the immigration to Batavia after 1619 facilitated by the VOC – both forced and voluntary – made the diversity greater. In 1673, a population survey showed that only 7% of the population was of Dutch descent, the rest being 10% Chinese, 11% Javanese, 18% Indian and almost half (48%) slaves of many origins. This migration completely changed the demographics of the town, having an influence on culture and language. An entirely new culture, Betawi (from the Dutch-given name Batavia), was developed in the 18th century and officially recognised in 1930 as an amalgamation of the various cultures that came together under the VOC; an example of cultural hybridization. The Betawi language is the most widely spoken indigenous language in Indonesia and has influences from various other languages including Dutch. Modern day Jakarta remains diverse, reflecting the VOC’s influence, evidenced by the many different population groups with the largest proportion (Javanese) only reaching 35%. 
Batavia exemplifies the typical forces of globalisation – migration, the hybridization of culture (to the point where it became an entirely new, officially recognised culture) and the hybridization of language – all in the name of the corporate expansion of the VOC. These factors remained influential long after the VOC left the country, influencing the structure and demographics of modern-day Indonesia. This same model was followed in subsequent years in the various areas that the VOC expanded to. The Portuguese first took an interest in the island now known as Taiwan from 1557 and called it ‘Ilha Formosa’ or ‘The Beautiful Island’, never settling there but rather at Macao near China, to begin trade with the Chinese. In 1622, the VOC decided that they also wanted to facilitate trade with the Chinese, and they settled in the territory of Formosa in 1624. As in Batavia, labour was important, but this time voluntary Chinese migration was encouraged to maintain a good relationship with the Chinese for trade. The VOC also linked the economy of Dutch Formosa with the rest of the world, particularly China, and the various territories the company held on different continents. In need of a labour force, the VOC encouraged members of the inland Chinese province of Fukien to immigrate to Formosa to work. In only 25 years, the Chinese population of Formosa had become 15% of the total population. The native Austronesians were subjugated in favour of maintaining good relations with the Chinese, so much so that in modern Taiwan they only make up 2% of the population, with the rest largely Han Chinese. This can be considered the largest “Chinese Diaspora” which completely transformed Taiwan from an independent area to a part of China. The example of Dutch Formosa is, therefore, an early indicator of the cultural homogenization that accompanies globalisation, as the native cultures were suppressed in favour of accommodating the Chinese. Another aspect of this cultural homogenization is the prevalence of endangered languages in Taiwan due to the dominance of Mandarin; Mandarin is the main spoken language and 5 of the traditional languages are in danger of dying out. In terms of the economy, the VOC’s settlement brought Formosa into the global economy as “producers of local commodities for export and as consumers of imported merchandise”. Through this process, their entire economy was transformed to be capital and resource focused and this spread throughout the whole region. Chinese immigration and the importance of trade in this area permanently linked Taiwan and mainland China economically, and later politically and culturally. The VOC left Formosa in 1662, but their influence has remained significant nearly 400 years later. The employees of the VOC were the first to inhabit the island of Mauritius, establishing a settlement in 1638. The purpose of settling in Mauritius was to prevent either the French or English from controlling it and to use it as a stop on the route to Asia. The VOC settlers left in 1653 but returned in 1664, for a second period of occupation, to make the most out of the resources the island had to offer; the most abundant of which was ebony trees. The migration of members of the VOC made permanent and destructive changes to the environment and introduced new flora and fauna to the island that damaged indigenous ecological populations. The most prominent characteristic of the first Dutch occupation of Mauritius was “an extreme form of exploitative development”. 
With no previous inhabitants on the island, there were abundant natural resources that the VOC completely exploited. One of the most well-known consequences of the environmental destruction was the extinction of the dodo bird within 30 years of the VOC’s arrival at Mauritius. A VOC crew member drew the first depiction of the dodo and described it as “good food”, foreshadowing their later demise (Figure 1). Secondly, on the VOC’s arrival, the island had forests of ebony trees that had been reduced by 10-15% once they left, changing the ecology of the area permanently. The migration of VOC members and their slaves to the island left Mauritius poorer “not only ecologically but also economically” and is considered the “most destructive human intervention in the environment per unit of time” Another environmental change was the exchange of flora and fauna from several continents the VOC was stationed in, to Mauritius. The most common introduction was crops, such as rice, wheat, coconut, and sugar. The latter, from Batavia, would come to be the most important introduction to the island. In modern Mauritius, sugar cane fields are a common landscape (covering 90% of cultivated land) and placed third on the list of exports for 2018. The Dutch introduction of sugar to Mauritius would lead to it becoming known as the “sugar island”. The VOC also brought deer, rabbits and cattle, as well as rats and monkeys that traveled on the ships, that became new predators to plant and animal life. The introduction of these invasive species destroyed indigenous fauna and flora and the deforestation by the VOC crew allowed these invasive species to dominate over the indigenous ones. The VOC’s legacy is still evident across Mauritius, including in its given name; but the most notable effect of their corporate globalisation is on the environment. The Dutch left a completely different island without dodo birds, very few ebony trees, new crops and animals that altered the ecology and decimated some of the native plants, and a small population of ex-slaves. Facing problems with resources and natural disasters, the VOC left in 1710 in favour of their settlement at the Cape of Africa. Dutch Cape Colony The history of European settlement in South Africa begins with the VOC in 1652. A group of VOC crew members, led by Jan Van Riebeeck, arrived in the Cape to set up a trading post as a halfway point between Europe and the main overseas operational hub at Batavia. They decided to colonise the area, beginning an extensive legacy that would come to inform the modern South African state “in political and constitutional terms”. The migration of ‘free burghers’, as well as the subsequent migration of slaves, changed the region and later the entire country through the diversity of peoples, cultures and language. The Cape was one of the most important territories of the VOC, both in function as a stopping point to Asia and in strategy to keep other companies from taking control and achieving a monopoly on the trade route. To ensure the successful maintenance of the colony, the VOC sent foreigners to settle in the area and gave them land. It can be argued that the settlement of the Dutch was one of the most significant points in modern South African history as, similar to the Betawi in Indonesia, cultural hybridization was evident in the creation of the Afrikaaners who became significant in the political and social history of the country. 
Homogenization of culture, or specifically religion, is also evident as the religion introduced by the Dutch and extended by the British, Christianity, is followed by 80% of the population. Involuntary migration was also prominent here, as it was in Batavia, with about 60 000 slaves from Indonesia, India and parts of Africa being sent to the Cape from 1652 to 1807. These slaves were often political prisoners from their countries of origin, sent to the Cape as punishment for opposing Dutch occupation. This migration brought several new cultural influences to the region and created what is now known as ‘Cape Malay’ culture. The mixture of the Dutch with the slaves at the Cape resulted in the creation of the Afrikaans language, and these two groups often shared cultural traditions and practices. The Cape became a cultural ‘melting pot’ as a result of the forced and voluntary migration under the VOC that still contributes to South Africa’s diversity today. The VOC in a global context These case studies show the extensive and diverse legacies of corporate globalisation under the VOC. The most common factor is migration that changed the demographics of these areas and spread different people to different parts of the world. This further impacted culture, religion, language and connected distant areas through the movement of people. Secondly, the VOC often changed the trajectory of economies by changing agriculture or industry and tied these countries to the global capitalist system permanently. Through this process, environments were also altered and Dutch influence in infrastructure was entrenched. These patterns can extend to other colonies of the Dutch VOC and even those of the Dutch West India Company that spread Dutch influence to the western half of the world. On the North American continent, the Dutch established New Netherland that would become the largest metropolitan area in the world, also known as the financial capital of the world, New York. In South America the Dutch had a brief occupation of Brazil, known as New Holland, that permanently changed environment, impacting the sugar industry, and consolidated the unity of the other inhabitants as one nation against the Dutch. They reached, but never colonised, the continent of Australia and named it New Holland. The impacts of their corporate globalisation even extended to the Dutch Republic as they became a leading economic power through ownership of overseas territories, making advances in several industries and encouraging immigration to the Republic. The 17th century was deemed the “Dutch Golden Age” as a result of the advancements made in the economy, science and the military, much of which is owed to the success of the VOC. The Dutch companies’ reach across almost all continents shows the trajectories of their “market expansion [and] the selling of goods and services” (Figure 2). Because of their expansion, parts of the Dutch legacy exist all over the world and play a role in their 21st century reality, whether it be in demographics, the environment, the economy or the society. Through this corporate globalisation, the VOC linked these areas permanently to each other and ultimately, the rest of the world. While discussions on corporate globalisation and its impacts often revolve around whether it is positive or negative, it is important to first look at the extent of its legacy and how it has impacted different societies. 
Focusing on the VOC, it is evident that their global corporate expansion formed part of the corporate globalisation said to have started with these 17th-century trading companies. The characteristics of this corporate globalisation – migration, exchange of flora and fauna, mixing of cultures and language and the linking of economies – impacted each of the VOC's territories in diverse ways. Population demographics changed drastically, new cultures and languages were developed, environments were permanently altered, agricultural crops were spread across continents and economies became intertwined with global capitalism. These impacts have lasted long after the closing of the company in 1799 and although in some areas the Dutch government continued colonisation, it was with the VOC that these processes and legacies began.

This article was originally published on the website of one of History Guild's writers, Madison Moulton, and the text was republished with the kind permission of the author.
You should take color seriously. When the color and contrast of art are well used to draw the eye to the intended object or focus, we are naturally drawn to the art. Black and white abstract art eliminates the opportunity for color to distract, and uses the dramatic differences in color to draw the eye and convey a message. Abstract art requires an appropriate level of technical knowledge to highlight the dramatic differences precisely and not to leave any shade unnoticed. The use of black and white art can, however, add a strong level of artistic appeal to otherwise bright and colorful surroundings. Black and White Wall Art In the case of framed prints, many different paper types are available, such as matte, glossy and velvet finish in black and white wall art. They can add a whole new dimension to the print. To keep your black and white print looking perfect, choose a black, grey or silver frame. Select a complementary color that is complementary with your color scheme. A style of painting that utilizes just two colors relies on their shades and hues. Dark and light values need to be balanced in order to add visual interest and balance to the artwork. In addition to on an artistic level, the image makes you concentrate on the fine details and subtle touches that the artist chose to include. When there is more shading, the picture appears less stark. For bold black and white abstract paintings, use a fewer number of values of lights and darks. A lack of light and darkness creates sharp contrast, forcing the viewer to pay attention. When you add more values to your image, it will appear more realistic and ‘picture-like.’ It is possible for artists to experiment with “colorless” paintings as a way to stretch their abilities and enhance their skills. Because color shading does not cover mistakes or indecision, the artist must rely on his or her drawing skills and a keen eye for shading to compensate for these issues. Artists are stripped down to their most basic skills: drawing. The same is true for painting without color: it relies on two colors, each colored differently, and all the shades in between. Modern decor can be created with abstract art. When these two colors have sharp contrast differences, art incorporating them only can be a wonderful way to balance a wall in otherwise bright and colorful surroundings. This can be used to draw attention to a bright and plain wall or to distract from a visually unpleasant area. It is also possible for these abstract paintings to stimulate nostalgia and comfort depending on the values they use. A work of art without color reflects an older era of photography where color photography was still a requirement for the future. Themed rooms often use monochrome to convey a particular mood. The mood and desire evoked by black and white abstract paintings are different from those of color paintings. Abstract art relies on skill as much as supplies when it comes to relying on shadings and values. The deft application of shadings and understanding of values can turn the work into anything from cartoonish to photorealistic. With these art styles, the home can be bright and colorful while adding visual interest or making a statement depending on the focus of the art. In general, black and white abstract art is not something to be overlooked and is something that could be placed in any home. 
Why You Should Buy Adjustable Beds For Your Bedroom For starters, an adjustable bed will provide the right level of support to people who struggle with poor circulation and pain in their legs. Sleeping on a traditional mattress can compromise the veins surrounding your feet, causing them to swell. This can cause anything from restless sleep to constant leg cramps during sleep. The headboard on many fixed-height beds is also higher than you may prefer for sleeping comfortably, so if you have trouble getting up in the morning without assistance this lack of stability could hinder your mobility even more. All these reasons are just scratching the surface though! There is no reason why an individual cannot benefit by purchasing or renting either a fixed-height or adjustable bed based on what they need it for at that time. Reasons Why You Must Consider Buying Adjustable Beds For Your Bedroom If you are looking for a better way to make use of the space that is available in your bedroom, then why not look into getting one of the adjustable beds for your bedroom. Adjustable beds come in a variety of different styles and options that make them perfect for bedrooms. There are many reasons why you should consider purchasing these types of beds for your room. In this article, we will explore a few of these reasons why you must consider buying adjustable beds for your bedroom. We hope that by the time you have finished reading this article you will have some ideas about why you should purchase an adjustable bed for your bedroom. We will discuss a few of these reasons why you must consider buying adjustable beds for your bedroom. Bedroom Beds – You may consider buying adjustable beds for your bedroom so that you can increase the amount of space that you have available in your bedroom. By getting one of these beds you will be able to fit in more of your belongings such as clothes and even other furniture for your bedroom. The most important thing to remember is that you will need to increase the size of the bed to accommodate all of your stuff. Therefore, if you want to fit in all of your belongings you will need to get a bed that can fit two people. This will ensure that you will be able to place all of your furniture in your bedroom without problems. Space-Saving Design – By getting one of these beds you will be able to save a lot of space in your bedroom. These types of beds have a simple bed frame design that will allow you to utilize all of the space that is available in your bedroom. The most common type of bed frame design comes with a headboard and a footboard that will sit upon each corner of the bed. Most of the time you will find that the headboard and the footboard will be on either side of the bed frame. By using this design you will be able to maximize the amount of space that is available in your bedroom. Durability – When you invest in a bed frame you will need to make sure that it is durable and will last a long time. There are many people who choose to buy these types of beds because they are durable and will last a long time. The first thing that you will want to do is to make sure that you are looking at the materials that the bed frame is made from. If you choose one that is made from wood you will want to look for one that has been sealed properly so that it will not warp over time. Portability – Even though many people prefer a bed frame that is made from wood you will still want to make sure that it is portable. Many people choose to get their bed frames in different sizes. 
This way you will be able to choose the size that will best fit into the area of your bedroom. Most of the time you will be able to move the bed frame around if you need to without the need for having to redecorate your room. Space Saving – A bed frame comes with different options when it comes to size. You will want to make sure that you look at what is available when it comes to the size of bed that you want. If you have a large bedroom you will want to go with a larger bed frame. However, if you only have a small bedroom you will be able to get away with a smaller bed frame. Comfortable -One of the last reasons why you must consider buying adjustable beds for your bedroom is the fact that they are comfortable. You will be able to get various pillow sizes on one of these beds and you will not wake up sore the next morning. They offer firm support to the body and this helps to keep the person sleeping comfortably so that they can get a good night’s rest. If you want to live a better life and get the most out of your time in bed, we suggest buying an adjustable bed. You can take care of all your needs without having to move around as much and it will make sure that you’re getting quality sleep. An adjustable bed is perfect for those who have back problems or just need comfort at night. It’s worth it because even if you’re sleeping 10 hours per day, each hour means more productivity during the rest of the day. If you have been struggling with sleep issues for a long time, an adjustable bed could be the solution. Adjustable beds are designed to promote healthy and restful sleep by taking into account your body’s natural physiology as well as how it changes over time. With a great night’s sleep come better moods, increased energy levels, and even lower blood pressure! At Bedtime Furniture we offer a wide variety of mattresses that will work perfectly with any adjustable base from brands like Serta iSeries or Tempur-Pedic. Know the best fabric for your cushion covers- a detailed guide The image and importance of fabrics have seen an exponential growth over the last 20 years. Homeowners enjoy fabrics in their broadest variety as they want to make their living spaces more radiant and decorative. At the same time, you want to have the bandwidth to provide a catchy interior to your room. You can use different types of fabrics for curtains and sofa cushions. - Linen and cotton are the most popular cushion fabrics. You can find them in fabric stores and home depots. - Both are washable fabrics. Their hypoallergenic properties help in preventing itchiness. - Cotton’s consistent availability makes it extremely viable for your home and furniture needs. - Cotton is undoubtedly a cool fabric. It’s more comfortable inside the rooms, be it home or office. - Linen has mildew-resistant and antimicrobial properties. It’s not prone to pilling, making it ideal for outdoor and indoor use. - It’s also lightweight and cozy as cotton. It’s also heat-resistant, which is why it’s so popular for outdoor uses. A basic guide To make your custom cushion covers, you can choose any type of high-quality upholstery material and fabric. Your first step is to decide on the appropriate print or style to suit your living room or office space. - You may want to use the cushions for outdoors or indoors. Your installation can also affect the fabric type you’re choosing. - Outdoor fabrics must be stain-resistant and waterproof. On the other hand, indoor cushions entail chenille or velvet, both luxurious and opulent fabrics. 
- If you blend linen and cotton together, you can thwart the dread of creased and fading cushion covers. - Chenille fabrics can be the candidate for cushion fabrics because of its fuzzy and soft nature. - It comprises synthetic polyester fabrics. You’ll find that chenille cushions are long-lasting. They don’t undergo fading, wrinkling, or loose shape. - If you have wool curtains, you know the amount of coziness and warmth they induce inside your room. - The pure wool covers have natural fibres. They are 100% hypo-allergenic. However, the wool products aren’t ideal for large, busy family homes because there could be shrinkage issues in the long run. If you want instant luxury, go for velvet cushion covers. The crushed velvet look is simply sumptuous. The covers come in sleek, modern styles. The other fabrics If it’s outdoor furniture, the best choice of fabric for your cushion cover is canvas. This cotton fabric is very strong, hard-wearing, and weather-resistant. - Another huge quality of this fabric is that it’s waterproof, making it a wonderful option for inclement weather conditions, such as rain, snow, cold, or algid conditions. - Canvas is available in a throng of amazing colors that enhance the feel and look of your home and its existing décor. - You’ve already read about wool or silk. If you’re using cushions for decorative purpose in the hallway or bedroom, these two are your best fabric options. - You usually consider silk as one of the royal and luxurious fabrics. It’s damn expensive, necessitating the best care and maintenance. Fabrics like wool and silk are versatile and tactile. They need special cleaning and care, especially if your home has children or/and pets. How Smart Marketers Convert Old Table Runners into Useful Household and Office Items A table runner is a must-have branding tool for businesses. They’re perfect for brand promotion purposes at corporate get-togethers, marketing events, awards, etc. The best table runners in the market are designed for flexibility. They’re typically made of polyester fabrics, which make them useful indoors and outdoors. These durable and wrinkle-free covers can withstand years of damages. When they get dirty, business owners can simply wash their custom covers to make them appear brand new again. However, the custom prints on these covers may get outdated. Your company may change its brand logo, brand colors, etc. Or, the marketing messages on your old set of runners may grow old or irrelevant. What can smart marketers do with their discarded runners? What they do best – get creative! Many marketers have pushed business-to-consumer (B2C) brands to sell products that feature do-it-yourself (DIY) models. Smart marketers apply this same “DIY” mindset when dealing with discarded runners and other marketing materials. The fact that custom runners are absolute gold when it comes to decorating makes the DIY re-design processes easier. Here are some super-creative ways to use old, discarded runners as useful household and office items – Does your office have plenty of empty wall spaces? Don’t buy fancy artwork or posters. Install your table runner at the center of the empty wall. The branding elements on the runners will make the office feel more professional. Vinyl runners are super-durable, so they’ll also protect the walls from spills, dust, or debris. Many old-school desks have ugly regions. For example, the points where the desk’s legs meet the centerpiece often look jaded due to repeated use. 
Hide those parts of office desks with recycled runners. Runners used for branding purposes are typically made of vinyl. Vinyl is a super-strong material that can easily resist moisture damage, ink spills, etc. They're ideal for busy office desks! Many marketers use their old runners as curtains. Small businesses located on ground floors can benefit a lot from using these cost-effective runners as temporary curtains. Share Branding Statements at Front Desks Chairs, tables, sofas, etc., look great when they're in front of custom printed runners. If the runners have vibrant branding colors, they'll look even better in reception rooms or front desks. Employees/visitors feel more connected to the brand when its name or logo is always visible. By re-using your runners as office decoration items, you can drive brand engagement. Cover Electricity Lines In homes and office spaces, many pieces of furniture are often connected to electricity points. The cables make these furniture items look unprofessional and unwelcoming. Workers can use old runners to cover these wires. Vinyl banners and runners are extremely heat resistant and flame retardant. Using them to cover electric cables or wires is completely safe. Durable vinyl runners can last for decades without fading or picking up too much damage. Why throw away such durable marketing materials? Just because the messages or designs on them are not relevant? No! Reuse your durable vinyl runners by following this guide.
Climate Change Impact
Part 1. Background

It is widely accepted that the climate is changing, and will change more in the future, as a result of human activity. I have carried out many studies where I have quantified the impact of changes to climate. These have been in Europe, Asia, the Pacific and Africa. This posting is an introduction. Other postings will examine specific studies.

There is widespread acceptance that the climate is changing and that humans are driving at least part of the change. As a consequence, it is normal for infrastructure projects to examine the potential impact of climate change and then adjust the design to take account of it. This involves developing a quantified timeline of the changes. So, in this and a series of following posts I am going to describe how to quantify the impact of climate change based on my experience in many parts of the world: Europe, Asia, the Pacific and Africa. The purpose of these posts is two-fold:
- Firstly, to pass on my experience to others who are required to quantify climate change.
- Secondly, and unashamedly, to advertise my skills and experience.
This post is introductory. Following posts will be more detailed and specific.

Impact of Climate Change

NASA lists the projected impacts of climate change:
- Change will continue through this century and beyond
- Temperatures will continue to rise
- Frost-free season (and growing season) will lengthen
- Changes in precipitation patterns
- More droughts and heat waves
- Hurricanes will become stronger and more intense
- Sea level will rise 1-4 feet [0.3 to 1.2 m] by 2100
I have examined all these types of impact – and a few more.

A climate model represents the earth as a series of cells (or boxes).
- These cells are of the order of 100 km by 150 km horizontally and have around ten levels of atmosphere and a similar number of levels of the ocean.
- The models simulate the interaction between each of the model cells about once every hour.
- The execution time of climate models is of the order of 1 minute of computer time for one day of simulated climate. Typically, a model will simulate the climate for a period of more than 200 years and the execution time will be a few months.
The above figures are a generalisation for global climate models and individual models will have different values for the above parameters. In particular, regional models, which only represent part of the earth's area, will have a finer grid.

Representative Concentration Pathways (RCPs)

The whole purpose of climate models is to calculate the changes in climate due to human activity and, if these are found to have negative consequences, to evaluate mitigation options. The changes in human activity can lead to an energy imbalance – with more energy being absorbed by the earth than is radiated back into space. The best-known factor is the production of carbon dioxide, which allows more energy into the earth's atmosphere than out of it, but others include the effect of soot particles in the atmosphere and changes to the reflection of radiation. Exactly what humans will do to the atmosphere in the coming century is unknowable, so four possible trends have been considered. These are known as Representative Concentration Pathways (RCPs). They are labelled by the associated energy imbalance in watts per square metre at the end of this century: RCP 2.6, RCP 4.5, RCP 6.0 and RCP 8.5. The first of these would occur if humans severely curtailed their emission of greenhouse gases.
The last of the four assumes a future with little if any limitation of emissions. The use of RCP values was introduced in 2013. Before that the equivalent was SRES (Special Report on Emissions Scenarios) values. Some of the studies I worked on used SRES values.

As stated above, global climate models work at a grid size of the order of 100 km on a side. (The phrase 'of the order of' is used as there is a variety of scales between different models.) However, it is sometimes necessary to consider areas that are smaller than this, for example a specific length of proposed road. Going from a model cell to the specific area is known as 'downscaling'. In theory, there are two methods: dynamic and statistical. However, the 'dynamic method' effectively requires a climate model with a reduced grid size developed for a specific study, which in all but a few cases is impracticable. The alternative, known as the 'statistical' method or the 'delta method', assumes that the changes in climate projected for a model cell apply uniformly over the whole cell. For example, assume that a model cell projects a temperature increase of 2°C but that the observed temperature within the area of the cell ranges from 9°C to 15°C. The projection will be that in all locations the increase will be 2°C. This method places reliance on observed climate data. Sources of such data will be discussed later.

Source of climate projections

For climate projections, I use almost exclusively the Climate Explorer site (https://climexp.knmi.nl) run by the Netherlands' meteorological service (KNMI). The only exception has been in a few cases when 'pre-digested' projections were provided by the client. Use of the web site is free and if you sign up it facilitates use by 'remembering' your previous selections. In terms of projections I use mostly two sets:
- Monthly CMIP5 scenario runs
- Annual CMIP5 extremes
The acronym 'CMIP5' refers to the 'Coupled Model Intercomparison Project Phase 5'. The 'scenario runs' part of the site has output from climate models under four groups: Surface variables, Radiation variables, Ocean, Ice & Upper Air variables, and Emissions. In most cases for impact analysis it is the variables in the first group that are important. These include temperature and precipitation. The 'extremes' part of the site has a second set of projections. These were developed by the Expert Team on Climate Change Detection and Indices (ETCCDI). Values are provided for 31 variables. These include maximum daily precipitation, number of frost days (when the minimum was zero or below), number of ice days (when the maximum was zero or below) and growing season length.

Selection of climate projections

The Climate Explorer site has projections for more than 20 climate models. In addition, some models are run for multiple 'experiments' in which slightly different but credible model parameters are used. So, which one to use? In some cases, there might be guidance on the choice of climate models, for example from previous studies. Often, however, a decision has to be made on which models to use. What I have often done is to compare the simulated climate model output with observed values. This is rarely simple. For example, how do you choose between a model which is biased (with values consistently higher or lower than observed) but which represents the inter-annual variation well, and a different model which is less biased but does not represent annual variations? A short sketch of this comparison, and of the delta method, is given below.
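To make the two steps just described more concrete – screening candidate model runs against observations, and then applying the delta method – here is a minimal sketch in Python. It is illustrative only and is not code from any of the studies mentioned: it assumes the monthly observed and simulated series have already been downloaded (for example from Climate Explorer) into pandas Series indexed by date, and the function names, screening criteria and the +2°C delta are my own choices.

```python
# Illustrative sketch only: screening candidate climate-model runs against
# observations, then applying the statistical ("delta") downscaling method.
# Assumes monthly pandas Series indexed by date; names and values are invented.
import pandas as pd

def screen_models(obs, models):
    """Rank model runs by bias and by agreement with observed inter-annual variation.

    obs:    observed monthly series over the baseline period
    models: dict of {model name: simulated monthly series for the same period}
    """
    rows = []
    for name, sim in models.items():
        o, s = obs.align(sim, join="inner")                 # keep only common months
        bias = s.mean() - o.mean()                          # systematic offset
        corr = (s.groupby(s.index.year).mean()
                 .corr(o.groupby(o.index.year).mean()))     # inter-annual agreement
        rows.append({"model": name, "bias": bias, "interannual_corr": corr})
    return pd.DataFrame(rows).set_index("model")

def delta_downscale(obs, monthly_delta):
    """Apply the projected change for each calendar month uniformly to an
    observed series (additive, as for temperature). For rainfall a
    multiplicative factor per month is normally used instead."""
    deltas = obs.index.month.map(monthly_delta)             # change for each record's month
    return obs + pd.Series(list(deltas), index=obs.index)

# Example with the uniform +2 degC change quoted above (values are illustrative):
# monthly_delta = {month: 2.0 for month in range(1, 13)}
# future_temperature = delta_downscale(observed_temperature, monthly_delta)
```

In practice such a ranking rarely points to a single 'best' model, which is why a judgement still has to be made between, say, a low-bias model and one that tracks inter-annual variation better; some studies avoid the choice altogether by using an ensemble of credible models and reporting the spread.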
Sources of observed climate data

The best source, if available, is from the meteorological and hydrological services in the country you are working in. For various reasons that is not always possible. Sometimes, for example, the meteorological service requires payment which the project has no funds for. Other sources of data include:

Climate change impact

Quantifying how the climate will change is but the first step to estimating the impact of climate change. For example, for the impact on water resources it is necessary to run a hydrological model with, firstly, observed climate data and, secondly, projected climate data.

Climate change impact studies

The following is a list of the climate change impact studies to be covered in other posts.
- Southern Bangladesh. The impact of climate change on rural communities, including temperature and rainfall changes and the effect of sea level rise.
- Tonle Sap is a shallow lake/wetland in Cambodia. The hydrology is complicated as at times the lake receives water from the Mekong river and at times discharges to the river. A model of lake levels was developed which calculated changes in level due to climate change.
- The Mekong River Basin. A hydrological model was developed for the whole of the Mekong basin from the Himalayas in China down to the final flow measuring station in Cambodia. The hydrological model was used to estimate changes in flow due to climate change.
- Great African Lakes. The three 'Great' lakes (Lakes Victoria, Malawi and Tanganyika) are important for their fisheries. Data on lake temperature was decoded and the impact of climate change on water temperature was estimated.
- Hydrology of the Tagus river basin. The Tagus (Tejo/Tajo) is one of the most developed major river basins in Europe. A water resources/hydrological model of the basin was developed and the impact of climate change evaluated.
- Road flooding in Vanuatu. The impact of climate change on road flooding and the rural economy was studied.
- Road flooding in Samoa. Data from different sources were combined to estimate flooding at different elevations. The impact of climate change was also studied.
- Road flooding in Kyrgyzstan. In this case flooding was but one of the potential problems, the other being icing during winter months. Again, the impact of climate change was studied.
- Variation of climate change in Zambia.
- The Yesilirmak Basin in Northern Turkey is highly developed for hydropower and irrigation. It was projected that average flows would decrease and, equally importantly, the seasonal distribution would change. At present, as a result of snow melt, the peak flow is in early summer at the start of the irrigation season; in future the peak flow will be in December.
- The Kagera Basin flows through 4 countries (Rwanda, Burundi, Uganda and Tanzania) before entering Lake Victoria. An extensive database of flow, rainfall and climate was available; this was sufficient for a hydrological model, HYSIM, to be calibrated. It was concluded that the increase in evaporation and the increase in precipitation would to some extent cancel each other out.
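As a closing illustration of the point made under 'Climate change impact' above – that a climate signal only becomes an impact once it has been run through an impact model – the sketch below drives a deliberately simple one-store monthly water balance with observed forcing and then with delta-adjusted forcing. It is a toy model, not HYSIM or any of the models used in the studies listed; the store capacity, runoff coefficient and the percentage deltas are invented purely for illustration.

```python
# Toy illustration of the two-run workflow described above (observed climate
# first, perturbed climate second). This is NOT HYSIM or any model used in the
# studies listed; it is a one-bucket monthly water balance with invented
# parameters, just to show how a climate-change signal is propagated to flows.
def bucket_model(rain, pet, capacity=150.0, k=0.5):
    """One-store monthly water balance (toy model, assumed parameters).

    rain, pet : lists of monthly rainfall and potential evaporation (mm)
    capacity  : size of the soil-moisture store (mm)      -- assumed value
    k         : fraction of the monthly surplus released as flow -- assumed value
    Returns simulated monthly flow (mm over the catchment)."""
    store, flows = capacity / 2.0, []
    for p, e in zip(rain, pet):
        store = max(store + p - e, 0.0)          # add rain, remove evaporation
        surplus = max(store - capacity, 0.0)     # water above capacity spills
        store -= surplus
        flows.append(k * surplus)                # part of the spill becomes flow
    return flows

def climate_change_impact(rain, pet, rain_factor, pet_factor):
    """Run the same model with observed and with delta-adjusted forcing."""
    baseline = bucket_model(rain, pet)
    future = bucket_model([r * rain_factor for r in rain],
                          [e * pet_factor for e in pet])
    change = 100.0 * (sum(future) - sum(baseline)) / sum(baseline)
    return baseline, future, change

# e.g. +5% precipitation and +8% potential evaporation (illustrative deltas):
# _, _, percent_change = climate_change_impact(obs_rain, obs_pet, 1.05, 1.08)
```

Run on a real catchment this is, in miniature, the Kagera result quoted above: a modest increase in rainfall and a modest increase in evaporation can largely cancel, leaving simulated flows not much changed.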
Tooth whitening, also known as bleaching, is essentially the process of lightening the color of your teeth. The bleaching agent is applied to the tooth's surface, sometimes directly, and it changes the color. It is usually used by those who wish to have a whiter smile, as well as by those whose teeth are discolored due to causes like ageing or staining from smoking and drinks containing caffeine. To help the whitening process along, a special light can also be used at the dentist's office. There are several strengths available for dentist whitening kits depending on what level of whitening you want. You should ask your dentist what strength they recommend for your teeth. Generally, tooth whitening kits that are applied correctly, in accordance with the instructions, offer a significant level of whitening. Tooth whitening with the right ingredients and formulas can make your smile appear whiter and sturdier than before. Teeth whitening kits can be much more cost-effective than the dental services provided by dentists. They are reasonably priced and can be used at home without causing any damage to your gums or teeth. The bleaches that are used are mild and can be used by children too. Patients can get bleaching services from numerous dentists. However, these aren't always strong and can cause a lot of discomfort. Dentist whiteners usually come as pastes, gels, and strips. They employ peroxide-based whiteners to make the strips, gels and pastes. Some use hydrogen peroxide-based whiteners. These are very strong chemicals that are capable of making your teeth brighter and whiter. Some of the top-selling tooth whiteners currently available are from TheraBrite and LefyBrite. Dentist whiteners are extremely effective and provide amazing results. However, it is important to ensure that you get your teeth bleached by a qualified professional, as there are certain chemicals and equipment involved in this process that could cause harm to your teeth if it is not done correctly. There are many kinds of products for whitening your teeth, such as strips and gels, as well as toothpaste. You must choose the one that is most suitable for your needs and your budget. You should know that the whitening agents work by applying an acidic treatment to the teeth. The agents can remove stains and discolorations from the teeth and make them shine like new. The treatment can last from only a few minutes up to hours, depending on the severity of your issue. Different whitening agents are employed by dentists to address different issues. Your dentist may use a bleaching gel if you suffer from sensitive teeth. If you're experiencing lots of cavities, your dentist will likely employ hydrogen peroxide as a paste. Your dentist will recommend laser treatment for teeth whitening when you suffer from severe gum disease or staining. Online ordering of dental whitening products is possible. There are numerous online stores selling teeth-whitening toothpastes, gels and other items. You can compare prices and browse the selection of products. Before purchasing a product, make sure to read customer reviews. Some websites don't provide customer reviews or testimonials. You should purchase these items from websites with a high reputation and from reputable dental practitioners. A professional treatment with a dentist is the best way to whiten your teeth.
These treatments can be expensive therefore you could save by purchasing teeth whitening toothpastes online. To make your smile radiant you must always follow the recommendations of your dentist. There are also products available over-the-counter that can be used to whiten your teeth. These gels, strips, and other products are available in a variety of forms. One of the most sought-after over-the-counter items is an whitening toothpaste or mouthwash. All you have to do is brush on the bleaching gel before swabbing it on your teeth. There are many benefits and disadvantages of using dentist whitening, teeth whitening kits and in-office treatments. You may not see the same results when you use an at-home product as if you were going to an in-office treatment. You also cannot perform the treatments regularly as it becomes difficult to remove the stain. If you monitor your oral hygiene, then you will be successful in keeping your smile white and bright. Be sure to brush your teeth twice per day, visit your dentist regularly and only use over-the-counter remedies. What to Look for in the Top Dentist: There are a lot of ways to locate a top dentist. It is essential to start with the best place possible. Consider the price and experience, location, and cost to locate the best dentist for you. You can examine the different locations to locate the right dentist. Doing some research is the most effective method to find a dentist. Ask friends, family members or neighbors for their recommendations. Contact your local pharmacist or your general physician. If you're moving to another city, contact the dentist you currently see to determine whether they're in network. Contact the state or city dental association. Other ways to find a dentist is visiting the American Dental Association website or calling the state or city dental association. They have an approved list of dentists. Another source to find the list of dentists that are approved is on the website of the American Academy for Cosmetic Dentistry. The website also lists approved dentists. This website lists approved providers by specialty as well as by location. It is important to set an appointment when you've found someone you like. It is recommended to make an appointment to have a check-up. This will allow the dentist to determine the dental issues that require to be addressed. If you are comfortable you can make a date for your treatment. There are many ways to locate dental work. You can search your telephone book for dentists or search at the Internet for ads. Call each dental clinic and ask for the receptionist. They will be able to schedule an appointment for you. The most efficient method to find a family dentist practice is to use the yellow pages in your phone book. Shopping online is simpler when the clinic accepts credit cards. Many dentists have payment options that include annual installments along with monthly installments and take out loans. If you have questions about their services, contact the office to talk to the team of pediatric dentists. To get the answers you need you'll need, it's a good idea to visit the clinic in person. A good pediatric dentist will make an appointment for both of your children to see them at the same time. When scheduling your appointment, you will be required to bring the dates for all your children's routine checkups. Your dentist may recommend preventative services that you can do while your children are sitting in their chair. Some options include taking out the garbage and brushing teeth. 
It is also important to be aware of when your family's dental office closes for the day, so you can make your next appointment for cosmetic dental care without worrying that the slot will already be booked. You should also ask your dentist about continuing education (CE). Dentists should be licensed and required to complete continuing education courses. This is crucial as you want to find the right dentist for you. You should also inquire whether they're willing to let you tour the facility so you can examine how well the dental office is set up. In addition, ask them about emergency care services. You'll have peace of mind knowing that your dentist is prepared to handle any issue that might arise during the course of your child's regular visit. Because this is an area of personal taste and financial resources, it's important to look around for the best dentists. There are certain characteristics you'd like to see in a dentist. If you don't feel at ease with the dentist you have chosen, find another that fits your needs. This can help you save money as you get the best dentist possible for your family. It is crucial to know the things to look out for when you are looking for a new dentist. There are many things to consider when choosing the right dentist. If you're seeking better oral health, you should ensure that the dentist you select uses the best methods to care for your teeth. A good dental program will comprise both preventative and emergency care. In addition to that, there are numerous cosmetic dentists who use porcelain veneers, bonding, crowns, teeth whitening, and other procedures to enhance the appearance of your smile. If you're looking to get healthier teeth and gums, then a functional dentist might be the right option for you. When it concerns your oral health, you need to ensure you find a dentist who knows what they're doing. It is the only way to make sure that you get the best possible care for your teeth. Dental implants are small pieces composed of titanium or dental steel that are surgically implanted in the jaw bone to restore the original shape and function after injuries or disease have damaged or destroyed teeth. A dental implant is a thin metal screw that interfaces directly with the patient's bone or jaw to function as an artificial anchor for a crown, bridge, dentures or other dental prosthetic. It is placed in the jaw at an angle that will allow it to serve the necessary function. Dental implants can replace or restore function in many parts of the mouth. Dental implants are tiny metal wires that are less than one millimeter in size. Through a small incision made within the mouth, they are placed into the jaw bone. Once placed, the implants are surgically molded to conform to the jaw and are then secured using a specialized screw to keep them in the desired position. After healing for between six and nine months, patients can use their new teeth and dental implants with much greater confidence than they could before the procedure. There are two kinds of dental implants: endosteal and subperiosteal (both are made from titanium). Endosteal implants are the most popular kind of implant at present. They are made of titanium. Endosteal implants are put into the bone located in the rear of the patient's mouth. The implant is placed in the jawbone, and remains in place until the original dental implants are removed. Once the titanium part is formed into the desired shape, the two pieces are then joined by screws.
The procedure is disguised by braces that cover the jaw. However, the look of a metal brace can alter the way people see you, so be sure to cover your new teeth with it! Dental Implants function just like other prosthetic tooth implants in that they are placed in the jawbone, so that a tooth or set of teeth may be replaced. After the procedure is completed successfully, a fake tooth or teeth can be attached to the bridgework. Dental Implants can be used to replace one missing tooth, multiple missing teeth or all of your teeth. Before placing dental implants in your jawbone Your dentist will help you choose the best type of dental implant for your needs. Dental Implants are one of the most durable and reliable replacement options available. Dental Implants can be replaced with permanent, affordable solutions provided you have a reputable dentist and premium materials. Before you can get the procedure the dentist will examine the condition of your teeth and gums. The dentist may also take x-rays and CT scans to make sure that the implant is a good fit. Implants made by dental implants might not be the best choice if your mouth has a gap that is large between two or more of its teeth. The dentist may recommend the use of a bridge or other temporary solution in this case. If there is not enough space between your molars, the implant or the bridge, the dentist may recommend a partial plate or a bridge to strengthen the jaw and protect the jaw for long-term usage. Dental Implants may not be suitable for all. Invisalign could be a good option if your jaw is too small, or if you have a wide gap. This treatment is a way of aligning your teeth with dental crowns or false teeth. Your periodontist will likely suggest Invisalign for patients who don't have enough space between their teeth to comfortably support an implant. If you have false teeth or dentures You may be able to switch to Dental Implants. To ensure a successful placement, you will have to undergo numerous oral exams before you can make the switch. If the implants look healthy and your periodontist is satisfied, they will suggest two weeks of oral hygiene. When you don't eat, you will be treated with sedation and a painkiller to control your discomfort. The dentures or false teeth are required to be removed and implants will be placed into your gums. Taylor-Osborne encourages anybody trying to find a dentist to share previous oral experiences or oral concerns, including any anxiety (Best Veneer Dentist Near Me). "Make certain the dental expert comprehends your issues and responses all your questions," she states. Choose a Partner Above all, you desire to pick a dentist who can be a part of your overall health care team. Taylor-Osborne says. Dentist Cleaning Near Me. "Search for somebody who can be a coach to inspire you, a trusted consultant to turn to when health concerns emerge and a partner to make dental care decisions with." More from Mouth, Healthy. If you are trying to find a partner in dental care, then you need to make sure you discover the right dental expert for the long term. However, it is not as basic as it sounds. There are lots of practitioners in Kenosha alone (Wisdom Teeth Removal Dentist Near Me). It could take a good quantity of time and effort, but here are a couple of ideas to help you accelerate the procedure. 1 - Dentist With Payment Plan Near Me. Aim for Convenience This does not constantly mean proximity to your home - Dentist Implants Near Me. The dental office could be near to your work or your kid's school. 
Aside from location, you must also consider office hours. It will be difficult to arrange consultations with a dentist who only works when you are at work too. 4. Ask About Education and Experience. Does the dentist have any accreditations? Are they members of any associations? What kind of specialized training has he or she completed? How long has the practice been in business? Does the dental practitioner take part in ongoing or continued education and training? These are very important concerns to find the answers to prior to committing to a dentist. Check to see if staff wear gloves, what the treatment rooms look like and the type of technology they use around the office. If you call ahead, you can find out whether it is possible to take a tour of the office before you make a consultation. This will give you first-hand experience of the personnel, the office layout and the dentist.
<urn:uuid:7f6f3dfb-d64a-4019-a482-bf843556f162>
CC-MAIN-2021-43
https://dentist-with-payment-plans.slo-istra.com/page/read-veterinary-fAcmaUc2oDM4
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00071.warc.gz
en
0.953423
3,115
2.640625
3
What exactly is flash sync speed and should it be a factor in a buying decision? Flash sync speed is the maximum shutter speed possible when using a flash. For most flashes, the flash sync speed, sometimes also referred to X-Sync speed due to the use of Xenon in the flash bulb itself, is around 1/200th to 1/250th of a second. When using flash, your maximum shutter speed is limited to the flash sync speed. In many cases, this is perfectly adequate, as the flash pulse itself is sufficiently short enough (around 1/1000th of a second), and brightly lights up the scene beyond the normal ambient lighting for a fraction of the time the shutter is actually open. Flash sync speed can sometimes be a limiting factor, such as for action photography, as 1/200th or 1/250th of a second may not be enough to stop some kinds of action being photographed when fill flash is not able to overcome ambient lighting. Some higher-end camera gear is capable of higher flash sync speeds. Some models support up to 1/500th of a second, which is better for photographing action. There are also alternative flash sync modes for better camera gear and flash gear. Normally, flash is synced with the "forward" shutter edge, and fires when the forward shutter curtain edge has opened and is moving. An alternative sync mode that syncs flash with the "back" shutter edge. When shooting with back curtain flash, you can produce action ghosting, and freeze your subject at the very end of the exposure, which is sometimes a desirable effect for sports photography. Finally, there is "high speed sync." With this alternative sync mode, cameras may sync to flash at any shutter speed, even up to 1/8000th of a second on top-end models. High speed sync does have some limitations. With normal flash sync, there is a single pulse of the flash. In high speed sync mode, the flash pulses continuously thousands of times a second. This ensures that the scene is illuminated for the duration that the shutter is open and accommodates the behavior of a camera shutter at such speeds. The drawback here is that to provide enough power for continuous flash pulses, the power of each flash is less, by around 1 stop per stop of higher shutter speed. Additionally, since the scene is illuminated continuously for the duration the shutter is open, the flash itself is not as useful for "stopping" action. This is often not that big of an issue, however, as the higher shutter speed itself is capable of freezing action (particularly at 1/4000th or higher.) If you don't need high-speed flash sync, any flash supporting a standard sync speed will suffice. If you need to sync flash at extremely high shutter speeds, then you will need both a camera body and a flash that supports high speed sync. You won't be quite as limited with high speed sync, but keep in mind that the power of your flash will be a little less than normal. (Generally, this is not a problem at all, and you can usually open your aperture to compensate...but it is a factor to be aware of.) Short Answer: Probably not a Factor in Buying Decision The sync speed is the fastest shutter speed you can use when using the flash. It depends mostly on the camera body. Most flashes will sync with most cameras at around 1/250. Unless you have specialist requirements, they will all perform in a very similar way. Other factors are more likely to help you decide which flash to buy. (e.g. price, manufacturer, recharge time, length of flash, controls, remote control, swivel, ...) 
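As a rough illustration of the 'about one stop of flash power lost per stop of shutter speed above sync' behaviour described above, here is a small sketch. The 1/250 s sync speed and the exact one-stop-per-stop rate are assumptions for the example, not the specification of any particular camera or flash.

```python
import math

# Rough rule of thumb: in high-speed sync, flash power drops by about
# one stop for each stop the shutter speed is raised above the sync speed.

SYNC_SPEED = 1 / 250          # assumed X-sync speed, in seconds

def hss_power_fraction(shutter_speed, sync_speed=SYNC_SPEED):
    """Approximate fraction of normal flash power available in HSS."""
    if shutter_speed >= sync_speed:
        return 1.0                          # at or below sync speed: full power
    stops_above_sync = math.log2(sync_speed / shutter_speed)
    return 0.5 ** stops_above_sync          # lose ~1 stop (half power) per stop

if __name__ == "__main__":
    for denom in (250, 500, 1000, 2000, 4000, 8000):
        frac = hss_power_fraction(1 / denom)
        print(f"1/{denom:<5d} -> ~{frac * 100:5.1f}% of normal flash power")
```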
freeze the action: This does not typically affect your ability to freeze the action, because (as @che points out) the flash only lasts for 1/1000 of a second (typically), so for images where the subject is being illuminated mostly by the flash, it will be frozen as though you were using a shutter speed of 1/1000. Confusing? I find it a bit confusing, to be honest, but it does all work out logically once you get the hang of it! X-sync is the fastest shutter speed at which the shutter is entirely open at some point, and thus allows use of flash. (You don't want it to light just the top half of the frame.) X-sync differences don't play that much of a role in stopping action (flash pulse duration is around 1/1000 anyway), but rather in eliminating ambient light (or overpowering the sun, if you want) with flash in photos where you have both kinds of light. If you have a flash-lit portrait outside, you usually want to have the surrounding landscape a bit darker than the person. Now, if you use the full power of your flash, and get proper exposure of the foreground at f/8 and ISO 100, you might even get the background overexposed at 1/200 sec if it's a sunny day. Being able to go to 1/500 might make the shot possible, or allow you to raise ISO to 200 and save some flash power to get faster recycle times. Sometimes you can kind of "cheat" by using high speed sync which fires the flash multiple times to cover all parts of the frame. This doesn't really help in this situation as it eats flash power you need to have the foreground exposed properly. IMO, it can be a factor in a buying decision, but when/if it is, you'll generally know it ahead of time. That is to say, if you've been running into problems (e.g., when using flash under daylight) and wishing you could get a faster flash sync, then it can certainly be worthwhile to get a body that syncs at higher speed. On the other hand, if you haven't run into a problem, then chances are pretty good that you really don't care. At one time, X-sync speed was a serious consideration. When most cameras only synced at up to 1/60th or 1/90th, there were quite a few situations where it caused a problem. The obvious problem arose when you had decent (but not really bright) ambient light. If you wanted, say, 1/500th to stop action, but only had ambient light to support (say) 1/125th, you ended up with problems either way -- if you didn't use a flash, you only got 1/125th, and with it blur. If you did use flash, you could only use 1/60th, so you had to stop down quite a bit and use the flash at (close to) maximum power to overpower the ambient light enough to keep it from leading to "ghosts". Shooting at full power, however, led to longer flash cycle times, so you were more likely to miss shots as the flash recycled. Unless you collect relatively old cameras, however, you'll probably get an X-sync of at least 1/200th, which is enough to prevent problems under most conditions. Being able to go higher is sometimes handy, but not all that crucial. A few additional bits and pieces, though it's probably more in the range of trivia than useful information for most people: - You can get X-sync at up to 1/1600th of a second on a few cameras (some of the PhaseOne medium format bodies). Flash selection gets tricky though, because to work right, you need a flash with a duration of less than 1/1600th of a second, where many are around 1/1000th, and some studio flashes have even longer durations.
This can give some pretty strange effects though -- at 1/1600th, with a decent-sized studio flash, the flash can overpower the ambient light to the point that even shooting in broad daylight, you can make it look almost like you were shooting at night, with the sky and most of the background relatively dark. You do have to be careful, though, to avoid the "deer in the headlights" look. With care, however, you can help isolate the subject, even (for example) with a background that would otherwise be excessively busy and distracting. - As far as I know, the fastest X-sync with a focal plane shutter was at 1/350th (Minolta Maxxum/Dynax/Alpha 9 and 9xi). - Those same cameras did high-speed sync at their maximum shutter speed of 1/12000th.
A higher flash sync speed is useful if you wish to shoot with flash at a large aperture (because you want low DoF) but are shooting a subject that has bright backlight. A typical instance where this might happen would be a wedding shot with the sun behind the subject. You want flash because you want to bring up the dynamic range, you want a large aperture because you want low DoF, and you have bright light so you want a shutter speed faster than 1/250th. In this case a camera that can do a 1/500th sync speed is a lifesaver, and if you were doing wedding photos for a living I guess you would rely on that for bright-light shooting. Is it a factor in a buying decision? If you're going to want to use flash in bright light (for dynamic range) and a large aperture for low DoF, then YES. Otherwise NO.
What is flash sync speed?
The flash sync speed is a limitation that's based on the shutter mechanism of the camera. Generally speaking, flash bursts can be much shorter than the shutter speeds of the camera. And with focal plane shutters, the shutter speed is determined by the gap between the first and second curtains as they travel across the frame. At the sync speed, that gap is big enough to leave the entire sensor/frame of film uncovered during the exposure. When you go faster than that, the gap is smaller than the frame, and you'll get black bars where the curtains cover the frame (with dSLRs, it'll be at the top and/or bottom of the frame). A typical maximum sync speed for dSLRs is around 1/200s.
High speed sync
To get around the limitation of X-sync speed, some camera/flash combinations are capable of high-speed sync (called HSS or FP [focal plane] flash) -- the camera and flash communicate so that the flash can send out multiple bursts timed to follow the travel of the gap across the sensor, so that the whole sensor sees the same amount of illumination from the flash. But this will reduce the power output of the flash by roughly two stops. High-speed sync becomes useful in two basic situations. (See also: Neil van Niekerk's Tangents post on when to use HSS.) The first is when you're working in bright sunlight and want a thin depth of field: the sync speed limitation can put you in a situation where overexposure is inevitable unless you use neutral density filters or HSS. The second is when you're in a flash situation where you can't kill the ambient and you need to freeze fast action (if you can get the majority of the illumination from the flash, however, the flash burst is probably brief enough to freeze the action on its own). So whether or not sync speed affects a purchase decision depends on the following factors: Do you shoot flash? If not, it doesn't matter. Do you plan to use flash with a fast shutter speed?
If you're never going to use flash for fast-action photography where you can't kill the ambient, or you don't want thin depth of field in very bright conditions with fill flash, then you can probably limit your shooting to at or below your sync speed. Studio shooting, for example, typically doesn't use HSS. Does the camera you're looking at have a particularly slow sync speed? 1/200s as a sync speed is one thing. 1/160s is another (especially with cheap manual radio triggers that add a delay). And 1/10s is another. For example, the Panasonic GX-7 has a very respectable sync speed of 1/320s. With the built-in flash. It's 1/250s with an external flash. And in silent mode, it's 1/10s -- so flash is completely disabled with an external flash. These are limitations you would probably want to factor in when deciding whether or not to buy this camera body. Can your camera body do high-speed sync? Not all camera bodies can. So, if, for example, you have a Nikon D3x00 or a Dx500 body, then the sync speed takes on more importance, because those bodies don't do HSS/FP, so you'll never get above that X-sync speed with flash. The GX-7 I mentioned above can do HSS with a four-thirds HSS-capable flash. Are the flash and triggers you're looking at HSS-capable in your system? Not all flashes or triggers do HSS, either. If you're using off-camera flash, then the camera, flash, and triggers all need to be capable of communicating the sync signals for HSS or you'll be limited to your X-sync speed. In addition to the other answers about flash, there is something else that can be interesting, and it is not related to flash use: X-sync speed is also the time difference between the capture of the top and the bottom of your picture. Even if you use the fastest speed (1/8000 for example), there will be about 1/400th of a second (more or less, depending on your sync speed) between the capture of the opposite sides of your frame. This is not much, but with some fast-moving subjects (the ones for which you are going to need the fast speed in the first place), this can lead to some rolling shutter effect. Using the numbers I cited earlier, this effect will be 20 times more important than the motion blur. A sports car going at 180 km/h will have moved 6 mm in the 1/8000th of a second, giving no noticeable motion blur, but 12.5 cm in the 1/400th, which will really lead to oval-looking wheels. If you do high-speed daylight photography, flash sync speed can be a factor in the buying decision.
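The curtain-travel arithmetic in the last answer is easy to check: distance moved is just subject speed multiplied by the relevant time. The short sketch below reuses the figures from that answer (180 km/h, an exposure of 1/8000 s, and a curtain-travel time of roughly 1/400 s); nothing else is assumed.

```python
# Back-of-the-envelope check of the curtain-travel example above:
# distance moved = speed * time.

SPEED_KMH = 180.0
speed_ms = SPEED_KMH / 3.6          # 180 km/h = 50 m/s

for label, t in [("exposure time 1/8000 s", 1 / 8000),
                 ("curtain travel ~1/400 s", 1 / 400)]:
    distance_mm = speed_ms * t * 1000
    print(f"{label}: subject moves about {distance_mm:.1f} mm")
```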
<urn:uuid:e997d950-363f-4f84-9cb6-10f90b6fc07f>
CC-MAIN-2021-43
https://photo.stackexchange.com/questions/836/what-exactly-is-flash-sync-speed-and-should-it-be-a-factor-in-a-buying-decision/849#849
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00470.warc.gz
en
0.954554
2,945
2.9375
3
Who said games aren’t educational? While these games don’t fit your typical educational game mold, they’re just as capable of engaging kids while teaching them important lessons. Here’s a list of our favourites that you should also get to know! Educational Game Guide For Kids In Singapore 1. Assassin’s Creed OriginsWhat This Game Teaches: Ancient History Where To Play It: PC Why watch a documentary about Giza Pyramids and Roman Aqueducts when you can interact with them in an educational game? Explore Ancient Egypt with the Discovery Tour from Assassin’s Creed Origins! With topics ranging from Daily Egyptian Life to Roman history, each tour is bite-sized so players can comfortably consume these nuggets of information. Also involve your character in various activities and participate in history by browsing the Ancient Library of Alexandria or making your own Egyptian pottery. If you’re worried about non-educational aspects like combat, Ubisoft has removed them entirely so your kids don’t get distracted! [caption id="attachment_19629" align="aligncenter" width="640"] (Credit: Mary Harrsch / Flickr)[/caption] 2. Kerbal Space ProgrammeWhat This Game Teaches: Physics, History, Aerospace Engineering Where To Play It: PC, Mac, PS4, Xbox One Make rocket science fun with Kerbal Space Programme! While it might appear goofy with dorky Kerbal characters and hilarious YouTube videos, this educational game teaches the basics of Aerospace Engineering by enforcing Kepler’s Planetary Motion Laws and Newton’s Law Of Motion. If all this sounds too much for you, don’t worry! Kerbal Space Programme has worked with schools to create KerbalEdu: a school-ready version of the original game. The helpful Earth History Campaign covers everything from instantaneous acceleration to orbiting and includes lesson plans to guide players along. You’ll even get to build historic spacecrafts like the V-2 and Sputnik so kids can witness them in action. [caption id="attachment_19634" align="aligncenter" width="640"] (Credit: Not a real bear / Flickr)[/caption] 3. FactorioWhat This Game Teaches: Programming, Optimising Systems Where To Play It: PC, Mac OCD players beware; Factorio isn’t much of a looker but it offers an engaging reward loop system! At its heart, you’ll find an educational game that allows players to experiment and automise everything (conveyor belts, robotic arms, trains). To engage your kids critically, create a factory with a problem like a bottleneck at the refining stage or an unnecessarily long transport route. By letting them resolve the issue, they learn about trouble-shooting, the importance of optimisation and how many smaller parts contribute to the greater whole! [caption id="attachment_19632" align="aligncenter" width="640"] (Credit: Newtomic / Flickr)[/caption] 4. Human Resource MachineWhat This Game Teaches: Programming Where To Play It: Nintendo Switch, PC, Mac, Android, iOS Want to introduce your kids to programming and coding? Try out the educational game, Human Resource Machine! The game oozes a Tim Burton-esque charm as your character has to solve corporate objectives in a grim, monochrome setting. While transferring items from an inbox to an outbox seems simple, your boss continually throws hurdles at you by enforcing various restrictions. Fortunately, the interface is intuitive enough for any budding coder to use! Even when you’re finished with a problem, the game encourages you to optimise your script by adding optional goals. 
Go from a complete newbie to pro as you debug and test your way to success! [caption id="attachment_19633" align="aligncenter" width="640"] (Credit: BagoGames / Flickr)[/caption] 5. MinecraftWhat This Game Teaches: Creativity, Problem Solving, Teamwork Where To Play It: Nintendo Switch, PC, Mac, Android, iOS, PS4, Xbox One Minecraft has been popular the past decade because it’s an educational game that appeals to all ages! Thrusting you into a pixelated world, players must survive while building houses and mining materials. Sky’s the limit as players can build anything, from towering medieval castles to sculptures of their favourite characters. The game’s built-in redstone circuit system even encourages players to create moving figures: with a flick of the switch, you can create moving structures that automate certain tasks. Minecraft also encourages collaboration as players pool resources and work together, allowing kids to learn about the wonders of teamwork! 6. Epistory – Typing ChroniclesWhat This Game Teaches: Typing Where To Play It: PC, Mac If your kid finds typing boring and tedious, let them try Epistory – Typing Chronicles! Featuring a colourful origami-like world to explore and typing as its core mechanic, Epistory is an educational game that allows players to roam freely. Playing as a girl riding on a fox, you fend off danger by typing in words that appear above enemies to destroy them. Do this in quick succession and you’ll accumulate a multiplier which grants you more points to upgrade yourself. The game is great for younger ones to flex their typing skills, especially in the arena battles where enemies relentlessly pour in and you’re forced to defend yourself by typing rapidly. [caption id="attachment_19628" align="aligncenter" width="640"] (Credit: Fishing Cactus / WikiMedia Commons)[/caption] 7. Universe Sandbox 2What This Game Teaches: Space And Gravity Simulation Where To Play It: PC Universe Sandbox 2 is an educational game that poses the question: what happens if you had complete control of the Solar System? Discover accurate recreations of planets like Jupiter and Venus as you adjust how fast time passes while enjoying the relaxing piano background music. Or you can skip the boring parts and go straight to experimenting with the destructive features of black holes and planetary collisions. Filled with flashy effects and jaw-dropping graphics, this educational game proves that you can learn while having fun experimenting with destructive physics. [caption id="attachment_19638" align="aligncenter" width="640"] (Credit: junaidrao / Flickr)[/caption] 8. Papers, PleaseWhat This Game Teaches: Attention To Detail, Empathy, History Where To Play It: PC, Mac, iOS Don’t let the dreary design and mundane premise fool you, Papers, Please is an excellent educational game that references life during the fall of Russian communism! As border control, you’re tasked with only admitting people if they have the right papers and earn income based on performance. The game throws numerous curveballs at you, ranging from passport discrepancies to weight differences because they’re carrying firearms! The game also touches on morality as you decide whether to let in criminals because of a bribe or if a couple gets separated because of false paperwork. With multiple endings and long-term consequences, try this game if you’re looking for a unique experience. [caption id="attachment_19636" align="aligncenter" width="1280"] (Credit: Papers, Please / Facebook)[/caption] 9. 
Valiant Hearts: The Great WarWhat This Game Teaches: World War 1 (WW1) History Where To Play It: Nintendo Switch, PS4, Xbox One, Android, iOS While you could use a history textbook to educate your kids, few games have tackled WW1 better than this gem of an educational game! Inspired by tragic WW1 war letters, Valiant Hearts: The Great War explores the first World War from the eyes of a modest soldier as each character experiences the horrors of war. The storyline and music plucks at your heartstrings as you desperately take cover behind mounds of corpses and witness soldiers dying within seconds. To help with educational efforts, the game makes numerous references to real-life locations like Verdun in their helpful colourised images that include informative captions. [caption id="attachment_19639" align="aligncenter" width="640"] (Credit: Carlos Hergueta / Flickr)[/caption] 10. Cities: SkylinesWhat This Game Teaches: City Planning, Management Where To Play It: Nintendo Switch, PS4, Xbox One, PC, Mac Cities: Skylines is an educational game that starts out simple but soon adds layers of complex city building elements! Players learn city planning basics and crisis management as they take on the role of city mayor. With over 36 km² of land, this sandbox strategy game grants players freedom to decide how to run and structure the city. You’ll definitely make mistakes as you encounter congested junctions and unregulated pollution but you’ll learn from these mistakes to make your next city better! Ultimately, your kids will gain a sense of satisfaction as they see their little town expand into a bustling city. [caption id="attachment_19630" align="aligncenter" width="640"] (Credit: Kakha Kolkhi / Flickr)[/caption] 11. Dynasty Warriors SeriesWhat This Game Teaches: Chinese History Where To Play It: Nintendo Switch, PC, PS4, Xbox One While it has earned a reputation for over-the-top combat and hilarious weapons (fans, paint brushes, tarot cards), Dynasty Warriors does its best to accurately represent The Three Kingdoms period! Featuring legitimate historical figures like Cao Cao and Zhuge Liang in epic battles like the ones at Chi Bi and Wuzhang Plains, you’ll learn plenty about Chinese History as you hack and slash your way through enemies. The game’s melodramatic narratives and cutscenes also give players insights on faction motivations as you witness the rise and eventual fall of the Wei, Wu and Shu empires. [caption id="attachment_19631" align="aligncenter" width="640"] (Credit: BagoGames / Flickr)[/caption] 12. Scribblenauts Mega PackWhat This Game Teaches: Creativity, Problem Solving Where To Play It: Nintendo Switch, PS4, Xbox One Scribblenauts Mega Pack contains two of the best educational games: Scribblenauts Unlimited and Scribblenauts Unmasked! The game tasks players with solving puzzles by entering solutions and bringing them to life with extravagant adjectives. Do think outside the box as the game welcomes all kinds of creative answers! Scribblenauts is perfect for any child to think of the limitless solutions on offer. To further draw players in, Scribblenauts Unmasked includes an entire roster of DC characters for you to include in your puzzle solutions as you gawk at how adorable they look in the game’s iconic art style! [caption id="attachment_19637" align="aligncenter" width="820"] (Credit: Scribblenauts / Facebook)[/caption] Love playing games that feature superheroes? Why not check out our guide to the best super hero games! 
Also, if you need some gift ideas to make your kid's day, our Children's Day gift guide should do just the trick!
<urn:uuid:f8f0e08a-18eb-4b21-9ca2-ef9b93dfdc04>
CC-MAIN-2021-43
https://shopee.sg/blog/educational-games-unique/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585242.44/warc/CC-MAIN-20211019043325-20211019073325-00350.warc.gz
en
0.889306
2,349
3.0625
3
Composition in photography is something that every photographer should know. It is the basic concept of photography. If you haven’t heard of this that means you are new to it. If you are doing photography for some time and still not aware of it. You need to read this article. Every other photography word sounds difficult. But it’s easy to understand with proper guidance and practice. Here in this blog, I will talk about composition in general and what it means in photography. Also how to use it in your photographs. And what are the different techniques to get proper composition? So let’s dive in to get better with it. What is Composition in General? You might have heard this word somewhere. The literal meaning for composition is. To add or to mix something to produce an effective outcome. When you will relate it with the general world. You will find it everywhere. A block of concrete is a composition of cement, sand, grits, and chemicals. The food you eat is a mixture of so many ingredients. Yes, those ingredients are the composition like oil, vegetable, fruits, and Spices. So here the final words: “Adding something to the system to process an effective outcome” Now the let’s see it from the artist’s point of view. Let it be music, painting, filmmaking, or photography. All use these words in their field. Its simple means to add elements to produce something effective. The musician composes music by adding sound and track of different instruments. The painter adds different colors to their canvas to draw a beautiful painting. The filmmaker added many footages to create one movie. So like all other artists it’s same with a photographer in photography. But in their own perspective. Let’s check out, what is the composition in photography? What is Composition in Photography? Composition in photography means adding and arranging elements in the frame to capture a picture. It’s similar to all other artistic platforms. But here you arrange objects and subjects in the available scene to produce appealing photographs. Before we get deeper into the composition let me speak a few words about frame and framing. It will make it easy to understand. A frame in photography is the structure beneath which all your photographs are captured. It is always rectangular. The reason behind it is the image sensor, which is rectangular shape. Framing in photography is nothing but the process of deciding the frame of your photographing scene. Proper framing can take your photography to the next level. As everything starts from here. Once there was a photographer who learned photography without having a DSLR. Just by using his basic smartphone, and a simple editing app. His pictures got popular through Instagram. The pictures were so appealing people started sharing and learning his ways of taking pictures. The question is why his pictures were so appealing and popular to the viewers. It was because of his framing techniques. He was limited to his mobile and he did all the possible things to make his photographs look beautiful. Thus, he explored framing and its benefits in photography. So this is the basic thing which will make your photos stand out. So, how is it related to composition? Everything in your photographs is within the frame. Whatever you compose will be inside the frame. So before composing, decide your frame to get better images. Here’s the final word: “ Composition means deciding, adding, and arranging the elements of your scene into the frame. To capture a balanced image.” How to Compose Your photos? 
Composing well makes your photographs look amazing, and it's easy to frame a better composition. There are many composition techniques you can follow to capture a balanced image. I will talk about the different ways of composing later on, but before that let's check out some basic things you need to do while composing:
- Know your needs for the photograph
- Check the lighting conditions
- Decide your subject
- Set other priorities after the subject
- Check the background and foreground
- Check the available colors for better contrast
- Create your frame
- Check what can be added to the frame and arrange it
- Dial in your camera settings
- Click the shutter button
OMG! Such a long checklist. Who will bother to follow all this? Oh, wait a second! If you're a beginner photographer, you will come to follow these steps unconsciously, as they become a habit with regular practice. The process sounds long, but all expert photographers do this, even if it is never written down or said aloud, because it takes very little time to decide these things. Now let's check out the different ways/methods to compose your photos.
A Few Ways to Compose Your Photos
There are many different methods you can use to compose your photos. It's not a hard rule that you must follow them; they are just proven approaches which work pretty well in specific situations. You can compose your pictures as you wish, as long as the result meets your needs and conveys your message to the viewers. I have discussed the important ones in this list. They are:
- Rule of thirds
- Leading lines
- Symmetry and patterns
- Diagonals and triangles
- Fill the frame/Crop it
- Use negative space
- Isolate the subject
Rule of thirds
The rule of thirds is the most common composition method in photography. You must have heard of this; if not, we will make it easy for you to understand. In this method, we divide the frame into thirds, both horizontally and vertically: three sections across and three sections down. Look at it this way: divide the frame into 9 parts by drawing two straight horizontal lines and two straight vertical lines. This forms a grid of nine squares, and the four points where the lines intersect are known as points of interest. The rule says that placing the subject on or near any of these points of interest will make the picture more balanced. You can also place the subject on or near one of the lines. It is not always necessary to follow the rule of thirds, though. There can be exceptions which make the picture look even better. A good example of following the rule is a bird in flight placed on a point of intersection, flying towards the right; an example of breaking it is a portrait shot with the subject in the middle of the frame, which can still look pretty good. So you can break the rules.
Leading lines
If you find any kind of line in your frame leading towards something, it can give you an appealing picture. It conveys a sense of direction, a path leading towards the subject. The leading line can be anything in the frame, like a road, a pattern, or any object. If you see any leading lines in your frame, utilize them. Along with one method, you can combine many other methods in one composition; for example, you can use the rule of thirds with leading lines. These lines do not need to be straight; they can be curved as well. It depends on how you see it and how you want your viewers to see it. Leading lines help capture a more balanced composition.
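To make the rule-of-thirds grid concrete, here is a small sketch that computes the grid lines and the four points of interest for a given frame size. The function is not tied to any particular camera or editing tool, and the 6000 x 4000 pixel frame in the example is just an assumed size.

```python
# Compute rule-of-thirds grid lines and points of interest for a frame.

def rule_of_thirds(width, height):
    """Return the two vertical lines, two horizontal lines and the four
    intersection points ("points of interest") for a frame of the given size."""
    vertical_lines = [round(width / 3), round(2 * width / 3)]
    horizontal_lines = [round(height / 3), round(2 * height / 3)]
    points_of_interest = [(x, y) for x in vertical_lines for y in horizontal_lines]
    return vertical_lines, horizontal_lines, points_of_interest

if __name__ == "__main__":
    # Assumed 6000 x 4000 pixel frame (a typical 3:2 sensor)
    v, h, points = rule_of_thirds(6000, 4000)
    print("Vertical grid lines at x =", v)
    print("Horizontal grid lines at y =", h)
    print("Points of interest:", points)
```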
Symmetry and Patterns If you will look at the literal meaning of symmetry it says the equal balance part of something. Yes, exactly you can do the composition in such a way that there is a symmetry formation in the image. You can also do a mix of other composition methods for a balanced image. Patterns make the picture look beautiful. If you find any pattern formation in your scene. You can add it to your composition. The patterns don’t only mean geometrical shapes like square, triangle, or a circle. It can be of any shape and size. It all depends on the way you define your frame and arrange the patterns into it. Colors the most important one and the least notice one among the beginners. The colors have their science behind influencing people. It’s the viewers in your case. It makes your image look more attractive. Every color has its value to make the picture look more powerful. Everything white in the frame will not look good. Until you add a spot of different colors which makes it more attractive. Like black, red, or whatever you find to add in the composition. The colors will be the objects and the subjects of your scene. Use the Color composition to make your pictures look more attractive. As said above you can add other methods to make it even better. Monochrome photography has only two colors black and white. But have you ever noticed those pictures? It gives such a powerful message and looks amazing to see. There is one small factor of color which makes it different. Diagonal and Triangle The shapes like forming diagonals and triangles with the elements of the composition will make the picture look dynamic. While deciding your composition you can find a diagonal line or a triangle shape. You will mostly find these shapes in architecture, building, or bridges. You can use the method in landscape photography or architecture photography. Though it’s not necessary to use this in this mentioned photography only. You can do it in any photography if you find these shapes. Fill the frame/Cropping Fill the entire frame with particular subjects. You can do the same by cropping the frame and fill the frame with the subjects. This composition is best for photography, where you want to show the features of the subject. Like in portraits photography and product photography. Where the details matter to the viewers. Take an example of a magazine wherein in the front page a beautiful model is displayed with her eyes and lips only. The frame is filled up completely. The ad was of a makeup product. So from here, you can make out the purpose of such composition. Use Negative Space First of all let’s understand, what is negative space in photography? The unwanted space apart from the subject in your frame is the negative scape. Suppose you followed the rule of thirds and placed the subject on the right vertical line. Now that the subject is on the right there is enough space on the left. Such cases utilize those spaces by adding them to the frame. Using the negative space makes the pictures look more attractive and add sense to it. Isolate the subject The most common one if you have not noticed then let me tell you. The picture with the blur background or foreground. Oh, I see, Yes, these are images you love the most. It is all because the focus is isolated on the subject by blurring the background. This is one of the best ways to make the subject stand out from the background. You have often seen such types of photos in portraits photography. 
Also in wildlife photography and candid wedding photography. In portraits, we isolate the subject by blurring the background. The same with capturing animals. We tend to isolate the animal by blurring the background. These types of photography happen when you want the focus of your viewers on a particular subject in the frame. This is the old school composition. Any newbie knows to check the background while capturing images. Even the kids know the background is good or bad for the picture. Background in photography places an essential role. If it is not good it can ruin your picture. So whenever you are hold your camera do consider this. Even if you follow any other composition method in the same frame. Sometimes unknowingly, you fail to see some unwanted objects in the background. Which makes you feel bad when you see it in post-production. This generally happens in street photography or while doing candid photography. Later on, you feel why I didn’t notice this object in the background. If possible you can change the background by photo manipulation in photoshop. Now you have understood about composition in photography. It will help you to create balanced and appealing images. This is one of the important concepts you should know before starting your photography. In this blog you have learned about composition and the different techniques used for it. If still, you have any doubts or questions. Feel free to comment below. I would love to hear from you. Here is the complete guide to photography: - Understand Exposure triangle in Photography - Understand Shutter Speed in Photography - Understand Aperture in Photography - Understand ISO in Photography - Understand White Balance in photography - Understand Histogram in Photography - Understand Depth of Field in Photography - Understand Rule of thirds in Photography - Understand Camera modes in Photography - Understand Metering Modes in Photography - Understand Focus in Photography Let your photography skills help you make money more money. Its for you: For those who want to win big and spread their work to the world:
<urn:uuid:65e1017d-83eb-4509-80d6-167922c6c1b4>
CC-MAIN-2021-43
https://phodus.com/composition-in-photography/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585381.88/warc/CC-MAIN-20211021040342-20211021070342-00550.warc.gz
en
0.937979
2,666
3
3
This post contains affiliate links. Please read my disclosure page. Surprisingly, not all bottled water is safe. I spend money to drink better water than tap water, but it turns out most bottled water is not what I thought it was after all. It is pretty disappointing. When my brother-in-law informed me that most bottled water contains microplastics, I had to find out if the water my family and I drink is safe. In this post, I share with you how bottled water can be toxic and what you should watch out for when buying bottled water.
How Bottled Water Can Be Toxic
Bottled water can be toxic in several ways. First, bottled water has been found to have microplastics in the water, and we end up ingesting these microplastics. Second, plastic bottles can leach harmful chemicals into the water. Third, some bottled water does not have healthy minerals and its pH level is too acidic. Let's take a look at what they all mean.
Microplastics In Bottled Water
In a recent global study by the journalism organization Orb Media, microplastics were found in 93% of bottled water tested. Microplastics are small plastic particles that exist in our environment. They are less than 5 mm in diameter. There are two kinds of microplastics. Primary microplastics are manufactured to be used in a product, such as scrubbers in exfoliating facial scrubs. Secondary microplastics are derived from larger plastic debris and are found in our marine or land environment. In the case of bottled water, it is secondary microplastics that are found in the water. (Microplastics) In the study, 259 bottles of water were purchased and tested in nine countries. On average, they found 10.4 particles per liter that were 100 microns (0.10 mm) or bigger. Smaller particles were also discovered, and there were 314 of those particles per liter on average. Some bottles had thousands of particles while some had no particles found at all. Only 17 of the 259 bottles tested had no plastic found in the water. 93% of tested bottles contained plastic particles. That means most of the bottled water on the market has microplastics present in the water. Popular bottled water companies that have microplastics in their water include Aquafina, Dasani, Nestle Pure Life and San Pellegrino. So where do these plastic particles come from? Plastics found in bottled water included polypropylene, polyethylene terephthalate (PET) and nylon. 54% of the plastic found in bottled water was polypropylene. Polypropylene is usually used in bottle caps. Since polypropylene is used in bottle caps and was found in the bottled water the most, one theory is that opening a bottle may contribute to the particles found inside. There haven't been any studies yet on how microplastics affect human health. The World Health Organization is launching a review into the potential risks of plastic in drinking water. I assume ingesting plastic particles every day for a long period of time wouldn't be good for us. I certainly do not want to ingest plastic particles every day. (Microplastics found in 93% of bottled water tested in global study) (Plastic Particles Found In Bottled Water)
Plastic Leaching Chemicals
Most disposable water bottles are made from a plastic called polyethylene terephthalate (PET). Polyethylene terephthalate (PET) is used for beverages, microwave food trays and food packaging films. While this plastic is fully recyclable, it leaches endocrine-disrupting chemicals, according to research. One study warns that endocrine disruptors such as phthalates and antimony may be leached into water from PET containers.
Another study, done by the University of Florida, concluded that most of the water bottles they tested leached antimony and Bisphenol A (BPA). In the study, the temperature at which bottled water was stored influenced the degree of leaching. Prolonged storage and elevated temperatures especially contributed to leaching of these chemicals. Therefore, if a seller stores the bottled water in a warm place for a long period of time, or if you leave bottled water in a hot car, chemicals from the plastic will likely leach into the water faster and in greater amounts. (Polyethylene Terephthalate May Yield Endocrine Disruptors) (Effects of storage temperature and duration on release of antimony and bisphenol A from polyethylene terephthalate drinking water bottles of China.) So what are antimony, phthalates and Bisphenol A (BPA)? Antimony is a toxic material used in making the bottle material, PET. Research has found that the longer water sat in the plastic bottle, the more antimony leached. Although the levels found were below the Environmental Protection Agency (EPA) limit, the chemical leaching into the water is toxic, so more research should be done on this. Antimony can cause diarrhea, nausea, dizziness, depression and vomiting. (Antimony leaching from polyethylene terephthalate (PET) plastic used for bottled drinking water.) Phthalates are endocrine disruptors. They are very harmful to us, especially to our children. They are linked to infertility, birth defects, breast cancer, autism spectrum disorders, asthma, low IQ, type II diabetes, neurodevelopmental issues and behaviour issues. (Phthalates are everywhere, and the health risks are worrying. How bad are they really?) Phthalates detected in some bottled water may either be leached from the plastic packaging materials or introduced during the bottling process. Although phthalate levels didn't exceed the Environmental Protection Agency (EPA) concentration limit in the research, phthalates are toxic chemicals and I don't want my daughter to ingest them through water. (Phthalates residues in plastic bottled waters.) (Concentrations of phthalates in bottled water under common storage conditions) Bisphenol A (BPA) is a chemical used to make certain plastics and resins. It is an endocrine disruptor and is linked to infertility, heart disease, breast and prostate cancer, harm to fetal brain development, asthma, type 2 diabetes and reproductive disorders. It is especially damaging to fetuses, infants and children. BPA is found in polycarbonate plastics such as office water coolers or 5-gallon bottles. Usually bottled water uses polyethylene terephthalate (PET), which doesn't contain BPA. However, many BPA-free bottles replaced BPA with fluorene-9-bisphenol, or BHPF. So does BHPF leach from a bottle into the water, and is it safe? A study done by Peking University in Beijing found BHPF in water from 23 of 52 water bottles and baby bottles tested. They also found that BHPF binds to the body's oestrogen receptors and blocks their normal activity. Frederick S. vom Saal, an endocrinologist studying hormone disruptors at the University of Missouri-Columbia, says there's no level at which BHPF or BPA is safe. (BPA-free water bottles may contain another harmful chemical). (Your BPA-free water bottle may contain another harmful chemical)
Acidic pH Levels In Bottled Water
Another problem with bottled water is acidic pH levels. Some bottled water has a pH (potential of hydrogen) level that is too acidic. Water can be neutral, acidic or alkaline. Pure water, or neutral water, has a pH of 7.
Numbers lower than 7 are acidic, and numbers above 7 are alkaline. The ideal pH level of drinking water is between 6.5 and 8.5. Healthy streams and lakes fall within that range, and so, generally, does tap water. (Drinking Water PH Levels) So are acidic water and alkaline water bad? How do they affect our health? Let's look at acidic water first.

When you think of bottled water, you may picture glaciers, lakes or springs. However, many brands come from a municipal supply. Depending on the brand, bottled water can be any type of drinking water, including tap water, filtered water and spring water. Some brands add fluoride or minerals; others sell water that is highly filtered or purified. Popular bottled water companies such as Nestlé Pure Life, Aquafina and Dasani use water from public water sources. The water may or may not go through a filtering or purifying process to remove chemicals or contaminants. Usually, when water is sourced from a municipal supply, manufacturers filter or purify it before bottling. Types of purification include reverse osmosis, distillation, carbon filtration and deionization. However, filtration and purification also strip healthy minerals from the water, making it more acidic. Reverse osmosis, for example, is the most popular filtration method because it is the least expensive way to purify water. It removes bacteria, contaminants, heavy metals, chemicals, poisons, viruses and parasites, but it also eliminates the good minerals your body needs. According to Dr. Lawrence Wilson of L.D. Wilson Consultants, Inc., reverse osmosis produces extremely mineral-deficient water, and drinking mineral-free water such as reverse osmosis or distilled water for more than a few months will leach vital minerals out of the body. Water without healthy minerals is similar to acid rain. (Reverse Osmosis Water – A Poor Product) (Differences Between Drinking Water And Distilled Water)

Drinking acidic water can also harm our teeth, because acidic liquids have a corrosive effect on them. Enamel starts to erode at a pH of 5.5, so drinking water with a pH lower than 5.5 will begin to erode your teeth. A 2015 study by the American Dental Hygienists' Association concluded that most of the bottled water brands tested were acidic: of the 12 brands tested, 10 had a pH of less than 7 and only two were alkaline. The study recommends that dental professionals educate their patients about the tooth decay and dental erosion caused by drinking acidic water. (Is Your Drinking Water Acidic? A Comparison of the Varied pH of Popular Bottled Waters.)

How about alkaline water? Neutral water has a pH of 7, and water with a pH between 8 and 14 is alkaline. Typical alkaline water has a pH of 8 or 9, and some alkaline water companies promote pH levels between 8.5 and 10. Alkaline water is often marketed as healthy water, but it may not be as beneficial as some people claim, and there has been debate about whether it helps our health at all. Some people believe it can neutralize the acid in the body, making it healthier than other water, and that it can prevent chronic diseases like cancer, support the immune system, cleanse the colon or slow the aging process.
However, some health professionals do not agree with choosing alkaline water over neutral water. The well-known author and nutritionist Dr. Bob Arnot said in a recent Men's Health Journal article that our body is designed to adjust to its optimal pH balance no matter what we ingest. For instance, once alkaline water enters the stomach, the body simply pours in greater amounts of acid to neutralize it. Some health professionals also say there is not enough scientific evidence to show that alkaline water is better for our health. A few studies have suggested that alkaline water helps certain health conditions, but more research is needed to back up that claim. (PH Paranoia: Understanding Alkaline Water Claims)

On top of that, not all alkaline water is the same. It can be divided into natural alkaline water and artificial alkaline water. Natural alkaline water comes from springs and contains natural minerals. Artificial alkaline water, on the other hand, is usually made from tap water that undergoes a process called ionization, or electrolysis, which raises the water to a certain pH and makes it artificially alkaline. This water has none of the minerals that natural alkaline water has. A study by the World Health Organization says there are health risks from drinking demineralized water: it lacks minerals essential to our health and carries a possibility of toxic metal or bacterial contamination. (The Dangers Of Alkaline Waters) (Alkaline Water: Benefits and Risks)

Also, alkaline water can be a burden to people with kidney problems. The kidneys keep the body's pH within a normal range; for instance, when the body becomes more alkaline, the kidneys excrete more bicarbonate ions into the urine to keep the pH balanced. When you drink large amounts of alkaline water on a daily basis, the kidneys can fail to excrete all of the bicarbonate ions, which puts extra strain on them. Therefore, alkaline water is not good for people with kidney disease, since it can damage the kidneys further. (Alkaline Water: Why You Should Not Drink It)

As mentioned above, acidic water can be damaging to us, and alkaline water may not be as healthy as it is marketed to be. The ideal pH level of drinking water is between 6.5 and 8.5, not far from a neutral 7. Water in that range is neither too acidic nor too alkaline to cause negative health effects. In addition to the possible negative effects of highly acidic or alkaline water mentioned above, pH extremes such as a pH above 11 or below 4 can cause skin, eye and mucous membrane irritation, according to the World Health Organization. (Health Effects of pH on Drinking Water)

Not all bottled water is safe, and much of it actually poses health risks. Most of the bottled water tested in the study contained microplastics, which means we can ingest tiny plastic particles when we drink it. Also, most bottled water is packaged in plastic bottles, usually made of PET. Because the bottle material is plastic, it can leach chemicals such as antimony, phthalates or fluorene-9-bisphenol (BHPF) into the water, especially if the bottled water is exposed to warm or hot temperatures or stored for a long period of time. Much bottled water is also either too acidic or too alkaline. Drinking acidic water can erode and damage our teeth.
Acidic water is also mineral deficient and can pull vital minerals out of the body. Alkaline water, on the other hand, is often promoted as healthy, but some health professionals believe more evidence is needed to support the claim that it is better for us. Artificial alkaline water does not contain natural minerals the way natural alkaline water does; it simply has a high pH that makes the water artificially alkaline without the healthy minerals. Alkaline water is also not ideal for people with kidney disease, since it can burden kidney function. That is a lot to consider before choosing a safe bottled water! Hopefully, this post helped you understand the potential toxicity of bottled water. In my next post, I will write about safe bottled water brands you can choose from. PLEASE SHARE THIS POST WITH YOUR FRIENDS OR LEAVE ME A COMMENT! 🙂 If you want to find out which bottled water brands are safe, please read my post 'Safe Bottled Water Guide: Which Bottled Water Brands Are Safe?'
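As a side note for readers who like to check the numbers themselves, the pH thresholds discussed in this post lend themselves to a quick sanity check. The short Python sketch below is not part of the original article; it simply classifies a measured pH against the 6.5–8.5 ideal drinking range and the 5.5 enamel-erosion point mentioned above, and the function name and sample readings are illustrative assumptions rather than measurements of any real brand.

```python
# Classify a water sample's pH against the guidance discussed above:
# below 5.5 can erode tooth enamel; 6.5-8.5 is the ideal drinking range.
def classify_ph(ph: float) -> str:
    if not 0 <= ph <= 14:
        raise ValueError("pH must be between 0 and 14")
    if ph < 5.5:
        return "acidic - below the 5.5 enamel-erosion threshold"
    if ph < 6.5:
        return "mildly acidic - below the ideal drinking range"
    if ph <= 8.5:
        return "within the ideal 6.5-8.5 drinking range"
    return "alkaline - above the ideal drinking range"

# Hypothetical readings for illustration only (not real brand measurements).
for label, ph in [("sample A", 5.1), ("sample B", 7.2), ("sample C", 9.4)]:
    print(f"{label}: pH {ph} -> {classify_ph(ph)}")
```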
<urn:uuid:0cc1cd7d-a1d5-4e8d-8b9b-152358f2ff1f>
CC-MAIN-2021-43
https://gonewmommy.com/2018/04/04/how-bottled-water-can-be-toxic/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00390.warc.gz
en
0.950226
3,262
2.53125
3
Unless you are intentionally brewing a dark beer, there are several factors that can darken the color. The type of roast and preparation temperature influence the Maillard Reaction, which determines much of the color of your beer. Malts roasted at higher temperatures result in darker, maltier beers. Added moisture, as in wet roasting, further contributes to a darker result. When using an extract, liquid malt extract generally results in darker beer than dry malt extract. To brew a lighter beer, choose a lightly roasted malt. Make sure that fermentation is complete, then allow enough time for the yeast to settle. You may choose to implement cold-crashing or gelatin filtering to expedite the process. When using an extract, choose dry malt extract for a lighter resulting color. Keep a controlled, steady temperature throughout your brewing process. The color of your beer does not necessarily indicate off-flavors in the result. In some cases, oxidation or incomplete fermentation contributes to both off-color and off-flavor. However, your beer should retain the desired flavor profile if brewed properly, based on the malt used.

Why is my homebrew always so dark?

There are several reasons that can cause a dark homebrew, including the malt type, roast, and boiling temperature. Before delving into the details, let's talk about dark beer in general. Not all beers are light in color. Dark beers are often brewed intentionally for their full, toasty flavors. For the history buffs: the northern Chinese province of Henan is considered to be one of the earliest sources of dark beers. Archeologists discovered 9,000-year-old clay pots with remnants of fermented rice, honey, and fruits. Halfway across the world, dark beers were brewed in several European medieval monasteries. For instance, the Weltenburg Monastery in Kelheim, Bavaria, has produced dark beer continuously since 1050. Back to the present: malt is the primary component that determines the color of your beer. Simply put, the darker the malt, the darker the resulting beer. The color of malt is contingent on the temperature and duration of kilning, like bread in a toaster. The palette of dark beer varies from deep yellow-red to black-brown. Specific color tints can be measured by the Lovibond scale (L), a system developed in the late 1800s by Joseph Lovibond. Dark colors are commonly the result of equally dark malts, which are dried at higher temperatures. The temperature at which the soaked grain is dried has a direct impact on the dark or roasted aroma of your beer. Temperatures over 100 degrees Celsius create a roasting effect, resulting in a darker malt. The higher the temperature, the more intense the resulting (dark) beer.

Why is extract beer too dark?

Liquid malt extract (LME) is known to result in darker beer because it is concentrated. The wort is concentrated using an evaporator and a vacuum to remove moisture, causing the resulting wort to darken. On the flip side, the brewer saves time by skipping the mashing and sparging by which the extract is created. Homebrews created from extract often turn out darker than you may expect. Boiling the extract activates the Maillard Reaction, causing something like caramelization. This reaction plays a prominent role in darkening your wort. The Maillard Reaction? Let's reference our handy Oxford Companion to Beer: "[The] Maillard Reaction is a type of non-enzymic browning that adds color and flavor to many types of processed food, including beer.
The reaction is named after the French chemist Louis-Camille Maillard (1878–1936)." (Oxford Companion to Beer)

More on the Maillard Reaction

The Maillard Reaction is more likely to occur at higher temperatures, low moisture levels, and alkaline conditions, such as during kilning. Maltsters manipulate different aspects of kilning to achieve combinations of color and flavor in malt, used later by brewers. Given that moisture enhances the Maillard Reaction, wet roasting results in darker, maltier brews. Dry roasting tends to yield drier, toastier flavors with less sweetness. The Maillard Reaction can also occur in the kettle, during mash or wort boiling. The resulting melanoidins, brown nitrogenous polymers, create darker tones in your beer.

How can all-grain beer get too dark?

The method by which malt is toasted results in an array of flavors, from lightly roasted to black malt. These malts impart their color to your finished brew. Some brewers add additional ingredients, such as cherries or dark rock candy. These ingredients can also contribute to the color of your resulting beer. Brown beers generally have an EBC of 80, characterized by a copper or dark-brown color. Additives can include caramel, chocolate, raisins or currants. Newcastle Brown Ale is a good example of a relatively simple, yet flavorful brown beer. Here, the grain bill consists of light malts mixed with dark malts that add dark color and aroma. Black beers have an EBC of over 80, characterized by a brown-black color. Beers turn opaque black at around 120 EBC, such as an Imperial Stout. The flavor profile contains distinct notes of dark chocolate, coffee, or blackened bread. If you did not intend to brew a dark beer, it may be an issue with oxidation or equipment. Excess oxidation can turn your beer dark and add a stale taste. You can tell by the taste if this issue is the reason for your dark beer. Oxidation can occur during transfer to the secondary fermentation vessel or during packaging. Certain beer types are more susceptible to over-oxidation, such as the New England IPA, due to their higher hop and protein content. The issue may also be as simple as old equipment, specifically the brewing kettle. For instance, a kettle with "hot spots" or a burnt base can scorch your wort, causing it to darken.

Can you fix a batch of dark homebrew?

Your beer may appear darker because of suspended yeast or other particles. Different strains of yeast flocculate at different rates. For instance, London Ale III remains suspended for longer, even after fermentation is complete. Although suspended yeast often causes a "yeasty" flavor (see my post on "yeasty" off-flavors), it may also contribute to a darker color in your beer. In this case, there are a few ways to help settle the sediment and lighten your beer. You can choose to mature the beer for longer. Other methods, such as "cold crashing" or gelatin filtering, can expedite the precipitation process. But how do you keep it from happening next time?

How to stop extract beer from getting too dark

When using LME, pour the liquid into hot water while stirring, ensuring that it dissolves evenly. LME is thick, like molasses, so it does not dissolve well in cold or lukewarm water. If small clumps remain, they can stick to the bottom of your pot and caramelize or burn, thereby darkening your brew. In most cases, the Maillard Reaction is responsible for the dark color of your wort, especially when using malt extract. To limit the reaction, try adding your extract in the last ten minutes of boiling.
This way, you limit the time that your wort darkens. To avoid scorching, turn off the heat when stirring in the liquid extract. Continue stirring until there are no clumps or syrupy strands. Dry extract generally results in lighter beer than liquid extract. For this reason, some brewers choose to do a partial mash with DME before adding LME. Boiling with LME can cause the beer to darken, while DME does not. Be sure to use fresh extract, especially when liquid. Older LME, when stored improperly, is exposed to a longer period of warmth, which stimulates darkening. Be sure to check the expiration date on your packet before using it. Anecdotally, boiling at maximum volume (adding as little top-off water as possible) helps to keep the resulting beer light in color. You might have to use two kettles to facilitate a full boil, but fellow brewers say that the extra work is worth the result.

How to stop all-grain or partial mash beer from getting too dark

The easiest way to avoid a dark beer is to choose lightly roasted malts, intended for pilsner or lager. If your beer is over-oxidized, there is no way to reverse the effect. Not only will your beer be dark, it will have a stale or cardboard-like flavor. Aging the beer further will only make it worse. Check that the base of your kettle is clean, without noticeable scorching. When mashing, be sure to use a functional thermometer to keep an even, proper temperature. Too high a temperature can darken your wort. In the same vein, boiling your wort too vigorously can cause darkening. Boiling at higher temperatures stimulates the Maillard Reaction, so try to keep a controlled, gentle boil.

Concluding thoughts on dark beer

A proper dark beer is not so easy to create. It must have a dark color and a roasted, refined aroma, without scratchy tones. If you intend to create a dark beer, you need to carefully process your raw materials, paying attention to the temperature at each step. Nowadays, it is easy to find roasted malts for any palate. For dark beer, some malts add fruity or sweet nuances, while others add coffee tones. Remember, the color of your beer does not necessarily dictate the flavor. A light beer may look appealing to some, but if done incorrectly, may still end up with off-flavors. Conversely, dark beers are not necessarily heavy or "bready." Some types have a pleasant, full and fruity profile. With this in mind, choose the right malt or malt extract for the desired style of beer, rather than the color.
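The article above refers to the Lovibond and EBC color scales. For readers who want a rough number before brew day, homebrewers commonly estimate recipe color with the malt color units (MCU) and Morey approximations. The sketch below is not from the article; the formula constants are widely used homebrewing rules of thumb, and the sample grain bill is a purely hypothetical example.

```python
# Estimate beer color from a grain bill using common homebrewing approximations:
#   MCU = sum(grain color in degrees Lovibond * weight in lb) / volume in US gallons
#   SRM = 1.4922 * MCU ** 0.6859        (Morey equation)
#   EBC is roughly 1.97 * SRM
def estimate_color(grain_bill, volume_gal):
    """grain_bill is a list of (weight_lb, color_lovibond) tuples."""
    mcu = sum(weight * lovibond for weight, lovibond in grain_bill) / volume_gal
    srm = 1.4922 * mcu ** 0.6859
    return srm, srm * 1.97

# Hypothetical 5-gallon batch: pale extract plus a little crystal malt (assumed values).
bill = [(6.0, 4.0),    # 6 lb of light extract at about 4 degrees Lovibond
        (0.5, 60.0)]   # 0.5 lb of crystal 60 steeping grain
srm, ebc = estimate_color(bill, 5.0)
print(f"Estimated color: {srm:.1f} SRM (about {ebc:.0f} EBC)")
```

Swapping the crystal malt for a darker roast, or concentrating the boil, pushes the estimate up quickly, which is consistent with the advice above to boil at maximum volume and to choose lightly roasted malts for a pale beer.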
<urn:uuid:0f3dc980-48d5-4318-9d32-46a7e003e32b>
CC-MAIN-2021-43
https://learningtohomebrew.com/homebrew-beer-too-dark-extract-all-grain/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00471.warc.gz
en
0.915682
2,109
2.765625
3
Written in English. Read online.
Statement: illustrated by Martha Hart.
LC Classifications: D27 .A45
The Physical Object: 160 pages
LC Control Number: 67086819

Download Sailors in battle

Sailors in Battle [Kenneth Allen]. This is done quite well, but some might find that it gives the book a slow start. Once battle begins, the book is hard to put down. It's action all the way. The bravery and sacrifice of the US sailors shines through.

In each account of a battle or major event in the history of the Navy, Cutler generally tells it from the viewpoint of a particular sailor. As the historical facts unfold and a battle looms, you can feel the sweat and fear and courage of that sailor. It's almost as if you're in his place, facing possible death.

The nonfiction book The Last Stand of the Tin Can Sailors: The Extraordinary World War II Story of the U.S. Navy's Finest Hour is the first full narrative account of the Battle off Samar, which the book's author, James D. Hornfischer, calls the greatest upset in the history of naval warfare.

Today's sailors have too little appreciation of their heritage. To counter this problem, Thomas J. Cutler has compiled a history of our naval heritage in the form of A Sailor's History of the U.S. Navy. The work is unique in two important ways. First, it is written thematically rather than chronologically.

The largest battle, in terms of numbers engaged as well as acreage involved, was the battle of Sailor's Creek, fought on April 6, 1865. Sailor's Creek was an unmitigated disaster for Robert E. Lee's army, costing him thousands of casualties he could ill afford, and prompting him to exclaim, "My God!"

There are no fiery battle scenarios in the book, but her presentation of the lives and experiences of the crew members brings the action to life. Unlike her sister ship USS Washington, the Showboat didn't engage and sink an IJN battleship (Kirishima) or take part in epic ship-versus-ship sea battles, but she was an important part of the USN.

The Battle of the Atlantic was the longest naval campaign of the war, running from 1939 to 1945; in his ninth book, the author looks back on the lives of sailors during that long campaign.

The Enemy Below is a DeLuxe Color war film in CinemaScope which tells the story of the duel between an American destroyer escort and a German U-boat during World War II. The movie stars Robert Mitchum and Curt Jürgens as the American and German commanding officers, respectively, and was directed and produced by Dick Powell. The film was based on a novel by Denys Rayner, a British naval officer.

The Kronstadt rebellion or Kronstadt mutiny (Russian: Кронштадтское восстание, tr. Kronshtadtskoye vosstaniye) was an insurrection of Soviet sailors, soldiers and civilians of the port city of Kronstadt against the Bolshevik government. It was the last major revolt against the Bolshevik regime on Russian territory during the Russian Civil War.

Sailors in the Holy Land. Following the success of his first book, about a U.S. Navy flight crew's desperate battle to survive a ditching in the icy north Pacific, Andrew Jampoler has turned to an equally exciting Navy adventure set in the desert of Ottoman Syria more than one hundred fifty years ago.

The Battle of Sailor's Creek was fought on April 6, 1865, near Farmville, Virginia, as part of the Appomattox Campaign, near the end of the American Civil War. It was the last major engagement between the Confederate Army of Northern Virginia, commanded by General Robert E. Lee, and the Army of the Potomac, under the overall direction of Union General-in-Chief Lieutenant General Ulysses S. Grant.

Robert A. Maher was a young sailor who served on the Navy destroyer USS Borie (DD-215), and his personal account of the war culminates in that ship's decisive battle. As leading fire-controlman and gun director pointer, Maher was stationed immediately above the bridge, where he had a clear view of events throughout the battle.

Tin Can Sailors is the name of The National Association of Destroyer Veterans in the United States. "Tin can sailor" is a term used to refer to sailors on destroyers. The Last Stand of the Tin Can Sailors is a book by James D. Hornfischer about the Battle off Samar, fought on October 25, 1944, in which destroyers saw off a much larger Japanese force.

The Battle of the Atlantic pitted the German submarine force and surface units against the U.S. Navy, U.S. Coast Guard, Royal Navy, Royal Canadian Navy, and Allied merchant convoys. The convoys were essential to the British and Soviet war efforts (read more about the Arctic convoys to the USSR).

The history of the United States Navy divides into two major periods: the "Old Navy", a small but respected force of sailing ships that was notable for innovation in the use of ironclads during the American Civil War, and the "New Navy", the result of a modernization effort that began in the 1880s and eventually made it the largest in the world. The United States Navy claims October 13, 1775 as the date of its official establishment.

Sailor on the Beach in the Battle of Iwo Jima, by Marvin D. Veronee, Lt. USNR (Ret.), paperback. A memoir of naval service in World War II. The highlight of the book follows the First Battalion, 28th Marines ashore, from D-Day through the 5th Marine Division's 36 days at Iwo Jima.

America's Sailors in the Great War: Seas, Skies and Submarines, by Lisle A. Rose, University of Missouri Press, Columbia. Given the approaching centennial of America's entry into World War I, it seems fitting that historians should reexamine the nation's participation in that epochal conflict.

The service records of these men, North and South, are contained in the Civil War Soldiers and Sailors System. Please note that the Civil War Soldiers and Sailors System contains just an index of the men who served in the Civil War, with only rudimentary information from the service records (including name, rank and unit in which they served).

With his new book, The Last Stand of the Tin Can Sailors, literary agent and author James D. Hornfischer has documented one such lesser-remembered World War Two tale with a reverence befitting the brave men who fought and died for America. The events of the book take place during the Battle of Leyte Gulf, which stands as one of the largest naval battles in history.

The go-to source for comic book and superhero movie fans. 'Battleship' Featurette: Real Sailors Recruited to Battle Aliens. Check out a behind-the-scenes clip for 'Battleship', which reveals how actual Navy personnel were employed in the alien invasion blockbuster.

About half of the sailors and marines held at Andersonville were captured at this mostly-forgotten battle. Among them was Frederic Augustus James, the only sailor known to have kept a diary while at Andersonville. The battle is marked by a single plaque at Fort Sumter that refers to it as "The Night Attack."

The black sailors' story fits awkwardly, if at all, within that image. The study of African Americans in the Civil War navy must begin with determining their numbers. During the first decade of the twentieth century, the secretary of the navy was quizzed about the service of black men in the Civil War. This book is the first to profile these heroic men and their actions, which helped turn the tide of war. 16 photos. 2 maps.

A Call To Colors by John J. Gobbell (paperback). The Battle of Leyte Gulf took place on October 24 and 25, 1944.

Most sailors' tattoos were line drawings done in black or blue ink by amateur sailor-tattoo artists (Dye). Of all reported tattoos on American sailors in the period studied, 38% depicted initials, names, and dates; 21% depicted things of the sea; 9% patriotic symbols; 9% symbols of love; 8% religious symbols; and 4% people and animals.

Books shelved as naval-warfare: The Hunt for Red October by Tom Clancy, Castles of Steel by Robert K. Massie, and The Price of Admiralty: The Evolution of Naval Warfare by John Keegan.

"In The Last Stand of the Tin Can Sailors, James Hornfischer drops you right into the middle of this raging battle, with 5-inch guns blazing, torpedoes detonating and Navy fliers dive-bombing. The overall story of the battle is one of American guts, glory and heroic sacrifice."—Omaha World Herald

This book came to us from a Goodreads giveaway, and it was quite good. This history book gave a somewhat complete account of the D-Day invasion. There were accounts of military actions on all 5 beaches: Omaha, Gold, Sword, Utah, and Juno. Truly, the spirit of battle is told from the commander's point of view all the way down to the civilian.

Search For Sailors. An important part of the Civil War took place on the high seas or on the rivers which flowed through the South. Civil War sailors' lives could be spent in the tedium of blockade duty off the Gulf or Atlantic Coasts, or in the din of battle inside an ironclad.

Red-Box English Sailors in Battle, XVI-XVII Century, plastic model military figures, 1/72 scale. Red-Box 1/72 Italian Sailors, XVI-XVII Century, Set #2 (40 figures). Trumpeter USS San Francisco CA-38 Heavy Cruiser plastic model military ship.

The multiphase Naval Battle of Guadalcanal consisted of a series of destructive air and sea engagements closely related to a Japanese effort to reinforce land forces on the island. In early November, the Japanese organized another Guadalcanal convoy, embarking 7,000 troops and their equipment in another attempt to retake Henderson Field.

The Battle of Britain produced many airmen of great skill and accomplishment; high achievers who made their mark in one of history's most memorable and demanding campaigns. But only a few of these men distinguished themselves in such a way as to become legends in their own lifetimes. Among the greatest of these was Sailor Malan.

Battle Fleet by Paul Dowswell is a great historical fiction book set in the UK. The main characters are Sam and his friend Richard. The book follows Sam as he works on multiple boats; his first job is as a regular sailor on a merchant ship, and after he leaves the merchant ship he joins a navy ship.

Honorable Mention, Lyman Awards, presented by the North American Society for Oceanic History. This book is a thrillingly written story of naval planes, boats, and submarines during World War I. When the U.S. entered World War I in April 1917, America's sailors were immediately forced to engage in the utterly new realm of anti-submarine warfare waged on, below and above the sea.

Get this from a library. At War, At Sea: Sailors and Naval Combat in the Twentieth Century [Ronald H. Spector]. "From the Battle of Tsushima in 1905 to the Gulf War in 1991, naval historian Ronald Spector explores every facet of twentieth-century naval warfare. Drawing from more than one hundred diaries."

Ships named for Medal of Honor recipients at Pearl Harbor:
- USS Bennion (DD): named for CAPT Mervyn S. Bennion, USN, of USS West Virginia; Medal of Honor (posthumous)
- USS Cassin Young (DD): named for CAPT Cassin Young, USN, of USS Vestal; Medal of Honor
- USS Flaherty (DE): named for Ensign Francis C. Flaherty, USNR, of USS Oklahoma; Medal of Honor (posthumous)
- USS Frederick C. Davis (DE): named for Ensign Frederick C. Davis

With an introduction read by Max Hastings. A companion volume to his best-selling Armageddon, Max Hastings' account of the battle for Japan is a masterful military history. Featuring the most remarkable cast of commanders the world has ever seen, the dramatic battle for Japan of 1944–45 was acted out across the vast stage of Asia: Imphal and Kohima, Leyte Gulf and Iwo Jima, Okinawa.

Saber has a pretty good amount of battle experience under her belt, both as a Servant and as King Arthur leading armies. But her skills with a sword can't match Sailor Mars's magical skills with fire. Sailor Mars has long-range combat abilities, both with fire and with her bow and arrow.

Mr. Hornfischer talked about his book, The Last Stand of the Tin Can Sailors: The Extraordinary World War II Story of the U.S. Navy's Finest Hour, published by Bantam.

"… book." —Vice Adm. Ron Eytchison, USN (Ret.), Chattanooga Times Free Press

"A brilliant, fast-moving book worthy of the sailors who fought … the first major work to concentrate solely on the Battle off Samar does admirably for the sailors."

Pure and simple, hands down a genuine story about fighting a naval battle outgunned and outnumbered. The most important factor of the Samar naval battle is what these extraordinary fighting sailors accomplished in the larger picture. Hope you enjoy the reading as much as I did.

Sailor Moon fans are well aware that Queen Serenity and Princess Serenity are powerful beings, so why do they need the Sailor Senshi to protect them? Each member is a princess of her respective planet, so why do they need to put their lives on the line for the royal family of Earth's moon?
<urn:uuid:cebabe38-cf56-44a5-9ae3-b751e04bd131>
CC-MAIN-2021-43
https://baziwukikyjupu.ekodeniz.com/sailors-in-battle-book-28594wh.php
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00429.warc.gz
en
0.947099
3,144
2.71875
3
Education and Early Career

Jim Egan was the eldest son of James Egan and Nellie (Josephine) Engle; Jim's only sibling, Charles, was born in 1922. Jim attended Holy Name School and later the Eastern High School of Commerce, in Toronto. In high school, he took a mixed science course of biology, chemistry and physics, as well as English literature and composition. He proved to be an indifferent student, though, and abandoned formal education in the second year of high school. Always attracted to the natural world, Egan spent the years 1937–39 working on farms near Bailieboro, Ontario. He tried to enlist after the declaration of war in September 1939, but was rejected due to a corneal scar (see Second World War). Although Jim did not graduate from high school, he was a self-taught person who pored over books and magazines on biology and the natural world. As able-bodied men signed up to go to war, many employment opportunities became available for others. Through his interest in biology and his farming experience, Egan was able to secure employment as a departmental technician at the Department of Zoology, University of Toronto (see Zoology). Egan later worked in the insulin production department of Connaught Laboratories (now Sanofi Pasteur Limited) and in the tissue culture lab. He helped with typhus, polio and cancer research under the direction of Dr. Raymond Parker, a key figure in the search for a polio vaccine. In 1943, Egan successfully enlisted in the merchant navy as an ordinary seaman and served until 1947. He worked on several ships, travelling in the Mediterranean and the South Seas. During and after the war he came into contact for the first time with a "gay world," including bars and clubs in cities such as London and Hamburg. Back in Toronto after the war, Egan met John (Jack) Norris Nesbit (1927–2000) in 1948; they soon entered a committed relationship and remained partners until Jim's death in 2000. Egan's laboratory experience led him to establish a series of biological supply businesses beginning in 1949 and continuing off and on into the 1970s.

Gay Activism and Community Work

Egan's serious reading interests extended to the gay icons of the day — André Gide, Walt Whitman, Oscar Wilde — and the works of pioneering sexuality researchers such as Havelock Ellis. The publication of Alfred Kinsey's Sexual Behavior in the Human Male (1948) was a great inspiration to him. Egan began to study homosexuality in greater depth and, beginning in 1949, his letters to local newspapers such as the Globe and Mail, Toronto Daily Star and Toronto Telegram protested misleading or sensational reports on homosexuality and he defended it as a natural orientation. This was a brave stance, 20 years before homosexuality was decriminalized in Canada, and Egan first used a pseudonym when signing letters or articles, then his initials, J.L.E. (Much later, he used his own name.) At the beginning of the 1950s, it was not unusual for newspapers, and particularly the tabloids, to use sensational headlines when covering the LGBTQ2 community. The headline "Queers Flushed from 'Love' Nest" degraded same-sex couples; "'Pansies' Bloom in Cocktail Bar" suggested that openly gay people were abnormal and should not be allowed to drink at respectable bars. Egan fought back against such slurs.
“I simply let [the press] know that there was at least one person out there who was not going to sit by and let them get away with what I considered to be gross inaccuracies and libels.” During 1949 and 1951, Egan expanded his letter-writing campaign to mainstream American mass-market magazines such as Coronet, Ladies’ Home Journal, Parents’ Magazine and Redbook, but they didn’t publish his letters. He also attempted to sell story ideas to them that depicted homosexuality in an honest and straightforward fashion, but was unsuccessful. Egan also began to correspond with like-minded individuals. Beginning in 1951, he had a brief correspondence with Henry Gerber (1892–1972), founder in Chicago in 1924 of the Society for Human Rights, the earliest gay rights organization in the United States, and editor of Friendship and Freedom, the earliest known American gay magazine. By 1953, Egan was corresponding with members of the Mattachine Society in Los Angeles, formed in 1950 as an American gay rights organization. “In October 1950, a letter of mine was published for the first time in True News Times. TNT published a ridiculous gay gossip or tidbit column written by someone who called himself Masque. My letter complained about some of the stupid, nasty remarks that appeared there. These gay columns were filled with innuendo, such as ‘What well-known bartender had been out with what well-known queen?’ I never approved of this kind of gay trivia.” — Jim Egan in his memoir, Challenging the Conspiracy of Silence (1998), published by the Canadian Lesbian and Gay Archives. Egan’s first real success in writing came in November–December 1951, when his seven-part series “Aspects of Homosexuality” was published in the Toronto tabloid True News Times (TNT). No longer just writing letters of complaint, Egan now had a platform to publish overviews of same-sex desire through history, legal and scientific aspects of homosexuality, and the need for more tolerance — all of which was heavily influenced by his reading. Egan intended to counter what he considered outright lies and distortions in the press, and to challenge what he called “the conspiracy of silence” surrounding the truth about homosexuality. This represented a key moment in the history of gay journalism in Canada: the first published long articles written from a gay point of view. Egan used his experience with TNT to expand to a larger publication. He persuaded Philip Daniels, publisher of the Toronto tabloid Justice Weekly, to publish “Homosexual Concepts,” signed J.L.E., a 12-part series of articles from December 1953 to February 1954. This was followed by an untitled series, in 15 parts, published between March and June 1954. In Justice Weekly, Egan expanded his scope, including insights into debates on the “cause” and “cure” of homosexuality, as well as reports on current events, such as the ongoing purge, led by Senator Joseph McCarthy, of hundreds of United States government employees because of their sexuality. When Egan stopped writing for Justice Weekly, he suggested that Daniels contact foreign gay publications to inquire about reprinting material in return for an exchange subscription. Daniels did so, resulting in the reprinting of more than 200 articles of interest to the gay community from publications such as ONE Magazine and Mattachine Review, beginning in 1954 and extending into the 1960s. These articles in Justice Weekly were an important source of information for gay Canadians before the foundation of a gay press in Canada in 1964. 
Between 1955 and 1963, Jim and Jack took up farming in rural Ontario, and for a time ran a pet and garden supply store in Beamsville, Ontario. During this period, Jim's gay activism declined, though Jim and Jack participated in the fifth Midwinter Institute of ONE, Inc., in Los Angeles, in 1959, where they met many leading activists, including Dr. Blanche Baker, Dr. Evelyn Hooker, Jim Kepner and W. Dorr Legg. Inspired by these contacts, Jim published two articles in ONE Magazine during 1959, and revived his letter-writing campaign to newspapers and magazines. Back in Toronto in 1963, Jim served as a resource person and tour guide to journalist Sidney Katz, whose two-part series "The Homosexual Next Door: A Sober Appraisal of a New Social Phenomenon" was published in Maclean's magazine in February and March 1964. These are considered to be the first full-scale articles published in a mainstream Canadian magazine to take a generally positive view of homosexuality. The Homosexual Next Door was reprinted in booklet form by the Mattachine Society in 1965. In 1964, Egan and Nesbit's relationship came under strain due to Jim's increasingly public activism. Jack was a private person and did not want to draw attention to his sexual orientation. Jim's involvement with "The Homosexual Next Door" was apparently too high profile, in Jack's estimation. After a brief split, Jack persuaded Jim to move to British Columbia to start a new life together. They started a marine biological specimen company that operated until 1972. Over the years, they moved around Vancouver Island, eventually settling in Courtenay. Although they were not involved in gay activism during their early days in British Columbia, Jack's level of comfort with gay activism gradually strengthened and in 1985 they co-founded the Comox Valley branch of the Island Gay Society. Egan was also a supporter of AIDS awareness, and served as the president of the North Island AIDS Coalition in 1994. Egan's interest in the natural world propelled him to become involved with environmental activism in British Columbia, particularly with groups such as the Society for the Prevention of Environmental Collapse and the Save Our Straits Committee. His desire to make a difference through public service ultimately led to politics. In 1981, Egan was elected a regional director for Electoral Area B of the Regional District of Comox-Strathcona. He was one of the first openly gay politicians to serve in Canada; Egan was re-elected twice, and served from 1981 to 1993, when he decided not to stand for re-election.

Egan v. Canada

In 1987, Egan and Nesbit applied on Nesbit's behalf for the spousal allowance benefit provided under the Old Age Security Act. Egan and Nesbit had been a couple for almost 40 years and met all criteria for the benefit. Health and Welfare Canada's denial of the claim based on sexual orientation (they were a same-sex couple) set the stage for a court challenge test case under the Canadian Charter of Rights and Freedoms. Their case, entered in 1988 in the Trial Division of the Federal Court of Canada, challenged the Old Age Security Act's definition of "spouse," claiming that the current definition discriminated against same-sex couples on the basis of gender and sexual orientation, contrary to section 15(1) of the Charter.
The Federal Court dismissed their claim in 1991, stating that their relationship was "not a spousal one." They appealed the ruling to the Federal Court of Appeal, which in 1993 upheld the ruling of the lower court (see Court System of Canada). They filed an application to appeal to the Supreme Court of Canada, which was accepted. In May 1995, the Supreme Court ruled on Egan v. Canada, and dismissed the appeal. At the same time, it ruled that "sexual orientation" is to be read into the Charter as a prohibited ground of discrimination, thus providing protection from discrimination based on sexual orientation (see LGBTQ2 Rights in Canada). The decision has been described as "losing the battle but winning the war." Because of the Egan decision, laws across the country that discriminated on the basis of sexual orientation in all areas of life (e.g., employment, government benefits, income tax, family law) became ripe for challenge.

Legacy and Significance

As an openly gay man, Jim Egan is recognized as a pioneering Canadian gay journalist who challenged discrimination and bigotry against LGBTQ2 people. His columns in True News Times and Justice Weekly exposed readers to a gay viewpoint on historical and current affairs. Egan's support of foreign gay publishers resulted in republication of selected works in Justice Weekly, thus greatly increasing access to gay content for Canadian readers. Egan and Nesbit's bravery in undertaking the challenge to the spousal allowance benefit under the Old Age Security Act, and the subsequent Egan v. Canada case, was their most consequential legacy. The case led to the ruling that "sexual orientation" be read into the Canadian Charter of Rights and Freedoms as a protected ground of discrimination — a monumental finding in support of LGBTQ2 rights in Canada. The landmark decision opened the door to other activists and set a precedent that paved the way to other milestone victories, including same-sex marriage.

Honours and Awards
- National Human Rights Award, Lambda Foundation for Excellence (1995)
- Honorary grand marshal (with Jack Nesbit) of the Toronto and Vancouver Pride celebrations (1995)
- Egan and Nesbit were the subjects of the documentary Jim Loves Jack, by David Adkin (1996)
- Paul Harris Fellowship, for his long efforts toward equal rights, Rotary International (1997)
- Inducted into the National Portrait Collection, Canadian Lesbian and Gay Archives, for his contribution as a champion of Canadian LGBTQ2 rights (1998)
<urn:uuid:f1e65651-9aab-4fc4-b079-3f26d1d9a7b5>
CC-MAIN-2021-43
https://www.thecanadianencyclopedia.ca:443/en/article/jim-egan
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00150.warc.gz
en
0.969889
2,717
2.640625
3
Charles Steinmetz, the wizard of Schenectady, was the most famous electrical engineer of his day. The story goes that when Henry Ford had exhausted the Ford Motor Company's resources trying to fix a large electrical generator, he called on Steinmetz for help.1 Steinmetz arrived in Detroit and called for a pencil, paper, chalk, and a cot. After two days, he drew a line on the generator's housing and asked the technicians to replace 16 wire windings. He submitted a bill for $10,000, a large sum in those days. Ford, surprised, requested an itemized invoice. Steinmetz replied:
- $1 to draw a chalk line
- $9,999 to know where to draw it

Ford paid the bill.

For every organization, people are the key asset. Their knowledge defines what the business knows and can accomplish. The knowledge of the staff is also constantly changing, and the knowledge needed for the organization is constantly changing too. This presents lab managers with several key knowledge management challenges, including how to identify:
- The knowledge possessed by the people
- The knowledge that is unique
- New knowledge needed by the organization
- How to effectively share and transfer knowledge

The knowledge owned by an organization can be located in numerous places. Most labs are familiar with the variety of concrete knowledge or documented knowledge around the lab. Familiar documents containing important knowledge include reports, notebooks, methods, databases, shared drives, and hard drives. The aspect of the organization's knowledge more difficult to identify and locate is the tacit knowledge contained in people's heads. Knowledge management is a set of processes and tools to address this organizational need. Here is a set of proven knowledge management processes and tools that will benefit most lab managers:
- Identification of critical knowledge (TVA grid)
- Knowledge-retention tools
- Knowledge mapping
- Communities of practice
- Idea management
- de Bono's six hats
- Best-practice sharing
- Lessons learned

Each of these tools will be discussed in this article.

Knowledge management tools

Critical knowledge grid

The critical knowledge grid2 used by the Tennessee Valley Authority (TVA) is an excellent tool to map who has the critical knowledge and how much risk there is of losing it. Figure 1 shows the TVA critical knowledge grid. [Figure 1: rows give the expected time to departure (leave within 2 years; leave within 6 years); columns give the criticality of knowledge (duplicate skills exist in company or easy to get in market; tacit knowledge but easy to transfer; tacit knowledge critical to going forward and hard to find in the market).]

The lab manager can use the grid to document and manage knowledge transfer based on which staff have what levels of critical knowledge and when they might be expected to leave the organization. Retirement is not the only reason, as people often exit the organization for transfers, promotion, or personal reasons, so being aware of workforce transitions is critical.

Knowledge retention tools

Sharing knowledge with colleagues is an excellent way to retain knowledge within the organization. Lab managers should use the tools in Table 1 during cross-training to retain specific knowledge in the organization. [Table 1: knowledge retention tools, divided into tacit tools and explicit tools; on-the-job training appears in both columns.]

Storytelling enables senior staff to tell some of their favorite stories, and they usually talk about why in addition to what and how. Effective examples of how NASA uses storytelling to transfer knowledge are given in DeLong.3 On-the-job training and shadowing are related tools.
In shadowing, the student watches the teacher execute a task, and in on-the-job training, the teacher watches the student work. Mentoring provides the opportunity to pass not only tactical knowledge but also culture from experienced staff to younger people. Writing internal wikis enables staff to explain pertinent details of the work and explain why different decisions are made. Lessons learned enable the lab manager to establish a learning culture and take advantage of both positive and negative outcomes for learning for the whole organization.

Knowledge mapping4 enables the lab manager to choose a specific process important to the organization and follow who requires specific elements of knowledge, who has it, and when it is needed. For many technical organizations, the knowledge map resembles other process maps that are familiar to technical staff. There are several benefits of constructing knowledge maps. The process of creating the map forces lab managers to think critically about what knowledge is needed. Using the maps emphasizes the importance of knowledge sharing and generates an effective tool for less-experienced staff. Of course, there are also challenges in creating effective knowledge maps, including getting the right people in the room and motivating people to share and manage organizational knowledge instead of hoarding knowledge. As with any other effective business process, an important challenge is to institutionalize the process so that the knowledge is always up to date.

Community of practice

Figure 2 shows the Air Products knowledge management model based on communities of practice.5 Communities of practice (COPs) are focused on a general area of interest. The object is to bring a group of volunteers together with responsibility to achieve the community's business goals. COPs are self-managed, hold regular meetings and events, and communicate regularly about the benefits they generate. The outcome of a COP is to nurture knowledge sharing and mutual learning from others within the community.

New challenges require new ideas. As lab managers, we need to have mechanisms to encourage, attract, and evaluate new ideas. There are many ways to ask for new ideas; for example, a physical idea box, a virtual idea box, email, dropping by to chat, the Internet, networking, and brainstorming. A management process that requires active management of ideas through submittal and workflow will work well for an organization. Once new ideas are generated, the ideas need to be sorted and evaluated. Mind-map software6 or other nonlinear tools can be very effective in sorting new ideas. All new idea submissions must be evaluated. That needle in the haystack may be there. In addition, all submitters must be notified about their ideas. Lack of feedback will stifle the flow of good ideas. Good ideas need to be developed. Some relatively small fraction of ideas will hit the mark.

De Bono's six hats

All humans carry unintended bias into most decisions. Lab managers need ways to counter the natural bias.
Some common biases that need to be addressed include:
- Confirmation bias—selective search for evidence
- Premature termination—accepting the first alternative that might work
- Cognitive inertia—unwillingness to change
- Selective perception—screening out information
- Wishful thinking—seeing things in a certain (usually positive) light
- Choice-supportive bias—distortion of memories of chosen and rejected options to make the chosen options seem more attractive

One useful tool to counter unintended bias is de Bono's six hats.7 Using the tool provided by de Bono enables a more objective way to evaluate ideas or make decisions. The six hats approach enables a group to effectively consider all sides of an issue. Everyone wears the same hat at the same time, and everyone participates in every part of the discussion. Table 2 shows the six hats.

Once the new ideas have been sorted and evaluated, some can be tried. The good old scientific method is often a good way to experiment with new ideas. Demonstrated ideas can be implemented. Implemented ideas can be good practices. Some good practices can become best practices — maybe it will work for someone else too.

A best practice is the current best way of doing work that has been implemented.5 The method is generating measurable benefits, and the idea can be replicated elsewhere in the company. Best-practice sharing brings many advantages to the organization, including helping it:
- Save money
- Share best vendors and pricing
- Rapidly share proven solutions to common problems
- Rapidly get input on possible solutions
- Seek proven solutions
- Rapidly share experience globally to related operations
- Connect people from different areas, businesses, or regions
- Rapidly share opportunities

As mentioned above in the discussion of knowledge retention tools, lessons learned can be a powerful tool to create and propagate a learning culture. A popular and highly effective process for the capture and fast transfer of lessons learned is the after-action review.8 Lessons learned are designed to enable individual and organizational learning. The tool can be utilized before, during, or after any event or project. The lessons learned approach is primarily a tacit knowledge tool; participation is the key. It brings insight to not only what, how, or when things were done but also why they were done. A lessons learned tool consists of five questions:
- What did you expect to happen?
- What actually happened?
- Why did it happen?
- What can we learn?
- What do we need to do based on our learning?

Both positive and negative results should be discussed. To enable full participation, no blame or finger-pointing is allowed. Lessons learned can bring significant benefits to the organization:
- Create a culture of learning
- Create a psychologically safe environment
- Share what people know
- Prevent the repetition of undesirable outcomes
- Appreciate new ideas
- Impart tacit knowledge that is difficult to express in writing
- Give background, context, and history; explain why
- Describe issues encountered
- Reveal how problems were solved
- Help align a team with their work

Knowledge is critical for any organization. Lab managers need to know the who, where, when, why, and how of a high number of different processes and protocols. To be successful in this role, the lab manager needs proficiency in a number of different knowledge management tools, a number of which were reviewed in this paper.
A strong knowledge management program will retain critical knowledge, seek new knowledge, and generate a learning organization.

The authors would like to acknowledge colleagues past and present at Intertek, Lehigh University, and Air Products.

2. Moria Levy (2011), "Knowledge retention: minimizing organizational business loss," Journal of Knowledge Management, Vol. 15, Issue 4, pp. 582-600.
3. DeLong, David, Lost Knowledge: Confronting the Threat of an Aging Workforce, Oxford University Press, New York, New York, 2004.
4. APQC, Knowledge Mapping Concepts and Tools (Collection), https://www.apqc.org/knowledge-base/collections/knowledge-mapping-concepts-and-tools-collection
5. APQC, The Air Products Profile: Best Practices in Integration, APQC Publications, Houston, TX, 2007.
7. De Bono, Edward, Six Thinking Hats, Little, Brown and Company, New York, 1985.
8. Collison and Parcell, Learning to Fly: Practical Knowledge Management from Leading and Learning Organizations, Capstone Publishing Limited (Wiley), 2004.
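To make the critical-knowledge grid described above concrete, here is a small, hypothetical Python sketch. It is not taken from the article or its references; the field names, thresholds, and roster entries are illustrative assumptions showing how a lab might record staff against the grid's two axes and flag the highest-risk cell: critical, hard-to-replace tacit knowledge held by someone expected to leave soon.

```python
# Flag staff whose knowledge falls in the highest-risk cell of the grid above:
# likely to leave soon AND holding critical, hard-to-replace tacit knowledge.
from dataclasses import dataclass

@dataclass
class StaffKnowledge:
    name: str
    years_until_departure: float   # estimated time to retirement, transfer, etc.
    criticality: int               # 1 = duplicated skills, 2 = tacit but transferable,
                                   # 3 = critical tacit knowledge, hard to find

def retention_priority(entry: StaffKnowledge) -> str:
    if entry.years_until_departure <= 2 and entry.criticality == 3:
        return "urgent: start knowledge transfer now"
    if entry.years_until_departure <= 6 and entry.criticality >= 2:
        return "plan mentoring, shadowing, or wiki documentation"
    return "monitor"

# Hypothetical roster entries for illustration only.
roster = [
    StaffKnowledge("analyst A", 1.5, 3),
    StaffKnowledge("analyst B", 5.0, 2),
    StaffKnowledge("analyst C", 10.0, 1),
]
for person in roster:
    print(f"{person.name}: {retention_priority(person)}")
```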
<urn:uuid:a91cc8e4-e05e-4297-8364-4c8a176416f1>
CC-MAIN-2021-43
https://www.labmanager.com/leadership-and-staffing/effective-knowledge-management-tools-and-techniques-2833
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00071.warc.gz
en
0.923073
2,287
2.765625
3
All seven novels analyzed in this thesis demonstrate an evolution of morality, both for individual characters and within the genre of children's fantasy literature when these texts are taken as a whole. Starting with Victorian morality and characters like Alice and Sara, who demonstrate a consciousness of proper societal practices and behaviors, both Carroll and Burnett portray the girls in such a way that they do not fully adhere to these practices, or rather, are independent of them. Alice, an example of a typical upper-class Victorian child, finds herself alone in Wonderland. This is not surprising because in upper-class Victorian culture, children are often kept separate from their parents, either physically kept in a different part of the home like the nursery or sent away to places like boarding schools where their primary care is given by another, usually a non-parent adult figure. This Victorian trait of separation is true in the stories of both Alice, who is physically kept separate from her family in Wonderland, and Sara, whose father sends her to boarding school in England. However, in both novels the child characters manipulate their settings in order to grow or mature independently. Alice claims power for herself in Wonderland when she stands up to the Red Queen, something which strays from the stereotypically "proper" behavior of Victorian girls, and by having her grow larger at the end of the novel, Carroll demonstrates a literal growth in her character. Similarly, in the world of her boarding school, Sara's use of imagination manipulates the space around her, allowing her to be a princess even in the confines of her destitution. Just as Alice differs from the "proper" Victorian child, so does Sara in her fluctuating social status. Though she is still an orphan, Sara's wealth has returned by the end of the novel, giving her a social status higher than that of Miss Minchin, who has treated her so poorly, and resulting in the social destruction of both Miss Minchin and the school. Though Miss Minchin is an adult, Sara's social power is one that separates her from the traditional Victorian power structure of adult and child. In the end, both Carroll and Burnett utilize their child characters' lack of parents or parental supervision—something not unfamiliar to the traditions of the upper-class Victorians—to promote the growth and independence of Alice and Sara as individual selves. In this way, both female characters are able to experience moral growth that is separate from the societal expectations of the Victorian era, something that is largely influenced by the manipulation of the fantastic second world that exists in each of the novels. Barrie, too, demonstrates a separation from the social expectations of the late Edwardian era in Peter Pan. Barrie blends character traits of both the middle and upper classes in his depiction of the Darling family. The family is rather close, something Barrie expresses in his portrayal of Wendy's love of sharing her mother's jewelry and knowing where Mr. Darling's medicine is kept, and yet the Darlings have both a nursery in their house and a nanny to take care of their children. Closeness of family is a trait commonly attributed to middle-class Victorian families, as they could not afford to keep a nanny on staff. Barrie mocks this, however, through his use of Nana, the Darlings' nanny who also happens to be a Newfoundland dog.
Although the family has both a nursery and a nanny, neither of these things can be taken seriously due to the ridiculousness of their nature. Thus, the family's status is questionable. Barrie's depiction of the Darling family is demonstrative of the shift of the Edwardian period away from its Victorian predecessor. Unlike in Alice's Adventures in Wonderland or A Little Princess, where it is clear what class status both girls possess through their reference to cultural norms (making it easier to detect when they depart from these norms), the Darling family is one that blends the moral and familial traits of both the upper and middle classes, confusing the reader's understanding of which class's morals they should be expected to follow. This is something that is also expressed in Wendy's desire to return home after her experiences in Neverland. Although she is a child who has grown up in a household that does keep the children partially removed from the parents (due to the physical separation of the nursery from the rest of the house), Wendy expresses pity for her parents upon realizing that she can no longer remember them. In the end, she actively desires to return home rather than stay young forever in Neverland. Wendy's acknowledgement of the real-world consequences of her actions while in the fantastic realm demonstrates her growth as a character, and this directly relates to Barrie's characterization at the beginning of the novel. Without the contradictory elements that make up the Darling family's dynamic, the moral growth of the children in Neverland could not take place, as neither Wendy nor her brothers would feel the pull to return home that they do, owing to the closeness of their family ties. As explored earlier in this essay, a shift in morality is also apparent in C.S. Lewis' The Lion, the Witch and the Wardrobe, where the four Pevensie children face moral challenges that stimulate their own growth as individuals and propel them toward adulthood. The coming-of-age subgenre, although present beforehand, increased in prevalence after the turn of the 20th century. Lewis' choice to merge this subgenre with his work of children's fantasy literature is interesting when his history with war is taken into account. The influence of Lewis' participation in World War I on The Lion, the Witch and the Wardrobe is unknown; however, parallels can be drawn between the innocence lost in war and the forced growth that many soldiers experienced, and the moral growth of the Pevensie children (Melton). The scenes of war in Narnia are often, if not entirely, depicted with clear divisions between those who are good and those who are bad. In the real world this is often not the case; however, it is this divide that allows the Pevensie children to grow in "goodness." This particularly applies to Edmund, whose initial introduction to Narnia was through the White Witch, who exposed his more selfish and evil characteristics. Edmund's character in particular undergoes an evolution that forces him to acknowledge his own faults and redeem himself in the eyes of Aslan, his siblings, and the reader. Whether or not Lewis intended to draw moral parallels between the war in Narnia and his own experiences in World War I, the two share the general idea of a progression of growth.
Written in a period closely following an event of worldwide strife, The Lion, the Witch and the Wardrobe provides a fantastic world in which the barriers between good and evil are very clear, and there is no doubt for the characters or the reader about who is "good." This clear divide is something which is desired in war but often not achievable, as "good" and "bad" are typically muddled, a truth that Lewis may well have known, and which led him to create a world where this is not so. Just like Alice, Sara, and Wendy before them, the Pevensie children also reject the real-world setting of their novel; most notably Lucy, when she refuses to question the goodness of Aslan and the righteousness of the fight against the White Witch. Both Peter and Susan also demonstrate these same qualities later in the novel when they do not hesitate to fight on behalf of Aslan and Narnia in order to defeat the evil witch queen. Unlike the turmoil the child characters experience in the real world of the novel (they are implicitly sent to the English countryside to escape the London bombings of World War II), Lucy, Susan, and Peter are able to escape the confusion of the war in the "real" setting of their narrative and instead take part in a war with clearly marked sides. Edmund, too, eventually joins the side of good and redeems himself. In this way, all four characters are able to progress toward adulthood with a positive moral growth that stems from their choice for good. Charlotte's Web and Bridge to Terabithia, also texts in the coming-of-age subgenre, have parallels to their authors' own lives, reflecting the moral growth of their characters. E.B. White's 1938 move from New York City to rural Maine greatly influenced the setting of Charlotte's Web, in which nature is emphasized as a haven for both character growth and the fantastic (Neumeyer). White was known to love the privacy and seclusion of his farm, a seclusion which is necessary in Charlotte's Web, as the barn acts as a fantastic "other-realm" for Fern in her growth as a character. A large portion of Fern's individual growth in the novel, though a concern for her mother, takes place due to her ability to communicate with the animals in her uncle's barn and to know them as her friends. The unusualness of her interactions with the animals in terms of what is "normal" outside of the barn (animals that do not talk) is what sets Fern apart as a character who, in her own way, rejects the traditional setting of her reality. At the same time, the seclusion of Fern's experiences in the barn also makes her transition toward adulthood at the end of the novel more prominent. White's fascination with nature and its influence on relationships, demonstrated by the rural setting of the barn and the friendships Fern builds because of it, makes her transition toward adulthood even more obvious when she chooses to stray from these friendships (Neumeyer). When Fern chooses to leave Wilbur at the fair in order to play with her male friend Henry, she is actively choosing to leave the isolation of the barn in order to partake in the real world. Her interest in Henry, or boys in general, indicates her growth from childhood to adulthood. Fern rejects the barn in order to mature in the real world. Similarly, Jess' relationship to Leslie in Bridge to Terabithia instigates his own transition toward adulthood while in a world that differs from the normal reality of the setting.
Paterson's novel, loosely based on the drowning of her son's childhood best friend, delineates a world in which the main character, Jess, is uncomfortable with himself. Leslie's demonstration of imagination eventually influences Jess in a positive manner, allowing him to forgive himself for his own fears. The traditional setting of the novel, in the southern United States, is itself a limit that defines the stereotypical roles of both Jess and Leslie, roles they each break when they become the king and queen of their imagined world, Terabithia. Paterson claims to have wanted to build a world in which the grief and self-exploration of her child characters can be experienced authentically (Misheff). At the end of the novel, Jess' moral growth, in terms of his self-actualization and his ability to perceive himself as strong, stems naturally from his experiences with Leslie. This can be attributed to the authenticity with which Paterson portrays her real-life experiences in the novel. Rather than submit to his grief at the loss of his friend, Jess is able to appreciate the lessons Leslie taught him during their friendship, and so is able to develop in moral maturity. Both in Charlotte's Web and in Bridge to Terabithia, Fern and Jess, like the other child characters examined in this thesis, are able to separate themselves from their settings in a way that allows them to grow morally. Fern learns lessons of morality and friendship in the fantastic second world of her uncle's barn through the conversations she holds with, and observes between, Charlotte and Wilbur. These lessons then allow her to explore the adult "real" world of the novel while growing as a character, a growth demonstrated through her experiences in the barn, a place that is both rural and removed, allowing Fern to interact with nature in an environment free from social restrictions. Jess, too, demonstrates a growth in his character when he welcomes his sister to Terabithia at the end of Bridge to Terabithia. Rather than allowing May Belle to succumb to her fears, Jess explains (in a way that is the opposite of his character at the beginning of the novel) that one should not be ashamed of one's fears. Like Fern, Jess is able to utilize the lessons he learns in Terabithia and transfer them to his experiences in the real world, demonstrating an individual maturation of morality. Lastly, Harry Potter and the Philosopher's Stone, like its predecessors in this thesis (particularly The Lion, the Witch and the Wardrobe), also depicts a conflict between good and evil that results in the positive development of morality for its child characters. Many of Rowling's early influences stemmed from her relationship with her parents, primarily her mother, whose death contributed to several scenes and depictions of love and death in Harry Potter and the Philosopher's Stone, and from Rowling's life as a single mother (Orford). Harry must confront Voldemort at the end of the novel and face the man who murdered his parents; however, it is by the love of his mother, who sacrificed herself when Harry was an infant so that he could live, that Voldemort is defeated at the end of The Philosopher's Stone. The love of Harry's mother, something which he yearns for in her absence and which he experiences at the end of the novel with his defeat of Voldemort, is a consistent theme throughout the text. Parallels can be drawn between this conflict and Rowling's own grief after her mother's passing.
Rowling draws on her own experience of losing a parent, making this not only one of the larger themes of the book but also one of the moral challenges that stimulates growth in Harry's character. Upon discovering that the love of his mother is still with him, demonstrated literally in his fight with Voldemort, Harry is able to truly embrace goodness and its triumph over evil. In much the same way, Rowling's experience as a single parent can likely be read as an influence on the growth of Harry's morality, demonstrated through his ability to establish a life for himself at Hogwarts that is separate from, and an improvement on, the life he knew with the Dursleys. After divorcing her husband, Rowling moved to Edinburgh, where she was jobless, a single mother, and had little familial support apart from her sister, who lived nearby (Orford). However, it was in this setting that Rowling was able to begin constructing the novel that would later lead to her immense success with the Harry Potter series. The lack of familial support Rowling initially experienced after leaving her husband is reflected in Harry's time with the Dursleys, a household that not only did not want him but despised him. Transitioning from the Dursleys' household to Hogwarts, Harry is able to make friends and establish a life that is far more morally rewarding than the one he had previously lived. Rowling, too, in her move to Edinburgh and her creation of Harry Potter and the Philosopher's Stone while in an ambiguous state of unemployment, experienced a significant transition in regard to what she knew as normal, and it is quite possible that this change in her personal life was reflected in Harry's own moral progressions. Tracing these moral progressions throughout all seven novels reveals a clear pattern of characters deviating from the societal norms of their personal realities. In each era of history and in each setting of the novels, the characters in all of these works of literary importance display traits of independence that strengthen each of them morally and, in many cases, reinforce a progression toward adulthood. These works are both a reflection of the eras in which they were published and a rejection of them. Authors Carroll, Burnett, Barrie, Lewis, White, Paterson, and Rowling have all produced characters that uniquely embody a change from what is considered societally acceptable to something that inspires personal growth and the achievement of "goodness." It is this change that allows readers to relate to, be inspired by, and learn lessons from the experiences of these child characters.
https://ultimatelyuselessstories.com/2016/05/09/thesis-post-4-conclusion/
How to Plant
- Define the borders of your lawn and calculate square footage.
- Rototill thoroughly, 8” to 12” deep, to break up any hard lumps of soil.
- Rake out all debris, stones, etc.
- Slope the soil away from your home and level it by dragging a 6′ x 6” board over the whole area. Then rake the soil even.
- With a roller, roll the whole soil area three times, in opposite directions, until firm.
- Apply a skiff of peat moss to hold moisture for the seed. If the peat is dry, add a little sand to secure it and prevent it from blowing away.
- Add ‘Seed & Sod Starter’ fertilizer at the rate specified on the bag.
- Seed using one pound of seed per 200 square feet (approx. 1 kg per 400 square feet); see the worked example below. Spread the seed in opposite, or crisscross, directions.
- Rake the seed in lightly.
- Water thoroughly and evenly, but don’t let the water puddle. Keep the seed moist! New seed must not dry out, especially when it has just germinated. Depending on the time of year, it can take 10-14 days to germinate. Seeding should be done when night temperatures are 8°C or higher.
- As soon as your new lawn is long enough, mow it – it’s like a good first pruning!

TIP: Each time you mow, mow in an alternate direction from the last time, i.e. north-south one time, then east-west, then diagonally, etc. This will keep your grass standing straight and will help prevent thatch from developing.

Did you know: 15m² of a healthy lawn releases enough oxygen daily for a family of four? (Scotts)

Ongoing maintenance takes less time than dealing with continual issues! Healthy lawns are a product of feeding, watering, mowing, overseeding and, in our area, regular liming and aerating.

A good fertilization program will colour up your lawn, keep it green and cut down on its overall water requirements. Well-fed lawns have root systems that resist heat, drought and wear, and thick top growth that makes it hard for weeds to take hold. Slow-release nitrogen fertilizers are, by far, the best for both lawn grasses and the environment. They keep grass green without excessive growth. Ideally you should feed your lawn three to four times a year: in early spring, late spring, late summer, and fall. Try using Easter, Victoria Day, Labour Day and Thanksgiving as reminders!

In our area, weekly watering is usually not a problem! Lawns typically need 2.5cm (1”) of water a week so that moisture penetrates the root system. Light, more frequent watering is ineffective and encourages the root system to stay shallow, which is not what you want. Watering deeply and less often encourages the roots to go deep and will help your lawn stand up to periods of heat, stress and drought. Use a rain or water gauge if you’re not sure about your depth. If using a sprinkler, watch it and adjust the settings as needed… you don’t want to waste any of this precious resource on the sidewalk! Whether you are watering by hand or with a sprinkler, try to do so in the morning instead of in the afternoon or early evening. Please be sure you always respect municipal water restrictions.

Lawns should be mowed to approximately 4cm (just under 2”) long. This length helps lawns retain moisture, encourages deeper roots and will help resist weed development. Try not to cut off more than one-third when you mow (it’s okay to leave this light amount on your lawn… less raking!). Be sure your blades are sharp and alternate your mowing pattern each time to prevent thatch.
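For those who like to check quantities before buying, the seeding and watering rates above translate into simple arithmetic. The short Python sketch below only illustrates the numbers quoted in this guide (one pound of seed per 200 square feet, about 2.5cm of water per week); the function names and the example lawn size are made up for this sketch and are not from any published calculator.

```python
# Illustrative only: applies the rates quoted above (1 lb of seed per 200 sq ft,
# about 2.5 cm of water per week). Function names and the example area are
# assumptions for this sketch, not part of any existing tool.

def seed_needed_lb(area_sqft: float) -> float:
    """Pounds of grass seed for a new lawn at 1 lb per 200 sq ft."""
    return area_sqft / 200.0

def weekly_water_litres(area_sqft: float, depth_cm: float = 2.5) -> float:
    """Litres of water per week: convert area to square metres, then 1 cm over 1 m2 = 10 L."""
    area_sqm = area_sqft * 0.092903
    return area_sqm * depth_cm * 10.0

if __name__ == "__main__":
    area = 1500.0  # example lawn of 1,500 sq ft
    print(f"Seed needed: {seed_needed_lb(area):.1f} lb")
    print(f"Water per week: {weekly_water_litres(area):.0f} L")
```

For a 1,500 square foot lawn this works out to about 7.5 lb of seed and roughly 3,500 litres of water a week, which is why a rain gauge and deep, infrequent watering matter.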
Overseeding helps rejuvenate tired lawns and thicken up healthy ones (making it even more difficult for weeds to take hold). To give grass seed a place to take hold, mow your lawn slightly lower than usual, use a rake to scratch up the area to be overseeded and sprinkle a bit of a peat moss/sand mix overtop. The best seed to use is ‘Natural Knit’ spreading rye grass. Perennial rye grasses are usually fine bladed and tremendously resilient, and they add new life and vigor to any lawn. Overseed at a rate of one pound of seed per 200 square feet (approx. one kilogram per 400 square feet). The seed will sprout in about seven to ten days. Like all grass seed, it must be kept moist the entire time it is germinating and even after the seed has sprouted. To get the seed off to a great start, use Seed & Sod Starter. Overseed after curing weed and moss issues as well.

Note: Adding 10-20% Dwarf White Dutch Clover or the new Microclover to your lawn during the overseeding process will make your lawn more drought tolerant, less nitrogen dependent and more pollinator friendly.

Liming is very important in our area, especially in heavy soils. Winter and spring rains drop the soil pH level and turn it acidic. When this happens, grass is less able to absorb nutrients and moss, which thrives in acid soil, starts to set in. Lime will help adjust the soil pH back to ‘neutral’, and it is best applied in fall and spring. Dolopril Lime is specially formulated lime for easier spreading and provides fast release and long-lasting results. A 10kg bag covers approx. 2000 square feet (see the worked example at the end of this guide). Lime is also welcome in garden beds, but be sure to avoid liming your potato patch, and keep it away from acid-loving trees and shrubs such as azaleas, rhododendrons, camellias, heather and blueberries.

Heavy winter and spring rains compact our soils. Aeration opens up the soil to allow oxygen to penetrate in and around the roots. It also creates better drainage and helps deter moss. Lawns should be aerated in the spring and fall, and it can be done properly with either a hand or mechanical aerator that pulls cores of soil out of the lawn. After aerating, rake out the plugs and apply 1/4” of coarse sand (rake it in). This will help keep the plugs open and percolating.

The best control for weeds is a thick, lush lawn where weeds can’t get a toehold and compete with your grass. That said, regular mowing, liming, aeration and overseeding will help you stay on top of things! Weeds are tenacious, however, so you will likely have to deal with them at some point. With the decrease in availability of broad-application herbicides we look to more eco-friendly alternatives. These products work differently than traditional solutions and must be applied when the weather is dry and above 10°C to maximize their effectiveness. Always read the directions on any lawn care treatment you purchase to ensure you are aware of the correct product use and function, mixing and application rates, safety precautions and first aid. Yes, even on organics!

The only way to truly eliminate moss from your lawn is to burn it off with a quality moss killer (containing iron sulphate). It can be liquid or granular but the procedure is the same. When you are confident of having 48 hours of dry, warm (above 12°C) weather after application, moisten the moss and then apply the moss killer. Once the moss has blackened, rake it out.
To keep moss away: keep the soil from becoming acidic by applying lime in the fall, aerate (and apply sand) in the spring and fall, and maintain good nutrient levels with slow-release nitrogen. Try to keep moss off your trees, roofs and sidewalks to prevent spreading.

Thatch is a condition where lawn grasses lie down and you end up mowing bent-over grass stems instead of upright grass blades. As a result, the grass is often brown after mowing. You can rent ‘dethatchers’ or purchase a ‘roto rake bar’, which is a blade with a spring on both its ends. Simply replace your rotary lawn mower blade with a ‘roto rake bar’ and tear up the turf, lifting all the bent grass stems. Overseed once complete.

Lawn Care Activities Throughout the Year

The activities below depend on your particular lawn care needs (i.e., don’t apply moss control if you don’t have moss!) and what the weather is doing. Mow and water as needed throughout the year.
- January: When using de-icer, try to avoid areas where runoff will impact lawns. Apply lime.
- February: Apply lime (if you haven’t already).
- March: Aerate, rake and apply 1/4” of sand. Fertilize (late March).
- April-May: Apply weed or moss control as needed/temperatures dictate. Fertilize after sufficient time has passed.
- June: Overseed.
- July: Fertilize (if you’ve missed one round). Mind watering restrictions during summer. Water deeply less often rather than lightly more often.
- September: Overseed (keep it moist!).
- October: Aerate and apply sand. Fertilize.
- November: Apply lime.

Remember, always leave a resting period in between lawn care activities. Wait ‘three mowings’ or a minimum of two weeks before performing a lawn treatment.

Some Common Lawn Care Terms
- Soil: Soil is a mixture of organic and inorganic materials, microorganisms, nutrients, air and moisture.
- Peat Moss: Decomposed sphagnum mosses. Helps keep moisture next to germinating lawn seed.
- Lime: Lime is an important source of calcium and magnesium for your lawn and garden. It is a basic substance and is used to increase soil pH, thus helping to neutralize acidic soil (most plants grow best in soils with a pH of 6-7). It is primarily composed of ground limestone, but newer forms such as prilled lime (Dolopril) are manufactured and enhanced for easier application.
- Aerate: To pull plugs of lawn and soil out of the ground, allowing for improved drainage and air circulation. Applying a 1/4” layer of sand over the aerated area will fill the holes and vastly improve drainage.
- Thatch: Matted lawn caused by mowing repeatedly in the same direction. The laid-down grass is often missed by the mower and lies on the grass as a thatch patch.
- Overseeding: Sowing grass seed over existing lawn to encourage thick, healthy lawn growth, or to fill in patches.

Frequently Asked Questions

Can I apply lime when I seed? No. Lime can be applied approximately four weeks before seeding or after the third mowing.

How long after seeding can I fertilize? Fertilize only after the third mowing.
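Pulling together the overseeding and liming rates quoted earlier (one pound of seed per 200 square feet, and one 10kg bag of Dolopril lime per roughly 2,000 square feet), here is a small follow-on to the earlier sketch. The rates come from this guide; the function names and the example area are again invented for illustration only.

```python
import math

# Illustrative follow-on to the earlier sketch, using the rates quoted in this
# guide: overseed at 1 lb of seed per 200 sq ft, and one 10 kg bag of lime
# covers roughly 2,000 sq ft. Names and the example area are assumptions.

def overseed_lb(area_sqft: float) -> float:
    """Seed needed for overseeding at 1 lb per 200 sq ft."""
    return area_sqft / 200.0

def lime_bags(area_sqft: float, coverage_per_bag_sqft: float = 2000.0) -> int:
    """Whole 10 kg bags of lime needed for one application over the lawn."""
    return math.ceil(area_sqft / coverage_per_bag_sqft)

if __name__ == "__main__":
    area = 1500.0  # same example lawn as before
    print(f"Overseeding: {overseed_lb(area):.1f} lb of seed")
    print(f"Liming: {lime_bags(area)} bag(s) per application")
```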
https://mintergardening.com/lawns/
Thirty years after the entry into force of the UN Convention against Torture (CAT), there is still a considerable implementation gap and torture continues to exist worldwide. One of the most significant developments over the last few years has been the establishment of National Preventive Mechanisms (NPMs). While these bodies carry out regular monitoring visits to places of detention, the follow up and implementation of their recommendations issued upon the visits remains a weak point. However, this will be decisive in proving that they can make the necessary sustainable contribution to the prevention of torture. This year the UN Convention against Torture (CAT) is celebrating its 30th anniversary and has been ratified by 156 States as of November 2014. As noted by the UN Special Rapporteur on Torture, if States implemented their obligations under the CAT and other relevant human rights treaties as well as soft law documents, torture could be effectively eradicated. However, there is a considerable implementation gap and torture can still be found worldwide, often as a routine practice. Monitoring of places of detention has been recognised as one of the key methods to prevent torture and ill-treatment. On this basis, 25 years ago the European Committee for the Prevention of Torture (CPT) was established to visit detention places in member states of the Council of Europe. The important work of the CPT served as an inspiration for the development of the Optional Protocol to the CAT (OPCAT), which entered into force in 2006 and established an international Subcommittee on the Prevention of Torture (SPT). These international mechanisms are however not in the position to carry out regular monitoring in all member states and do not have sufficient resources to follow up their recommendations and to offer support in their implementation. Therefore the more important contribution of the OPCAT is that it obliges its member states to establish independent National Preventive Mechanisms (NPMs) to carry out regular visits to all places of detention, report about the situation of torture and ill-treatment, make recommendations to the state and enter into a dialogue on their implementation. The gap in continuity and effective follow-up by international monitoring bodies can thus be filled by NPMs, if provided with the adequate mandate, powers and resources. Despite its importance, so far research on the follow-up procedures of NPMs and implementation of their recommendations is sparse. The ongoing research carried out by the Ludwig Boltzmann Institute of Human Rights and the Human Rights Implementation Centre at the University of Bristol on NPMs in the European Union shows that the follow-up and implementation of recommendations is still a weak point of most NPMs. While NPMs clearly see follow-up and monitoring implementation of their recommendations as their responsibility, most of them have not developed a specific strategy or tools in that regard. This weakens the effectiveness of NPMs, which may eventually lead to a certain ‘monitoring fatigue’ within the mechanisms and the monitored institutions. Ultimately it might even call into question the ability of NPMs to have a measurable impact on the prevention of torture and ill-treatment. Therefore it is of crucial importance that NPMs develop a clear strategy and invest the necessary resources in following up their recommendations and at the same time measure and promote their implementation. 
The basis of effective follow-up is comprehensive, consistent and analytical reports and effective recommendations. NPMs produce visiting reports and annual reports. In order to analyse some issues in more detail, some NPMs have also produced ‘thematic reports’ (eg. in Bulgaria, France, Poland and Spain). Regarding recommendations the ‘double-SMART’ criteria have been developed for NPMs suggesting that all recommendations should be: Specific, Measurable, Achievable, Results-oriented, Time-bound, Solution-suggestive, Mindful of prioritisation, sequencing & risks, Argued, Root-cause responsive and Targeted (source: Association for the Prevention of Torture). In view of ensuring adequate follow-up and implementation it is naturally crucial that recommendations state who should do what by when. Unlike international mechanisms such as the SPT and CPT, NPMs are not constrained by the principle of confidentiality, under which the authorities monitored are the ones who decide whether and when the findings are published, thus restricting the possibilities for follow-up. Instead the NPMs are meant to operate under the principle of transparency, opening up places of detention to public scrutiny. Therefore, it is key that NPMs publish visiting reports and recommendations (as well as relevant information on their working methods) and there are only a few NPMs in the EU who do not yet do so. Moreover, State authorities are obliged by the OPCAT to “examine the recommendations of the NPM and enter into a dialogue with it on possible implementation measures” (art. 22 OPCAT). In some countries this has been transposed into a legal obligation to either conform to the recommendations or inform about the reasons for failing to do so (eg in the case of Austria). The role of NPMs is to advise the authorities and provide guidance to find concrete solutions to the problem of torture and ill-treatment in a constructive and cooperative spirit. It requires many different ways of interaction to create a trustful relationship and promote the implementation of the recommendations. These may be formal (eg. through working groups) as well as informal (eg. regular meetings or phone conversations) but should go beyond just a written exchange. The dialogue should also engage different levels of the State, including the responsible staff in the institution visited. The assessment and documentation of implementation and the measuring of compliance with human rights norms is an essential part of the follow-up work of an NPM. It is all the more surprising that many NPMs do not yet document their recommendations in a systematic way or have no clear methods or a formalised system for measuring their implementation. The majority of NPMs in the EU stated that their primary tool for following up and assessing implementation are follow-up visits. However, some NPMs also document the recommendations and their implementation in a database, use indicators and benchmarks or encourage state actors to develop action plans as a basis of measuring implementation (see for example in Spain and the UK). Besides these and other good practices, NPMs could also benefit from the extensive research and practice of other institutions in measuring human rights to consequently develop their own methodology and strategy. Assessment and evaluation do not yet ensure the implementation of recommendations and many monitoring mechanisms complain about the failure of States to take adequate measures to prevent torture and ill-treatment. 
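The 'double-SMART' checklist and the practice of logging recommendations in a database suggest a very simple record structure for tracking implementation over time. The sketch below is purely illustrative: the field names, status values and example text are assumptions made for this blog, not the schema of any actual NPM system.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional, Tuple

# Purely illustrative: one way an NPM recommendation could be recorded so that
# follow-up and implementation can be measured over time. Field names, status
# values and the example below are assumptions, not any NPM's real system.

@dataclass
class Recommendation:
    text: str                        # ideally phrased to meet the "double-SMART" criteria
    addressee: str                   # who should act ("Targeted")
    deadline: Optional[date] = None  # by when ("Time-bound")
    status: str = "open"             # e.g. open / partially implemented / implemented
    follow_up: List[Tuple[date, str]] = field(default_factory=list)

    def record_follow_up(self, note: str, new_status: Optional[str] = None) -> None:
        """Log a dialogue step (meeting, letter, follow-up visit) and optionally update status."""
        self.follow_up.append((date.today(), note))
        if new_status:
            self.status = new_status

# Example usage (hypothetical recommendation)
rec = Recommendation(
    text="Install functioning call bells in all cells of the remand wing within twelve months",
    addressee="Ministry of Justice / prison administration",
    deadline=date(2015, 12, 31),
)
rec.record_follow_up("Discussed in joint working group; budget line allocated", "partially implemented")
```

Even a minimal structure like this makes it possible to report how many recommendations are open, overdue or implemented, which is one concrete way of measuring the implementation gap the article describes.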
A collaborative follow-up process would therefore be useful, one in which the NPM cooperates with state as well as non-state actors. While regular contact with state authorities is in place for all NPMs, the involvement of civil society is not always evident. Some NPMs include civil society organisations (e.g. Slovenia) or independent experts (e.g. Austria) or cooperate in joint visits to places of detention (e.g. in Bulgaria, Croatia and Estonia), and other NPMs cooperate with civil society representatives through special advisory bodies (e.g. Austria, Portugal, Spain). However, it seems that a formal inclusion of civil society does not always guarantee effective cooperation in practice. Despite the important role civil society plays in the prevention of torture, many NPMs have no particular cooperation strategy, and in some countries there appears to be a lack of trust on both sides. Many NPMs also lack a strategic approach to engaging with the media, although this is recognised as indispensable in order to ensure the visibility of the NPMs' work, to influence how information is channelled and to build partnerships to create public pressure. Moreover, regular exchange with international as well as other national monitoring mechanisms can be very useful for sharing ideas and experiences on working methods, as well as comparative good examples of implementation from other states. Overall, it appears that more time and resources within NPMs are needed to strengthen the implementation of their recommendations. This requires a thorough analysis of stakeholders and partners and the development of a concise strategy to bring about change in laws, institutions, skills and mindsets. The establishment of NPMs is a sign of great hope for progress towards the prevention of torture worldwide. However, whether NPMs can have a systemic and sustainable impact depends on the implementation of the recommendations addressing the root causes of the problem. This is naturally the responsibility of the State and requires political will by the authorities, but the NPMs play a pivotal role in this process by developing an effective strategy of follow-up and engaging with different stakeholders to advise and pressure authorities to comply with their obligation to prevent torture. More research and exchange on good practices among torture monitoring bodies can certainly help NPMs to reflect upon this role and develop their own approach and strategy. Ultimately, strengthening the follow-up and implementation of recommendations will be crucial for their success.

About the author

Moritz Birk is Head of the ‘Human Dignity and Public Security’ team at the Ludwig Boltzmann Institute of Human Rights in Vienna, Austria. The Institute is currently working with the Human Rights Implementation Centre at the University of Bristol on an EU-financed research project engaging with NPMs in the European Union to identify best practices of follow-up and to strengthen the implementation of their recommendations. The project will be concluded in May 2015 with the publication of a Good Practice study on follow-up and implementation of torture monitoring bodies. The examples stated above represent only preliminary research results.

Update! The report on this research is now available: Enhancing impact of National Preventive Mechanisms.
About this blog series

To mark our 25th Anniversary and prepare for the Crime Congress in Qatar in April 2015, PRI is running a series of monthly expert guest blogs addressing interesting current trends and pressing challenges in criminal justice and penal reform. Blogs will be available here on our website and as podcasts on the 25th of each month from May 2014 to April 2015. All blogs in the series so far can be found here.
https://www.penalreform.org/blog/turning-recommendations-reality-improving-impact-detention-monitoring-bodies/
Not surprisingly, the planetary boundaries framework has triggered scientific debates, where some criticisms have been presented. However, several of these criticisms have been based on key misconceptions, say lead author Johan Rockström and his co-authors.

Addressing some key misconceptions

Johan Rockström replies to criticism of the planetary boundaries concept.

The planetary boundaries concept has appeared prominently in discussions related to Rio+20 as a scientifically based framework in support of global sustainability and the development of sustainable development goals. Not surprisingly, the planetary boundaries framework has also triggered scientific and broader debates, where some criticisms have been presented. Critique is the basis of scientific progress and is welcomed in scientific endeavours such as the one we are engaged in. Nevertheless, several of the criticisms advanced in these discussions have been based on some key misconceptions. Here, we address these.

The planetary boundaries framework (hereafter called PB) has been criticised for not being well adapted to policy. It is important to stress that the planetary boundaries research is first and foremost designed to advance Earth System science. However, the fact that it has already attracted considerable attention in the policy sector suggests that it could indeed become a useful policy tool with further development. Our assessment is that the PB framework resonates with governments, businesses and NGOs because of the growing awareness of the risks facing world development as human pressures on a finite Earth system continue to increase.

The conceptual framework for the planetary boundaries research is based on identifying biophysical boundaries that are intrinsic to the operation of Earth as a system. They are not policy-based adaptations with assessments of land needs per capita, land productivity assumptions, or the amount of land required to provide the energy and food needs of a certain size of world population. In that sense, the concept is void of policy assumptions. In other words, we welcome policy-related discussions, but only beyond the biophysical boundaries we are trying to identify. These biophysical boundaries, once identified, potentially provide a framework for policy decision-making.

The original scientific questions we posed were:
1. What environmental processes regulate the stability of the Earth system?
2. Do these processes have well-defined thresholds at global or regional levels, or do they contribute significantly to the resilience of the Earth System?
3. What boundary positions do they have?

It was a scientific effort to identify the ample evidence that Earth not only is a coupled self-regulating system, but also a system with finite limits. We sought to identify boundary positions beyond which we cannot exclude non-linear changes in one or several sub-systems on Earth. It is up to societies to choose where the boundary position is placed. We chose to place it at the lower end of the uncertainty range in science as a measure of applying a precautionary principle (e.g., for climate change at 350 ppm CO2). One could also take a more risk-prone approach, opting for the higher end of our analysis of uncertainty, in this case at 550 ppm CO2. This is a social choice, but the range is based on an Earth System analysis. Critics have suggested an additional normative dimension, that the biosphere is the basis for human wellbeing.
We argue that this is no longer a normative issue for argument, but rather a fact based on empirical evidence.

Welcome to the Anthropocene

The PB research concludes, based on paleo-climatic evidence, that the environmental conditions of the Holocene are the only state that we know for certain can support the modern world we live in. It may be incorrectly perceived as a normative statement, but it is above all a robust and evidence-based conclusion which is difficult to refute:
1. Human civilizations only started to develop after the onset of the Holocene.
2. The Holocene is an unusually long and relatively stable inter-glacial period in the late Quaternary, providing a predictable and relatively low-risk geophysical environment for the biosphere, which in turn provides the basis for human prosperity.
3. Within the Holocene environment we have propelled ourselves to a world economy hosting seven billion people, committed to nine billion by 2050.

Before the beginning of the Holocene, human numbers were much lower and we existed in hunter-gatherer societies only.

Assessing the boundaries

We note that even in the recent critical discussions over whether or not there are tipping points in the Earth System, most agree that there is strong scientific evidence of tipping points in the climate system, in the stratospheric ozone layer, for ocean chemistry (i.e., acidity) and for phosphorus. These are four out of our nine boundary processes. Since the publication of the original Nature paper, Steve Carpenter and Elena Bennett have shown very clearly, based on ample evidence, that we have both a marine and a terrestrial planetary boundary, due to tipping points in freshwater ecosystems (Carpenter, S.R. and Bennett, E.M. (2011) Reconsideration of the planetary boundary for phosphorus. Environmental Research Letters 6: doi:10.1088/1748-9326/6/1/014009).

So, what about the remaining boundaries? Let's start with biodiversity. It's an interesting coincidence that almost at the same time as the recent Breakthrough Institute report was released, significant scientific support was added to a planetary boundary for biodiversity through two Nature articles published in June (Barnosky et al., doi:10.1038/nature11018; and Cardinale et al., doi:10.1038/nature11148). Indeed, the Barnosky paper goes even further than we did: it demonstrates the risk of a planetary-scale state shift if the human influence on biological systems (biodiversity and ecosystems) is pushed too far. The Cardinale paper neatly summarizes why biodiversity matters for human well-being and the risks associated with losing it. Our own research acknowledged the difficulty of setting a planetary boundary on how far humanity can afford to lose biodiversity before triggering non-linear changes in ecosystem functioning, with flow-on effects for societies, but there is enough evidence to demonstrate the critical role biodiversity plays for ecosystem resilience, i.e., the ability of ecosystems to stay in a desired environmental state.

On land use change, we assessed how much of the Earth's land cover we can change for anthropogenic purposes before risking major shifts in ecosystem functioning (habitats for biodiversity, carbon sequestration, water flows through landscapes, moisture feedback from terrestrial ecosystems, etc.). We made it very clear that land itself is not associated with a global tipping point but rather contributes to, as a slow variable, the resilience of terrestrial ecosystems.
This in turn is coupled with other boundaries such as water, biodiversity, nitrogen, phosphorus and climate. There is ample evidence of how land use change has turned productive landscapes into degraded (much less productive) areas, or forests into savannas and steppes. These are all examples of regional-scale tipping points, and some of them have global consequences via atmospheric teleconnections (e.g., substantial conversion of the Amazon Basin forests to savannas or grasslands).

On water, there is scientific evidence of water-induced tipping points at larger system scales. The Aral Sea is one example. Water has a regulating function for regional climate systems and is crucial for the stability of ecosystems. Needless to say, it is a sine qua non for all food production and carbon sequestration in landscapes. Unsustainable water use can push farming systems into degraded states.

We have always stressed the fact that many of the PB definitions are tentative. However, they all depart from a consistent, common approach of identifying non-linear changes or tipping points that can have dramatic impacts for humans. We concluded, as most critics should be aware, that water, land, biodiversity loss, nitrogen and phosphorus all constitute "slow variables" in the Earth System. We never claimed there were "planetary tipping points" for these slow variables, but rather evidence of tipping points at local and regional scales that add up to a global concern if they occur at the same time in multiple places on Earth (thereby causing local social problems and triggering feedbacks affecting regional to global scale processes, such as the hydrological cycle or the climate system).

There is much agreement

Despite the criticisms that have been raised, the discussions demonstrate that there seems to be a shared view that biophysical thresholds do exist and that resource constraints are a challenge for prosperity in the world. There also seems to be agreement on the scale challenge, because, if operationalised, planetary boundaries need to translate to the relevant scale where both the environmental and governance processes occur (which was also explicitly acknowledged in the original Nature and Ecology and Society papers on Planetary Boundaries). The governance implications of the planetary boundaries concept are a research challenge in their own right. This is why the original framework cannot simply be taken off the shelf and translated directly to operational policy. What it can do already at this stage, however, is to be used as a framework to guide sustainable development goals in the Anthropocene.
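The boundary logic described above, a control variable, a scientific uncertainty range, and a boundary placed at the precautionary end of that range, can be pictured with a trivial data structure. The sketch below is illustrative only: apart from the 350-550 ppm CO2 range quoted in the article, every value and name in it is a placeholder, not a published planetary-boundary definition.

```python
from dataclasses import dataclass

# Illustrative sketch of the boundary logic described in this article: each
# process has a control variable with an uncertainty range, and the boundary is
# placed at the lower (precautionary) end of that range. Only the 350-550 ppm
# CO2 figures come from the text; the current value below is a placeholder.

@dataclass
class Boundary:
    process: str
    control_variable: str
    precautionary_end: float  # boundary position (lower end of the uncertainty range)
    risk_prone_end: float     # higher end of the uncertainty range
    current_value: float

    def transgressed(self) -> bool:
        """True if the current value lies beyond the precautionary boundary."""
        return self.current_value > self.precautionary_end

climate = Boundary(
    process="Climate change",
    control_variable="Atmospheric CO2 (ppm)",
    precautionary_end=350.0,
    risk_prone_end=550.0,
    current_value=400.0,  # placeholder value for illustration only
)
print(climate.transgressed())  # True: beyond the precautionary end of the range
```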
On behalf of Rockström et al. (2009), Lead author and Executive Director, Stockholm Resilience Centre
https://stockholmresilience.org/research/researchnews/addressingsomekeymisconceptions.5.5d9ea857137d8960d471296.html
Intercession of the Theotokos

The Intercession of the Theotokos, or the Protection of Our Most Holy Lady Theotokos and Ever-Virgin Mary, is a feast of the Mother of God celebrated in the Eastern Orthodox and Byzantine Catholic Churches. The feast celebrates the protection afforded the faithful through the intercessions of the Theotokos (lit. Mother of God, the Eastern title of the Virgin Mary). In the Slavic Orthodox Churches it is celebrated as the most important solemnity besides the Twelve Great Feasts and Pascha. The feast is commemorated in Eastern Orthodoxy as a whole, but by no means as fervently as it is in Russia, Belarus and Ukraine. It is not a part of the ritual traditions of, and therefore is not celebrated by, the Oriental Orthodox Churches or some jurisdictions that allow Western Rite Orthodoxy. Yet the feast is perfectly consistent with the theology of these sister churches. It is celebrated on October 14 (OS October 1). The Western Rite Communities of the Russian Orthodox Church Outside of Russia (ROCOR) do celebrate this feast on their calendar (www.rocor-wr.org).

The Protection of the Theotokos or the Intercession of the Theotokos (Church Slavonic: Покровъ, Pokrov; Ukrainian: Покрова, Pokrova), like the Greek Σκεπή (Skepḗ), has a complex meaning. First of all, it refers to a cloak or shroud, but it also means protection or intercession. For this reason, the name of the feast is variously translated as the Veil of Our Lady or the Protecting Veil of the Theotokos. It is often translated as the Feast of the Intercession or the Feast of the Holy Protection. With some reservations, the Pokrov icon may be related to the Western Virgin of Mercy image, in which the Virgin spreads wide her cloak to cover and protect a group of kneeling supplicants (first known from Italy from about 1280).

According to Eastern Orthodox Sacred Tradition, the apparition of Mary the Theotokos occurred during the 10th century at the Blachernae church in Constantinople (modern-day Istanbul), where several of her relics (her robe, veil, and part of her belt) were kept. On Sunday, October 1 at four in the morning, St. Andrew the Blessed Fool-for-Christ, who was a Slav by birth, saw the dome of the church open and the Virgin Mary enter, moving in the air above him, glowing and surrounded by angels and saints. She knelt and prayed with tears for all faithful Christians in the world. The Virgin Mary asked Her Son, Jesus Christ, to accept the prayers of all the people entreating Him and looking for Her protection. Once Her prayer was completed, She walked to the altar and continued to pray. Afterwards, She spread Her veil over all the people in the church as a protection. St. Andrew turned to his disciple, St. Epiphanius, who was standing near him, and asked, "Do you see, brother, the Holy Theotokos, praying for all the world?" Epiphanius answered, "Yes, Holy Father, I see it and am amazed!"

According to the Primary Chronicle of St. Nestor the Chronicler, the inhabitants of Constantinople called upon the intercession of the Mother of God to protect them from an attack by a large Rus' army (Rus' was still pagan at the time). According to Nestor, the feast celebrates the destruction of this fleet sometime in the ninth century.

The icon of the feast, which is not found in Byzantine art, depicts in its upper part the Virgin Mary surrounded by a luminous aureole. She holds in her outstretched arms an orarion or veil, which symbolizes the protection of her intercession.
To either side of her stand numerous saints and angels, many of whom are recognizable to the experienced church-goer: the apostles, John the Baptist, St. Nicholas of Myra, etc. Below, St. Andrew the Fool for Christ is depicted, pointing up at the Virgin Mary and turning to his disciple Epiphanius. Usually, the veil with which the Virgin protects mankind is small and held either in her outstretched hands or by two angels, though a version similar to the Western European Virgin of Mercy image, with a larger cloak covering people, is found in some Eastern Orthodox icons. The Feast of the Intercession commemorates the miracle as a joyous revelation of the Theotokos' protection, which is spread over the world, and of the Mother of God's great love for mankind.

It is a religious holy day or feast day of the Byzantine Rite Eastern Orthodox Churches. It is not found in Oriental Orthodoxy or in some jurisdictions of Western Rite Orthodoxy, though it is perfectly consistent with the theology and beliefs of these sister churches. It is held annually on October 14, which corresponds to October 1 on the "Old Style calendar", i.e. the Julian calendar still used in some Eastern Orthodox Churches. It is served as an All-Night Vigil, with many of the same elements as occur on Great Feasts of the Theotokos. However, Pokrov has no Afterfeast.

Some, but not all, regions of the Russian Federation celebrate the Feast of Intercession as a work holiday. In Ukraine, it is celebrated on October 14 as a religious, national, and family holiday. The Mother of God as the Intercessor and Patron became firmly established among Ukrainians: princes, kings, cossacks, and hetmans chose the Mother of God as their patroness and protectress. An icon in the National Art Museum of Ukraine shows the Virgin Mary protecting the Ukrainian cossack hetman Bohdan Khmelnytsky. By decree of the Ukrainian President, October 14, Pokrova Feast Day, was promulgated also as Ukrainian Cossack Day (Ukrainian: День Українського козацтва).

October 1 (in the Julian calendar) is also the feast of St. Romanus the Melodist, so he is often depicted on the same icon, even though he and St. Andrew lived at different times. He is often shown directly below the Virgin Mary, standing on a bema, or on a kathedra, chanting from a scroll. The scroll represents the various kontakia which have been attributed to him.

Churches dedicated to Pokrov

Many Orthodox churches worldwide are named after this feast. The first churches dedicated to the feast of Pokrov appeared in Russia in the 12th century. Probably the most famous Russian church named for the feast day is Saint Basil's Cathedral on Red Square, Moscow, which is officially entitled "the Church of Intercession of Our Lady that Is on the Moat" (Russian: Собор Покрова пресвятой Богородицы, что на Рву), or, in short, the "Intercession Cathedral upon the Moat" (Russian: Храм Покрова "на рву"). Another is the Church of the Intercession in Bogolyubovo near Vladimir on the Nerl River (Russian: Церковь Покрова на Нерли, Tserkov Pokrova na Nerli). Both churches are on the United Nations' World Heritage List, the latter as part of the site White Monuments of Vladimir and Suzdal. There is also a Church of the Intercession of the Holy Virgin in St. Petersburg.
Other notable churches commemorating this feast are the Intercession of the Holy Virgin Russian Orthodox Church in Manchester, England, and the Russian Orthodox Church of the Intercession of the Holy Virgin & St. Sergius in Glen Cove, New York. Saint Mary the Protectress in Irondequoit, New York is a notable Ukrainian Orthodox church dedicated to the feast of Pokrova. A great many Ukrainian churches in Canada are also named in honour of this feast. In north Wales there is the Church of the Holy Protection (Eglwys yr Amddiffyniad Sanctaidd) at 10 Manod Road, Blaenau Ffestiniog, where the liturgy is celebrated partly in Welsh, as well as in English, Greek, and Church Slavonic. The church, under Archimandrite Deiniol, is under the omophor of the Ukrainian Orthodox Church of the diocese of Western Europe, under the Ecumenical Patriarch. Previously it belonged to the Belarusian Autocephalous Orthodox Church, and before that to the Diocese of Sourozh of the Moscow Patriarchate.

References
- "Protection of the Holy Virgin", Transfiguration of Our Lord Russian Orthodox Church, Baltimore, Maryland.
- "Holy Protection of the Virgin Mary". Protection of the Blessed Virgin Mary Ukrainian Catholic Church. Archived from the original on 5 January 2014. Retrieved 16 October 2013.
- Demchinsky, Sterling. "Icons of the Theotokos (Bohoroditsia)". Ukrainian Churches of Canada. Retrieved 16 October 2013.
- Moran, Neil K. Singers in Late Byzantine and Slavonic Painting, p. 126ff, Brill, 1986, ISBN 90-04-07809-6.
- "Feast of Intercession Celebrated in Ukraine as Religious and National Holiday". RISU - Religious Information Service in Ukraine. 14 October 2012. Retrieved 16 October 2013.
- "Про День Українського козацтва Президент України; Указ від 07.08.1999 № 966/99". Verkhovna Rada of Ukraine. 7 August 1999. Retrieved 16 October 2013.
- Shvidkovsky, D. S. (2007). Russian Architecture and the West. Yale University Press. ISBN 978-0300109122. p. 126.
- St. Petersburg website. Accessed February 7, 2010.
- Pokrov Church of the Russian Orthodox church website from UK. Accessed February 7, 2010.
- Покров Пресвятой Богородицы (in Russian). The article was used for the iconography description.
- Basil Lourié. The Feast of Pokrov, its Byzantine Origin, and the Cult of Gregory the Illuminator and Isaac the Parthian (Sahak Partcev) in Byzantium.

External links
- Celebration of Pokrov in Russia
- Icons of the Intercession
- The Protection of our Most Holy Lady the Mother of God and Ever-Virgin Mary: Icon and Synaxarion of the feast
- The Feast of the Holy Skepi of the Theotokos, from the website of the Greek Orthodox Archdiocese of America
- Saint Andrew, Fool-for-Christ
- (in Russian) Pokrovsko-Vasil'evsky monastyr (Protection-Basil monastery)
- Pokrov Foundation, a Bulgarian Orthodox Christian organization
https://en.m.wikipedia.org/wiki/Intercession_of_the_Theotokos
- A device, usually one of a pair of rings connected to a chain, that is attached to the ankles or feet to restrict movement.
- Something that serves to restrict; a restraint: the fetters of tyranny.

tr.v. fet·tered, fet·ter·ing, fet·ters
- To put fetters on; shackle.
- To restrict or restrain: thinking that is fettered by prejudice.

Within Buddhism, fetters are primarily discussed in the earlier schools of Buddhism, and the term is typically translated from the Pali term samyojana into English as chain or bond. There are a number of ways of conceiving of them:
- Intrapsychic phenomena that tie us to cyclical, habitual states of being and experiencing
- Structures embedded within the mental and emotional layers of an individual bound to a cyclical, atomistic self
- Collective psychological and emotional planes which we are submerged in from birth

Phenomenologically, it might be better to define them as psycho-emotional patterns centred on the phantom I that are maintained through interwoven fictional narratives that are personal and historical, collective and ideological. In any of the descriptions above, they are expressed or lived through habitual behaviour, thought patterns, feelings, belief patterns and assumptions, visible and implicit, all entwined in conditioned sensory habits of perception.

In the Pali canon ten fetters are identified:
- Belief in a self (Pali: sakkāya-diṭṭhi)
- Doubt or uncertainty, especially about the teachings (vicikicchā)
- Attachment to rites and rituals (sīlabbata-parāmāsa)
- Sensual desire (kāmacchando)
- Ill will (vyāpādo or byāpādo)
- Lust for material existence, lust for material rebirth (rūparāgo)
- Lust for immaterial existence, lust for rebirth in a formless realm (arūparāgo)
- Conceit (māna)
- Restlessness (uddhacca)
- Ignorance (avijjā)

These fetters will be discussed in conjunction with the awakening stage they are part of below. It is interesting that fetters were originally considered not only very difficult to remove but to span lifetimes. This brings up a question regarding the ontological nature of emotion, as many of the fetters are connected to feeling. What are emotions exactly? At a very basic level they are a form of energy that moves through the body. The primary emotions are shared amongst all humans and animals alike and, since we are not in possession of them, it would seem that they represent a shared spectrum of energy movement. From a non-dual perspective, emotions do not exist as independent objects to be afflicted with or as forces to be controlled: they are simply part of the fluctuation of human experience.

The collective nature of fetter formation needs to be highlighted, as it is very often downplayed in Buddhist teachings. Our social reality is based on creating subjects, consistent persons that interact through reliable identities shaped from birth to adulthood. Identities that adhere to social norms in order to reproduce and sustain the dominant ideology, which is not a single fixed form out there somewhere, but more akin to a map that we are situated in and which we confuse for reality. Due to Buddhism's limited elaboration of the collective dimension of me-making, it is unable to provide sufficient means for breaking through our embeddedness in the collective me-making of our society, culture, generation, historical phase, etc. Because it cannot provide sufficient tools for addressing our collective self, it can only watch passively, or offer a Buddhist identity as an alternative means for navigating such terrain.
Finally, since we do not have a single conclusive definition of what mind is, and considering that Buddhist definitions can be contradictory, we cannot objectively posit the fetters as truly existing within the structure of the brain or within consciousness. At this point, recourse to a phenomenological exploration of the fetters and how they are typically experienced by an average individual is the logical option if we want to take this model into consideration. A map is a map after all; it is not the geographical features it attempts to record. Taking a phenomenological approach, the question that arises is how these phenomena are experienced by people and how we define those experiences in strictly human terms.

Stage one: stream entry

Taking nirvana as freedom from, the four stages can be defined in terms of what we progressively free ourselves of. In each case, the four stages signify a break from identification with a number of fetters. I will stray further from traditional descriptions in an attempt to establish a phenomenological reading. The three fetters dismantled during the first stage are:
- Identity view/self-identity (personal, direct perceiving of the emptiness at the root of the phantom I and experiencing a profound destabilising shift as a result)
- Sceptical doubt (specifically regarding the truth of non-self, impermanence and its implications, and the root causes of the suffering-self)
- Clinging to rites and rituals (recognising the role of the symbolic, disidentification from dominant symbols, no longer being enamoured of solely symbolic forms, or the stabilisers of identity; usually accompanied by an appreciation for the role of direct experience over theory)

The first fetter is concerned with how we actively view the self. At a more instinctive or primitive level it is simply how we state ‘I’ and how that resonates with an assembly of interwoven narratives which solidify a sense of uniqueness that is special, somehow separate from the world and very much ‘me’. The illusion is of a fixed, permanent self that exists apart from the world: connected to it, yet somehow separate. This is the most important fetter to break with as it forms the foundation for all the others. Gaining freedom from it requires that we free ourselves of this illusion and see clearly how the self as we thought it to exist is empty of any solid, fixed features; it is hollow and beset by spaciousness. The first fetter is an intrapsychic phenomenon and a form of psycho-emotional entrapment, and as such gaining freedom from it implies a major break from the nucleus of self-identity. We recognise ourselves as selves that are embodied through the habitual flavours, moods and acts of our senses, thoughts, physical sensations and relational habits to events, spaces, objects and people. We play out stilted roles that are infused with gaps. Seeing through the first fetter must occur holistically for an uncoupling from all this to occur. Phenomenologically speaking, it is to be experienced in the body through sensations, through the senses as clear perception, and as piercing clarity of mind. This fetter is the most important of all and represents the foundational break from an illusory I. Not only does it represent the key Buddhist insight of emptiness, but it opens up the ability to view others, experience and phenomena as also being devoid of a permanent, fixed self-nature. It is funny really, because this in itself is not such a big deal.
We know objectively through the sciences, but also through western philosophy dating back to Hume, that nothing is fixed and eternal. To know it firsthand and to experience an override of the delusion of an atomistic ‘I’ pushes against so much of what constitutes our sense of self that it is easier said than done. That does not mean it is not possible, however, or a task that needs to be relegated to future lifetimes or decades from now.

The second fetter is sceptical doubt. Typically this is worded as sceptical doubt regarding Buddhist teachings. Shorn of Buddhism as a social construct, how does such a thing exist and dissolve for a person who is not a Buddhist? That is to say, if a non-Buddhist gains freedom from this fetter, how does he or she experience it and know it to be so? If sceptical doubt traditionally refers to the Buddha’s teachings, which teachings should we assume are confirmed by this process? Do we include moral injunctions to avoid oral sex, for example? A crude example, I admit, but the point should be clear: doubt in this case has to be directed towards phenomena that are not restricted to Buddhism. Sceptical doubt then ought only to refer to phenomena that are directly visible and knowable in the world we inhabit. Direct insight into impermanence, the absence of atomistic selves, the nature of the suffering-self and the need for some form of ethical behaviour if we are to avoid creating unnecessary suffering are the best candidates, and none of these are the property, real or otherwise, of Buddhism.

The opposite of doubt is faith. Scepticism, on the other hand, points to critical engagement. We must keep in mind that the fetters are psycho-emotional phenomena and are not restricted to intelligence and the rational mind. There are different forms of faith. Blind faith is a form of ignorance based on grasping at certainties and immaturity; I usually think of it as needing mummy or daddy to take care of you. Faith in its most basic meaning implies confidence and trust. Faith in the foundational truths of Buddhism can emerge through witnessing them at play within and without. This naturally flows from direct, experiential perception of the vacuous nature of our own form.

Clinging to rites and rituals

The third fetter is the most unusual; it clearly relates to forms of behaviour and belief, and in its wording appears to imply religious or spiritual activity. I have always found it odd that this should arise at the initial stage of awakening. Buddhism abounds with both rites and rituals, so my initial thought was: why would this be the case? In attempting to tease this model from the hands of Buddhism, I began to think about it differently. If the self is a narrative that is sustained by habits, in feelings, actions, thoughts and relationships, then what we have immediately is a sense of how to proceed. We are by nature ritualistic creatures, and rites might be redefined, not as exclusively religious or spiritual, but as the acts that we carry out to affirm and solidify the feelings, conclusions, sensations, thoughts and beliefs that make up the scaffolding that surrounds the phantom I. We engage in rituals collectively that have the same function of maintaining agreed-upon ideas regarding identity and the range of experiences we can have, emotions we can feel, thoughts we can explore. We might not define them in such terms, but any decent sociologist will tell you that society and relationships are ritualistic by nature.
Seeing through such forms may lend itself to a radical liberation from the ideological prisons that make up our self-structure, absorbed and adopted from the society, familial circumstances and education that we were moulded by. This begins to sound a lot more radical than talk of how many lifetimes are left before the samsaric prison break. This view may explain why retreat is the preferred method for inciting the movement into stream entry, considering that such an environment requires a solid break from our everyday lives and isolation not just from distractions, but also from the networks of interbeing that sustain our particular form of self.

Stream entry as metaphor may be understood thus. The stream may be thought of as the continuous and uninterrupted flow or emergence of being, with the loss of these fetters leading to three distinct changes in self-identification:
- Self-referential conditioned & habitual being relaxes, and increasingly dissolves into an open sensorial merging with what is immediate.
- Confidence in this openness, in groundlessness and ongoing emergent being, builds and undermines the returning echoes of the self structure that was previously inhabited.
- We lose faith in the ritualistic formalities of our existence, relationships and habits of self and can no longer maintain the status quo. Ideological allegiance becomes forced, difficult to sustain. Ideas of ideological purity fall apart and an open expanse becomes visible, filled with the projects of man.

What takes place within all this is an emerging and ongoing meeting between the infinite (emptiness, space, meaninglessness if you prefer) and the remains of our limited conventional-self. Phenomenologically, in achieving stream entry, we experience a flow of ever widening perception into the illusion of the self and selves, and are met with, for want of a better term, the remarkableness and open-endedness of being and inter-being. What emerges is increasing room to respond creatively to ongoing circumstances. This becomes possible once we have discarded the suffocating nature of self-referentialness and the obsessions and compulsions of the atomistic self. Along with all this, there is an immense reduction in the types of suffering categorised under the term dukkha, and this brings us into line with the main promise of Buddhism.
The media reports warn that the Fukushima nuclear plant accident has been rated at level 7, the top of the scale. What does this mean, and what consequences can such an accident have?

Sunday's events at the Fukushima nuclear power plant raised the alert level on the INES scale, the instrument used worldwide to communicate the severity of nuclear events. Under this International Atomic Energy Agency scale, nuclear events are classified into seven levels. The newspaper El Mundo explained that levels 1 to 3 are considered "incidents", while levels 4 to 7 are "accidents", and that each step up the scale indicates an event roughly ten times more severe than the previous one. Fukushima is at level 7.

One by one:

Level 1. Overexposure of a member of the public above the statutory annual radiation limits, minor problems with safety components, or the loss or theft of a radioactive source.

Level 2. Exposure of a member of the public above 10 millisievert, or exposure of a worker above the annual regulatory limits. Level 2 also covers radiation levels above 50 millisievert in an operating area, significant contamination within a facility, major safety failures without significant consequences, or an inadequately packaged high-activity source.

Level 3. Exposure of a worker at ten times the annual limit, non-lethal health effects such as burns, severe contamination in an area not designed for it, loss or theft of a high-activity source, or errors in handling one.

Level 4. Release of radioactive material accompanied by a death from radiation, fuel melt or fuel damage causing a release of more than 0.1% of the core inventory, or release of considerable quantities of radioactive material within a facility.

Level 5. Limited release of radioactive material requiring countermeasures, several deaths from radiation, severe damage to the reactor core, or release of large quantities of radioactive material within an installation, as occurred at the Windscale Pile in Britain in 1957.

Level 6. Significant release of radioactive material, as occurred in 1957 at the Kyshtym plant in Russia. According to the article, this was the level assigned to Fukushima at the time of writing.

Level 7. Major release of radioactive material with widespread effects on health and the environment, requiring extensive countermeasures. This was the case in the Chernobyl disaster of 1986.

PS: Do you have more messages from Tom Kenyon, and do you receive them regularly? It would be great to open a group where we could post his messages - I simply LOVE them - they carry HIGH FREQUENCY - LOL!

Very Happy you enjoyed the high frequency energies of the Hathors ~ I resonate highly with them as well, which is why I have been following Tom Kenyon for about 10 years now. Yes, I do subscribe to his posts/channellings. Also, they are found on his website under the "Hathors" tab in the "Archives." Tom specializes in sound healing, raising our frequencies through Hathor channelled sounds. There are free sample mp3 downloads available as well as the full versions for sale on his site: www.tomkenyon.com Check it out ~ there are a lot of other interesting things he does as well. It would be my Pleasure to post his channellings when they are released, for LightGrid. Infinite Love & Light to YOU Dear Angel Sister, and to ALL, CrystaLin Joy

O.K., this is not what I thought I was going to be doing today . . . However, since my inner guidance has taken over, I am now to share this.
Years ago a good friend and famous Oracle, Judith Moore, said I was going to discover a way to neutralize radiation from things like Uranium. She said I would be creating and discovering several other things that were completely unrelated. I did not resonate with what she was saying at the time since much of the stuff was not in areas I tend to work with. Bottom line . . . I flat out thought she was wrong. Fast forward to this morning. I have a million things to do. I'm writing (channeling) several books simultaneously. I have a presentation to give later today. I have emails to answer, phone calls to make, and a son to homeschool. Yet I found myself filing of all things. Filing. Boxes and boxes of loose papers, folders, etc. needing to be put away. (There is a point I promise) I saw the messages coming in about the Nuclear meltdown and the various things to do for it. Just as I'm reading a bit of that, I came across this page of channeled notes in my box to file. It reads: The platinum ray harmonizes everything. (That will be very important in a minute). The turquoise ray is the ray of prosperity and opulence. The sapphire blue ray is the ray of protection. It is the color of cobalt blue. It is white inside with cobalt blue outside. This is the color of Divine Love. A few days ago I told Sonja Myriel that I had a meditation to assist channels in cooling down their body when they channeled and that I didn't know where to post it. Now I understand why. I am to explain it here and then alter it to a group meditation done to cool down and harmonize the reactors at the nuclear facilities. Let's all pray that Judith did in fact know what she was talking about so long ago. Using the Platinum Ray to heal the friction caused when high frequency (whatever you are channeling ~ energy or information) comes into contact with lower frequency (the frequency of your body and this earth plane). I always see the person seated (not sure why). Imagine a drop of liquid platinum (sort of like liquid mercury, but definitely not that vibration, a mixture of gold and silver in color) being placed on top of their head right at the crown. Imagine it melting down into their body. It coats the entire brain, then travels down the spinal column, and out every nerve all the way to the tiniest tips of the nerves. You are coating inside and outside each of the nerves. Friction causes heat, which causes inflammation, which causes tiredness, and speeds up the aging process. The person is literally being fried from the inside out as the high frequency energy and information comes through their body. The vessel itself ~ the human body ~ must be purified and cleansed internally and externally. The nutrients it receives should be coming from blessed life food sources along with plenty of pure water. And I also highly recommend mineral supplements. (I use a particular kind if anyone is interested). Minerals burn up quickly when you are channeling energy and information. This purification assists in not only allowing the body to be more in resonance with the higher frequencies, it also allows the information to be received with less distortion. But it is not enough. Once the nerves have begun to "heat up" from the high frequencies being received, additional steps are required to cool it down and keep it cooled down. That is the purpose of the liquid platinum visualization above. If you have platinum jewelry, wear it. I'm wearing my wedding band (for the diamonds and platinum energy frequencies) even though I am no longer married. 
I just decided I was now married to my Light work and the ring is a reminder of that. I have also created an essence I call the "Platinum Principle" to go along with the next book by the same name in the series I'm working on. Anyway, I get that it will greatly assist channels in dealing with all this. Back to the visualization. The liquid platinum not only cools down nerves that are already being irritated from the friction, it coats them like a lubricant. In an engine, if you don't keep the various parts well lubricated, you get friction, which causes heat, which eventually causes irreparable damage. Let the liquid platinum act as the necessary lubricant creating complete harmony between whatever frequency you are and the frequency of the information you are bringing in. On to bigger things: Cooling Down the Nuclear Facilities: Just like the human body and dealing with frequencies, that is exactly what is happening inside the nuclear plants. Very high and fast frequencies are causing great irritation with the surrounding vessel ~ building in this case. See the entire facility and surrounding area in a giant sphere of platinum light. Ask the dolphins to come and surround the sphere weaving their magic, sending the frequencies necessary to neutralize and heal everything. Then see a giant drop of liquid platinum at the top of the sphere and then watch it melt down into the sphere and the facility and the surrounding area coating everything in liquid platinum ~ cooling, healing, neutralizing everything in it's path. Let not even a molecule go uncoated. Do this as a group and repeat at least every 12 hours. In Love and Light, Please share as you feel guided! Thank you so much, beloved sister! I can already feel the cooling energies of thhe platinum drop helping me to cool down - feels great :-) The first part of your description of how to cool down the nuclear reactor corresponds exactly to what I have been doing all along! I also like your desctiption of the cobalt blue ray and will use this picture you give to replace the simple blue cloak of protection which I have been holding for each and every person concerned. I see each person now in a cobalt blue cloak of protection - cobalt blue outside - white inside - and inside of it pink magenta LOVE serves as yet another protective energy, which helps people to see what is needed at a certain time to help themselves and others by staing in the high vibrations of LOVE. I will immediately post this in our group "Help for Mother Earth and Her Beings" and share the discussion with all the members of lightgrid. Arctorus also posted a discussion on how to cool the reactors: and it contains this picture wich I find most helpful to better visualize and understand how a reactor works: This is the link
HIV strains are unable to enter macrophages that carry the CCR5-Δ32 deletion; the average frequency of this allele is 10% in European populations. A mathematical model based on the changing demography of Europe from 1000 to 1800 AD demonstrates how plague epidemics, 1347 to 1670, could have provided the selection pressure that raised the frequency of the mutation to the level seen today. It is suggested that the original single mutation appeared over 2500 years ago and that persistent epidemics of a haemorrhagic fever that struck at the early classical civilisations served to force up the frequency to about 5×10⁻⁵ at the time of the Black Death in 1347.

The transmembrane CCR5 chemokine receptor is used by HIV strains to enter cells of the immune system such as macrophages and CD4+ T cells.1–4 The CCR5-Δ32 deletion prevents the expression of the receptor on the cell surface and provides almost complete resistance to HIV-1 infection in homozygous individuals and partial resistance in the heterozygous state.5–8 Later studies showed that men who were heterozygous had a 70% reduced risk of HIV infection compared with individuals who did not carry the mutation.9 The average frequency of the CCR5-Δ32 deletion allele is estimated to be 10% in European populations, but it is virtually absent among native sub-Saharan African, Asian, and American Indian populations.5,8,10,11 A north to south gradient in Europe was found, with the highest allele frequencies in Finnish and Russian populations (16%) and the lowest in Sardinia (4%).11 HIV has not existed long enough in the human population to account for this selection pressure. From computer analyses based on coalescent theory, the age of the CCR5-Δ32 bearing haplotype is thought to be approximately 700 years (but with a wide range of 275 to 1875 years).10 Here we present a population-genetic model, based on the demography of Europe, in which annual widespread epidemics of plague, a viral haemorrhagic fever, from 1347 until 1670 forced up the frequency of the Δ32 mutation to the present day values.

There have been various suggestions for the selection pressure that acted on the Δ32 mutation:

- The single pandemic of the Black Death (1347–50), assumed to be bubonic plague,10 which killed some 40% of the population of Europe. At most, this would merely have doubled the frequency from about 5×10⁻⁵ to 10⁻⁴ (that is, if half the population are killed, the proportion of the protected, resistant individuals in the population would rise twofold).
- Epidemics of bubonic plague every 10 years for 400 years. Mathematical modelling shows that this hypothesis is not valid12 and, furthermore, it has been shown that Yersinia pestis, the bacterial pathogen of bubonic plague, does not use the CCR5 receptor for entry.13
- Smallpox epidemics every five years in Europe for at least 620 years from 1347 to 1970.12 This hypothesis also is not valid: a lethal form of smallpox appeared in England only in about 1628,14 and before that date smallpox was “not reputed to be a serious malady.”15 Total annual smallpox deaths in London did not rise consistently above 1000 before 1710.16 Variolation began in 1750 and vaccination in 1800, and the number of deaths from smallpox declined dramatically thereafter; the disease had disappeared from Europe by 1900.16 It would therefore have had no effect whatsoever in forcing up the Δ32 frequency between 1900 and 1970.
Thus smallpox could have acted effectively only during the period 1700 to 1830, whereas the modelling12 shows that over 600 years of epidemics would be required to raise the frequency to 10%.

HYPOTHESIS FOR THE SELECTION PRESSURE

The area where the Δ32 mutation is found today corresponds exactly with the range of the plagues. It has been suggested17,18 that the pathogen responsible for the Black Death and all the subsequent plagues was an unknown emergent virus with a 100% case mortality which caused a haemorrhagic fever. To avoid confusion, we have named this disease haemorrhagic plague.17 We suggest that this virus also gained entry into the cells of the immune system through the CCR5 chemokine receptor and that the regular plague epidemics of the Middle Ages in Europe served to force up the frequency of the CCR5-Δ32 mutation in that area.

Any population-genetic model that seeks to explain the rise in the frequency of the CCR5-Δ32 mutation must take into account the changes in the population structure of Europe, the mortality and immediate effect of the Black Death, and the changing spread of the plagues. The total population of Europe is estimated to have risen from 3×10⁷ in AD 1000 to 7.4×10⁷ at the Black Death.19 Afterwards, it rose again from 5.3×10⁷ in 1400 to 8.8×10⁷ in 1600 (fig 1, line A); extrapolation of the line shows that the overall mortality in the Black Death was around 40%. The rate of population growth in Europe was the same before and after the Black Death (fig 1, line A), but from 1600 to 1700 it was around zero. The number of places in Europe reporting plague epidemics each year is shown in fig 2;20 the epidemics were more widespread after 1570, which suppressed population growth (fig 1, line A) and increased the selection pressure sharply.

In the model, the population is divided into three classes: susceptible homozygotes (z), heterozygotes (h), and resistant homozygotes (r). Three state vectors, each with 65 entries, are defined, describing the number of women in each age group (that is, 0–1, 1–2, … 64–65) within the three classes at time t. We have previously reported the use of a conventional Leslie matrix to model such a population.16,21 Mortality and fecundity determine how the number of women in each age group evolves from t to t+1. The annual probability of survival at age x from all sources of mortality other than haemorrhagic plague is μx. The function μx for female survival in the years without plague (1000 to 1346; 1670 to 1800) is the stable age distribution Model West level 10, and during the plagues (1347 to 1670) it is Model West level 2.21,22 The number of female offspring born to a female of age x is mx, and the mean number of daughters born to a woman is 2.25.16 The incidence of plague mortality (denoted by σ in the model) consistent with fig 2 is 2.1% of the population per year from 1351 to 1574 and 2.65% per year from 1575 to 1670. The diploid model12 is based on a single locus with two alleles: a common allele at frequency q in susceptible individuals and a rare dominant resistance allele at frequency p = 1−q. The rare resistant homozygotes are assumed not to have died from the plague (that is, ir = 0). The heterozygotes may have caught the disease but did not die from it, for which there is substantial evidence, particularly in the 17th century17 (that is, ih = 0). Susceptible homozygotes died from the disease (iz = 1).
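To make the structure of the model concrete, the following is a minimal Python sketch of an age-structured simulation of the three genotype classes; it is not the authors' published code. The Model West survival schedules, the fertility function mx, and the paper's exact difference equations are replaced here by flat, assumed rates, newborn genotypes are allocated by simple random mating at the current allele frequency, and the starting number of women in 1347 is an assumed figure, so the output illustrates the mechanism (plague mortality confined to susceptible homozygotes) rather than reproducing the published trajectory in fig 3.

```python
# Simplified sketch of the age-structured diploid model (not the published code).
# Three genotype classes are tracked across 65 one-year age groups of women:
#   z = susceptible homozygotes, h = heterozygotes, r = resistant homozygotes.
# Plague mortality sigma falls only on z (i_z = 1, i_h = i_r = 0).
# SURVIVAL and DAUGHTERS are flat placeholder rates standing in for the
# Model West life tables and fertility schedules used in the paper.

P0 = 5e-5                   # initial frequency of the resistance allele in 1347
SURVIVAL = 0.96             # assumed flat annual survival from non-plague causes
DAUGHTERS = 2.25 / 20.0     # 2.25 daughters per woman, spread over ages 15-34

def plague_mortality(year):
    """Annual plague mortality among susceptible homozygotes."""
    if 1347 <= year <= 1350:
        return 0.125        # Black Death: ~40% cumulative mortality of susceptibles
    if 1351 <= year <= 1574:
        return 0.021        # sigma = 2.1% per year
    if 1575 <= year <= 1670:
        return 0.0265       # sigma = 2.65% per year
    return 0.0

def allele_freq(z, h, r):
    """Frequency of the resistance allele in the whole (female) population."""
    total = sum(z) + sum(h) + sum(r)
    return (sum(h) + 2.0 * sum(r)) / (2.0 * total) if total else 0.0

def step(z, h, r, year):
    """Advance the three genotype-by-age vectors by one year (t to t+1)."""
    sigma = plague_mortality(year)
    # Survivors move up one age class; only susceptibles die of plague.
    new_z = [0.0] + [n * SURVIVAL * (1.0 - sigma) for n in z[:-1]]
    new_h = [0.0] + [n * SURVIVAL for n in h[:-1]]
    new_r = [0.0] + [n * SURVIVAL for n in r[:-1]]
    # Births enter the first age class; genotypes are allocated by random
    # mating at the current allele frequency (an assumption of this sketch).
    mothers = sum(z[15:35]) + sum(h[15:35]) + sum(r[15:35])
    births = mothers * DAUGHTERS
    p = allele_freq(z, h, r)
    q = 1.0 - p
    new_z[0] = births * q * q
    new_h[0] = births * 2.0 * p * q
    new_r[0] = births * p * p
    return new_z, new_h, new_r

def run(start=1347, end=1670, women=3.7e7):
    """Return the predicted allele frequency after the plague era."""
    per_age = women / 65.0
    q = 1.0 - P0
    z = [per_age * q * q] * 65
    h = [per_age * 2.0 * P0 * q] * 65
    r = [per_age * P0 * P0] * 65
    for year in range(start, end + 1):
        z, h, r = step(z, h, r, year)
    return allele_freq(z, h, r)

if __name__ == "__main__":
    print("Predicted Delta-32 frequency in 1670: {:.4%}".format(run()))
```

Run with these placeholder rates, the sketch shows the qualitative behaviour argued for above: a modest annual survival advantage confined to carriers, compounded over roughly 320 years, raises a very rare allele by several orders of magnitude. Matching the quantitative 10% result of the paper would require the actual life tables and the full difference equations of reference 12.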
The population dynamics of the system are defined by difference equations. The changes in the number of susceptible homozygotes, heterozygotes, and resistant homozygotes in age class 1 during a single time step are given by the difference equations of reference 12, in which nx(t) = zx(t) + hx(t) + rx(t); the dynamics of z, h, and r in age classes 2 to 65 are given by the corresponding equations for the older age groups.12 The initial frequency of the resistant allele before the Black Death in 1347 is given by p0 = 5×10⁻⁵.12

RESULTS OF THE MODELLING

The growth of the population of Europe from 1000 to 1800 predicted by the model is shown in fig 1, line B, and corresponds closely with the actual estimates.19 The predicted rise in frequency of the resistance allele from 1347 to 1670, during which time it was forced up by the selection pressure of the plagues, is shown in fig 3. The final predicted gene frequency is 10%. The results of the modelling are consistent with the data contained in figs 1A and 2 and with the suggestion that the frequency of the CCR5-Δ32 mutation was forced up from 10⁻⁵ to present day levels of 10% by a continuous selection pressure over 320 years (1347 to 1670) of epidemics of a lethal, directly infectious viral haemorrhagic fever. These conclusions are consistent with the fact that the plagues, like the CCR5-Δ32 mutation, were confined to Europe.

The CCR5-Δ32 containing ancestral haplotype was estimated by the use of coalescence theory to have originated around 700 years ago (range 275 to 1875), coinciding with the date of the Black Death.10 However, by this time the gene frequency was at least 5×10⁻⁵,12 and the origins of the mutation must have been much earlier. Libert et al11 conclude that most if not all Δccr5 alleles originate from a single mutation event that took place a few thousand years ago. The rates of crossing over between microsatellite IRI3.1 and Δccr5 place this occurrence some 3500 years ago (95% confidence interval, 400 to 13 000), whereas calculations using the frequency of microsatellite mutations estimate that this event occurred 1400 years ago (375 to 3675).11 We have suggested17,23 that the CCR5-Δ32 appeared over 2500 years ago and that its frequency was forced up from the initial single mutation10 to the frequency in the mid-14th century by sporadic epidemics of haemorrhagic plague which occurred widely over the eastern Mediterranean area during a very long time span. Examples of these outbreaks before the arrival of the Black Death in 1347 include the following:

- Haemorrhagic fevers in the Nile valley in Pharaonic Egypt, 1500 to 1350 BC. The evidence is from medical papyrus sources.24
- Haemorrhagic fevers in Mesopotamia, 700 to 450 BC and about 250 BC. The evidence is from the diagnostic handbook prepared during the reign of Marduck-apliddina.25
- The Plague of Athens, 430 to 427 BC. Thucydides left a detailed description of the symptoms of the disease, which corresponds closely with accounts of the Black Death.26
- The Plague of Justinian, which originated in Ethiopia, moved down the Nile Valley and onwards to Syria in AD 541 and thence to Asia Minor, Africa, and Europe, arriving in Constantinople in AD 542. Procopius described the symptoms and there are striking similarities, both with the Plague of Athens and the Black Death.
The Plague of Justinian continued until AD 700, with epidemics flaring up repeatedly.17
- The plagues of the early Islamic empire (AD 627 to 744).26

An unpublished report presented at an international conference on ancient DNA at the University of Queensland claims to have detected the CCR5-Δ32 mutant allele in four of 17 Bronze Age skeletons dating to about 900 BC from a burial site in central Germany. Such findings fit readily within the time course above and suggest that haemorrhagic plague, in addition to the major historic outbreaks, may have been grumbling along in eastern Europe from 1000 BC.

The steady improvement in public health measures contributed to the elimination of haemorrhagic plague by 1670,17 but the major factor was the sharp rise in the frequency of the Δ32 mutation in the towns and cities of Europe in the 17th century (fig 3), where the epidemics were largely confined, in contrast with the villages and countryside, which were rarely struck. If the overall average frequency of the CCR5-Δ32 mutation was 10%, the proportion in the larger towns, where the selection pressure was heavy, must have been very much greater. With so many resistant individuals in these conurbations, the population of susceptible individuals did not exceed the threshold density16 and, gradually, epidemics failed to explode. The effect was found first in France, the plague reservoir, which had passed its peak by 1646.17 Thereafter, England became the plague epicentre, culminating in the Great Plague of London in 1665.

The selective advantage of the Δ32 mutation was lost once the plague had disappeared after 1670, and its frequency would then be expected to fall slowly by genetic drift over the next 300 years. Two independent factors may have served to maintain the frequency of the resistance allele during this time.

First, preliminary studies have suggested that there are links between protection against smallpox and HIV; older people who had been vaccinated against smallpox were less likely to contract HIV. Experiments with human blood cells have shown that vaccination confers, on average, a fourfold reduction in the infectivity of HIV (unpublished report from George Mason University, Virginia, USA, 2003). Myxoma poxvirus uses the CCR5 receptor to gain entry to its target white blood cells in rabbits.27 Before 1628, when a lethal strain of smallpox appeared in England, the disease would not have exerted any selection pressure on the CCR5-Δ32 allele, but we suggest that the resistance mutation may have provided at least partial protection from smallpox in the 17th and 18th centuries, so maintaining the selection pressure on the mutation. Thus non-European races, which were never exposed to plague, might historically be expected to be particularly susceptible to smallpox: American indigenous populations were especially badly hit when smallpox was introduced by conquering Europeans.16 When smallpox arrived in the Shetland island of Foula in 1720, fewer than 10 of 200 inhabitants survived the epidemic.28

Second, there is good evidence that haemorrhagic plague did not disappear completely in 1670 but continued in Scandinavia, Poland, Russia, and on the European-Asian borders.
The plague of Copenhagen in 1711, in which 38% of the population died, was the last outbreak in Denmark, but the Health Authorities there maintained strict control of its borders because Poland continued to be ravaged at regular intervals until 1800.29,30 This maintenance of haemorrhagic plague in northeastern Europe provided continuing selection pressure on the CCR5-Δ32 mutation and explains why it occurs at the highest frequency in Scandinavia and Russia.

Competing interests: none declared
How the Rockefellers coopted modern medicine and used polio to create vaccine mythology for profit

by Bob Livingston, Personal Liberty [original column here]

As the 19th Century turned over to the 20th, John Davidson Rockefeller's Standard Oil Trust was coming under increasing public pressure, receiving mounds of bad publicity and beginning to feel political pressure even as the company amassed more power. In 1892 the Ohio Supreme Court had declared the Standard Oil Trust to be an illegal monopoly and ordered its dissolution. The company complied with the court's order in appearance, but in reality the directors and trustees all held their power and positions and, if anything, the company just amassed more power and wealth as it diversified. U.S. government action to break up the Trust began in earnest in 1901 with actions to charge the Trust with violation of the Sherman Anti-Trust Act that had been passed in 1890.

In addition to oil and gas companies, refineries, transportation companies and banks, Rockefeller owned politicians and the political process. So Rockefeller, the richest man in the world, was becoming "the most hated man in America" by virtue of the negative press over his monopolistic practices and his bucking of the government's actions to break up the Trust. Now he sought to own the public relations by bribing public opinion through a pretended interest in their health.

Whether for truly philanthropic and altruistic motives – as court historians claim – or for more nefarious ones, Rockefeller founded the Rockefeller Institute for Medical Research in 1901. He did so at the behest of his right-hand man, Frederick Taylor Gates – a former Baptist "modernist" minister who was really a charlatan with a penchant for amassing wealth under the guise of philanthropic fronts. In other words, Gates would have made a great television evangelist had television been in existence during his heyday.

Rockefeller brought in some of the nation's top medical men on offers to pay them an average of $20,000 per year each (almost $500,000 when adjusted for inflation) for 10 years to "stimulate medical research." For that, and for funding the Rockefeller Institute Hospital and providing grants to address public health concerns (like the bacterial contamination of the New York milk supply), Rockefeller received good publicity. The research they churned out resulted in dozens of medical patents that Rockefeller used to amass more wealth.

The medical education field was a growing one in the early 20th Century, with the homeopathic and allopathic schools of thought competing for dominance. Medical schools were springing up across the country – some of good quality and some more dubious. Rockefeller and Gates had as their ultimate goal the control of medical education, its prestige and its profits. Rockefeller created the General Education Board and established Gates as its head. Gates' medical experience consisted of his father's medical education. Gates' father, however, never practiced medicine and instead became an itinerant preacher. Rockefeller's medical education also had come through his father: "Doc" William Avery Rockefeller was a snake-oil salesman who peddled a tonic called Rock Oil that he claimed would cure cancer.
In 1910 the Rockefeller-Gates General Education Board combined with the Carnegie Foundation and the American Medical Association to buy their way into controlling interests in the major medical schools, propping them up with grants and "securities" to fund medical education, and began a campaign of calumny and slander against those schools in which the socially and religiously elect political bosses had no interest. The competing schools were represented as low-grade and inferior. This forced the shuttering of over half the medical schools in the U.S. It gave the Rockefeller-Gates/Carnegie/AMA cabal total control over medical education and licensing in the country and granted them the power to dictate how medicine could be practiced and who could practice it.

Meanwhile, the Rockefeller Medical Institute was churning out medicines and distributing their manufacture to various Rockefeller-owned drug companies, including IG Farbenindustries in Germany – a conglomeration of six major German pharmaceutical companies. (As a side note, the IG Farbenindustries building is the only German building the allies were forbidden from targeting during World War II, and it survived the war virtually unscathed.) Once they'd essentially taken over the drug manufacturing industry, the Rockefeller-Gates/Carnegie/AMA cabal not only controlled who could practice medicine and how medicine could be practiced in the U.S., it also controlled which drugs could and could not be prescribed. And it required the prescribing of drugs based on profit rather than on whether they were effective or even harmful.

Fast forward now to the 1950s and the rise of the "polio epidemic." A polio epidemic was declared by the medical powers when an area saw 35 cases per 100,000 in a given year. The number of polio cases in the U.S. increased until 1952, when it peaked at around 58,000 cases with about 3,200 deaths: this in a nation of 157.6 million people, as shown in the graph below from ProCon.org.

On April 12, 1955, the U.S. government licensed the Jonas Salk vaccine, which contained an inactivated polio virus. Less than a month later the vaccine was suspended to investigate cases of paralysis caused by the vaccine. After changes were made to production methods, the vaccine program was restarted on May 27. In August of 1960 Albert Sabin, M.D.'s oral polio vaccine was licensed and recommended by the U.S. Surgeon General. Salk's vaccine was phased out in 1968.

The Rockefeller Institute for Medical Research, the Roosevelt Warm Springs Foundation and The National Foundation of Infantile Paralysis (March of Dimes) combined forces to push a PR campaign to turn polio into a major health crisis and promote the need for vaccines. PR master Carl Byoir and stars like Sammy Davis Jr., Tennessee Ernie Ford and Lucille Ball were employed to promote the need for the Salk polio vaccine. But by the time the vaccine began to be used on Americans, the number of polio cases had declined to 28,985 with 1,043 deaths. Polio cases in England and Wales were also dropping precipitously, with no vaccine. In 1956, six New England states reported sharp increases in polio rates, from more than double in Vermont to 645 percent in Massachusetts, despite – or rather because of – the polio vaccine program. Idaho and Utah saw such an increase in polio cases and deaths that they halted the vaccine program.
Salk was, in fact, a medical criminal on the order of Josef Mengele who used Federal funds to conduct medical experiments on helpless patients at a Michigan insane asylum. The Rockefeller-controlled pharmaceutical companies — which benefitted financially from the polio outbreak, with Byoir in charge of the public relations — funded the writings on the history of polio and its treatment. As such, the pharmaceutical companies were presented in the best possible light. Americans were not told about the many people who developed paralysis after being vaccinated against polio. Nor were they told about David Bodian, M.D., Ph.D., from the Poliomyelitis Laboratory at Johns Hopkins University. Bodian told the International Poliomyelitis Conference in 1954 — the year before the polio vaccine was introduced — injections and other vaccines, such as the DTP vaccine, “may be causing polio.” Polio’s rise in the United States coincided with an increased use of pesticides in the 1950s, many of which were byproducts of chemical weapons manufactured during World War II. These pesticides were found to increase susceptibility to viral infections, studies showed. Exposure to pesticides also came through milk, which was heavily contaminated with pesticides and from crops being dusted with government-approved pesticides. Much of the milk had to be destroyed, according to a U.S. Senate report. Researchers recognized a higher incidence of polio among those who had undergone tonsillectomies. Ice cream, made from contaminated milk, was commonly given to children following tonsillectomies. And tonsils play a key role in warding off infections — a role not understood in the mid-20th Century. The “decline” in suspected polio cases in the U.S. can be traced to sleight-of-hand by the Centers for Disease Control and Prevention, which over the decades following 1950 began to reclassify what was and was not considered polio. In other words, over the last 60-plus years the CDC eradicated polio by simply giving a different name to diseases that would have in the 1950s been called “polio.” In 2002 the journal Lancet published a study showing that about half of the cases of non-Hodgkin’s lymphoma being diagnosed annually were linked to a monkey virus called SV-40. SV-40-contaminated polio vaccines were given to a generation of Americans in the 1950s and 60s. SV-40 is now being found in brain and bone tumors and in other cancers previously rare that are becoming more prevalent. It has also been detected in lung cancers and lymphatic cancers. Former Food and Drug Administration virologist John Martin said, “SV-40 infection is now widespread within the human population almost certainly as a result of the polio vaccine.” How widespread? About 200 million Americans were exposed to SV-40 between 1955 and 1963, and estimates put the cancer rate in that population at one in 200 as a result of exposure to SV-40. And researchers have found that it can be transmitted sexually and can be passed down from mother to child. Evidence of SV-40 infection have been found in children born after 1982, leading some experts to suggest the virus may have even remained in the polio vaccine until as late as 1999, even though it was suspected by some researchers like Dr. Maurice Hilleman and Dr. Benjamin Sweet in the late 1950s. Yet the myth that polio was eradicated by vaccine drives the modern-day vaccine schedule in which children receive about 26 vaccinations before they reach the age of 18 months, and another dozen or so before age 18. 
With strategically planned and executed applied social control systems as the backdrop, what are some of the unintended consequences of adopting a policy of "shots preventing disease"?

Firstly, parents are consistently, relentlessly, and blatantly lied to that their children must be up to date on their vaccines in order to attend school. Rarely are they informed that they have the option to apply for a vaccine exemption (religious, medical or philosophical, depending on the state). However, new pharmaceutical-backed School Based Health Clinics are being implemented around the country, and parents may soon find that they no longer have a choice in the matter.

Secondly, what about going to the pediatrician? The "Polio Question" is unsheathed immediately by many "brave" pediatricians to pierce through any parental doubt on vaccination. They constantly remind parents of the graphic pictures of children in iron lungs, memories which are etched in our collective consciousness. Parents are then scolded, "You don't want polio to come back, do you?" When many sincere parents continue to ask questions the pediatrician cannot answer, those questions are promptly slain.

Third, the "Polio Question" has become a potent weapon of social ridicule. Parents attempting to offer "valid opinions," empirical observations, and anecdotal stories of their personal experiences with their vaccine-injured children have become the object of ridicule. They are marginalized as not "adequately educated" by a bona fide, scientific medical institution. Moreover, "rebellious parents" are regularly accused of child neglect and freeloading on society. They are maliciously threatened with the heavy hand of Child Protective Services and the Judicial System. It matters not that a majority of the parents and family members who are actively questioning vaccines have observed firsthand a life-threatening reaction to vaccines. Which brings us to the most egregious strategy of vaccine compliance: persecuting families with vaccine-injured children.
One set of hoofs or pair of feet can find, but never make, a path. It is the constant repetition of hoofs or feet "going the same way" that beats down the grass, leads to the water hole and the river ford, points out the low place in the mountain, and marks the trail from start to destination. Feet of the wild animal made the first trails, short or long. Moccasins of the Red Man beat them down and extended them. The conqueror and the religious found and followed them. American hunter and trapper etched them more distinctly, afoot or on horse. The creaking wheels of the trader's ox-wagon cut them deeper. Boots of the soldier raised their dust and sank in their mud. And then, not too many years ago, twin bands of steel defined them permanently. Trains sped along the trail that the buffalo or antelope had started, and that the Indian, the Conquistador's horses, the steps of the "black robe", the fur hunter's wanderings, the wagon train's camps, and the soldier's campaigns had scratched across the surface.

Caravan on the Santa Fe Trail in the 1850's crossing the Pawnee River in the Great Bend section of Western Kansas near the present-day site of Larned, Kansas, on the Santa Fe main line. (From a painting by M. Gundlach.)

Before the railroads came, all commerce between the Missouri River and the Rocky Mountains was carried on by caravans of pack mules and wagon teams. The most notable highway across the prairies was known as the Old Santa Fé Trail, between the Missouri River and Santa Fe, N. M. The expedition led by Captain Becknell, which went overland from Franklin, Mo., in 1821, marks the beginning of important wagon trade between these points, though the first pack-mule party for Santa Fe was outfitted as early as 1804. In 1825-27 the U. S. Govt. surveyed a line through from Fort Osage (Sibley), trading posts being established there and at Independence. Independence was the principal eastern terminus until 1848, when it was superseded by Westport Landing (Kansas City), and later, in 1863, by Fort Leavenworth. The Santa Fe Railway reached the city of Santa Fe in 1880, and the well-worn trail became a thing of the past.

The map reproduced on page 4 shows the route of this historic trail in sufficient detail to enable the traveler on the Santa Fe Railway of today to see where the two run almost side by side. The old trail is marked by granite monuments erected by the D. A. R. From Independence to Santa Fe, wagon parties routed by way of the Cimarron cut-off traveled about 775 miles. The Upper Arkansas River route, across Raton Pass, was much longer (850 miles) but safer. There were so many conflicts with hostile Indians beyond Council Grove that detachments of U. S. Troops often went along to guard lives and property.

The earlier caravans of pack-mules usually numbered 75 to 200 animals and made 15 miles a day. After the introduction of prairie "schooners," drawn by mules or oxen, the jornada, or day's journey, was seventeen to eighteen miles. At first the traders made only one trip a year, but by 1860 caravans left every few days. An average caravan consisted of 26 wagons, each drawn by 5 yoke of oxen or 5 spans of mules. A wagon load was five to seven thousand pounds, and an average day's journey 17 miles. In 1846, 375 wagons were employed, also 1,700 mules, 2,000 oxen and 500 men; this was increased, by 1866, to 3,000 traders' wagons. During the height of the traffic 50,000 ox-yokes were used annually.
The largest train (1 mile long and 4 columns abreast) was composed of 800 army wagons carrying supplies for General Custer's Indian campaign of 1868. The first overland mail stage coach started from Independence for Santa Fe in 1849; in the early 60's daily stages were run from both ends of the route; each Concord coach carried 11 passengers, the fare being $250, including meals; the trip required 2 weeks. Today on the Santa Fe Streamliner, the journey consumes only 14 1/4 hours, and the railroad fare is about $20 one way in chair car. Shortly after the beginning of the year 1848 gold was discovered in California and the California territory was transferred to the United States. These great events brought out the real and immediate need for a good transcontinental trail, a route across the mountains, rivers and deserts from where the old Santa Fe Trail left off onward to the golden sands of California. By 1848 early pioneers had started two trails west of Santa Fe -- the old Gila Trail and the old Spanish Trail -- both had serious drawbacks. A third, or "middle" trail, not yet so well explored or known, was on the eve of becoming the most favored of all -- for foot, for horse, for wagon and later for railroad. Its mileage was right. Its condition was good and its scenery beautiful. Parts of this trail were known as the "Zuni Trail", or later on as the Albuquerque or "middle" route; to the Army Topographic Corps it was the 35th parallel route and this was the path chosen for the modern Santa Fe Railway "Trail of Steel." Here are a few facts and descriptions of each of the three trails beyond Santa Fe in 1848. This northernmost of the three New Mexico-California routes followed the path of Escalante and Dominguez to their crossing of the Green River, then turned southwest to the Virgin River, traversed the Mohave Desert, and arrived at Los Angeles through Cajon Pass -- the pass which the Santa Fe Railway follows today. A variation of this route lay further east of the Cajon Pass and turned northward to the San Joaquin Valley, then cut through the mountains by either the Tehachapi or Tejon Passes. Jedediah Smith, that famous pathfinder, had traveled the western half of The Old Spanish Trail in 1826, as Escalante and Dominquez had the eastern division a half century before. In 1828, James Ohio Pattie, trapper, adventurer and "tall tale" teller, covered the western end, but it remained for William Wolfskill to become the first American recorded who traveled The Old Spanish Trail completely. He led a company of trappers over it between New Mexico and California in 1830-31. What were the advantages and disadvantages of these two westward paths? The Old Spanish Trail went far enough north to avoid the Apaches, whose name meaning either "enemy" or "robber" bespeaks their constant threat. And there was water along The Old Spanish Trail. But is was by far the longest way to California. The Old Gila Trail was much more direct than the Spanish Trail, but it ran through the dangerous Apache Country and water holes were few and far between. Palmer's most famous railroad work, however, was building the Denver and Rio Grande and fighting for its interests against all comers. Later, during the eighties, he was identified with the construction of Mexican railroads. One line which he built to El Paso ultimately was absorbed by the Nickerson interests. It was more than evident that quick, sure means of communications and paths of travel should link the west and east. 
The Government began to take an active interest in routes to the Pacific, as a railroad enterprise to the West Coast could no longer be ignored. Thus, early in the decade of the 50's, a transcontinental railroad project received support in both branches of the Congress. On March 31, 1853, an act was passed entrusting the War Department to "make such explorations and surveys as it might deem advisable in order to ascertain the most practicable and economical route for a railroad from the Mississippi to the Pacific Ocean." Last, but not least, the War Department was granted the necessary appropriations. It was first intended only to make a reconnaissance of the Southern route and the one through South Pass, but later the Secretary of War, Jefferson Davis, added the northern route. Davis made a report, December 1, 1853, explaining the routes to be examined and added copies of the instructions to the various engineers selected. The actual routes reconnoitered were know as those of the 32nd, 35th and 47th parallels. To Lieutenant A. W. Whipple, Corps of Topographical Engineers, was entrusted the first official survey of the 35th parallel route, although Francois Xavier Aubry, a private trader, had been first to examine it in its entirety from New Mexico to California in 1852. Even before Aubry, Lieutenant James H. Simpson, in 1849, had explored from Albuquerque to Zuni. And two years after Simpson's exploration, Captain Lorenzo Sitgreaves, in 1851, had gone over the western section of the route, from Zuni to the Colorado. Lieutenant Whipple's party started from Napoleon at the mouth of the Arkansas River, June 24, 1853, and proceeded via Little Rock to Anton Chico, past Tucumcari to Albuquerque, through central New Mexico to the Mohave Villages, then up the Mohave River and over the Cajon Pass to Los Angeles and terminated in San Pedro the following spring. In Lieutenant Whipple's Report of Explorations for a Railway Route Near the 35th Parallel of Latitude, from the MIssissippi River to the Pacific Ocean, we read: "Nearly all the known passes are concentrated near the latitude of 35°, where the interference of the Coast Range with the Sierra Nevada had produced a succession of low broken ridges with valleys between ... a great portion of the route followed natural channels." Francois Xavier Aubry was starting eastward with a party on another of his several explorations for a road from California to New Mexico in the same month of 1853 that Lt. Whipple was heading westward along the 35th, or "Santa Fe", parallel of latitude. Mr. Aubry made notes of his trip and his diary entry of September 10, 1853 sums up the purpose of his trip: "September 10, At Albuquerque, New Mexico ... I set out ... upon this journey simply to gratify my own curiosity as to the practicability of one of the much talked-of routes for the contemplated Atlantic and Pacific railroad. Having previously traveled the Southern, or Gila, route, I felt anxious to compare it with the Albuquerque, or middle route. Although I conceive the former to be every way practicable, I now give it as my opinion that the latter is equally so, whilst it has the additional advantage of being more central and serviceable to the Union." It was this survey which marked out for the first time a practicable highway along the 35th parallel that has been used from that day to this. (For more than half a century the Santa Fe Railway has rolled its trains along this one-time Wagon Road.) Of this road, General Beale wrote: "... 
It is the shortest (route) from our western frontier by 300 miles, being nearly directly west. It is the most level, our wagons only double-teaming once in the entire distance, and that at a short hill, and over a surface heretofore unbroken by wheels or trail on any kind. It is well-watered! Our greatest distance without water at any time being twenty miles ... It crosses the great desert (which must be crossed by any road to California) at its narrowest point. It passes through a country abounding in game, and but little infested with Indians." And to prove that the route was as good in winter as in summer, Beale retraced it in 1858, going from the Colorado to Zuni in twenty-four days during January and February. It was on the westbound 1857 trek that Beale took the famous Camel Corps. The idea of using camels came to him while on a much earlier exploring trip in Death Valley with Kit Carson, as Beale in later years told his son. Beale never traveled so light that he did not have at least one good book in his pack, and during the Death Valley exploration, he chanced to be reading Abbe Huc's Travels in China and Tartary, which had a lot to say about the usefulness of camels in Asia. Beale was convinced that the introduction of these famous beasts of burden could rob the Arizona desert of half its terrors. David Dixon Porter was sent in 1855 to Tunis to "study camels." He also visited the Crimea where he met some English officers who reported enthusiastically on the service camels had rendered to General Napier. That was enough. Porter hurried to Alexandria and Smyrna, purchased 33 camels. All but one of them were landed safely at Indianola, Texas, in April 1856. Porter was immediately sent back to Asia Minor for 44 more which were debarked later that summer very seasick but alive. Some time afterward, the camels being acclimated and ready, General Beale, his men and his Camel Corps set out for Fort Defiance, -- there to begin his famous 1857 Wagon Road Survey to the Colorado River, which has been mentioned previously. "An important part in all our operations has been acted by the camels. Without the aid of this noble and useful brute, many hardships which we have been spared would have fallen to our lot; and our admiration for them has increased day by day, as some new hardships, endured patiently, more fully developed their entire adaption and usefulness in the exploration of the wilderness." Yes, Beale was enthusiastic about his Camel Corps, but others, unfortunately perhaps, were not. Two native cameleers had been imported with the camels, and as one old-timer put it, "he didn't know which smelled worse, them drivers or them animals." At any rate, the natives refused to accompany the surveying trip, the American muleteers never learned to respect the animals. So after a few years of vicissitudes, the Camel Corps was broken up -- auctioned off, let loose, disbanded. For some time, says his son, General Beale kept a few of the camels at his Rancho Tejon near Bakersfield. He remembers that it was one of his great pleasures as a boy to drive with his father from Tejon to Los Angeles in a sulky behind a tandem team of camels with whom the General could carry on a conversation in Syrian if the occasion arose. Some travelers, too, of a later year, shocked, surprised and scared to see what looked mighty like a camel wandering lonesomely in southwestern deserts have decided they saw a mirage, or perhaps indulged in a little too much "Taos Lightning" or other western firewater. 
But they could have seen real camels, at least until 1899, when it was estimated that the last survivor of the one-time Camel Corps had gone to join his ancestors. "A much larger area of cultivable lands, and a great frequency and extent of forest growth, exist between the Rio Grande and Colorado, on the 35th parallel, than on any other latitude throughout the Western States." Many historians, as well as engineering experts, firmly believe that had the Civil War not come when it did, the first transcontinental railroad, instead of being constructed over the historic but mountainous "Overland Route" (38th parallel), would have been laid down farther south, perhaps along the 35th parallel, below the barrier of winter snows and basically around, not over, the Rocky Mountains. After the Civil War, however, money for construction was in the North, and it was considered imperative to have the first railroad completely in "Union" territory. When you look out of your Santa Fe train window and watch the land fly by, you are looking at historic ground: There the Conquistador marched, the padre walked, the mountain man trapped, the ox-team strained, the soldier campaigned, the emigrant toiled, the engineers surveyed; and over the footprints of them all was built the Santa Fe! [Photo caption: Burros loaded with firewood passing the 300-year-old Governor's Palace, a landmark of early days in Old Santa Fe.]
<urn:uuid:1c6f8ae0-93b7-4f84-b2c1-9b4f5e539fdb>
CC-MAIN-2021-43
http://www.titchenal.com/atsf/ayw1946/paths.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00030.warc.gz
en
0.976931
3,437
3.671875
4
| Part of the series on| Popping is a street dance and one of the original funk styles that came from Fresno California during the late 1960s–1970s. The dance, first seen in the 1968 movie ''Chitty Chitty Bang Bang'' during a scene where actress and singer Sally Anne Howes acts as a music box doll, is based on the technique of quickly contracting and relaxing muscles to cause a jerk in the dancer's body, referred to as a pop or a hit. This is done continuously to the rhythm of a song in combination with various movements and poses. Closely related illusionary dance styles and techniques are often integrated into popping to create a more varied performance. These dance styles include the robot, waving and tutting. However, popping is distinct from breaking and locking, with which it is often confused. A popping dancer is commonly referred to as a popper. As one of the earliest funk styles, popping is closely related to hip hop dancing. It is often performed in battles, where participants try to outperform each other in front of a crowd, giving room for improvisation and freestyle moves that are seldom seen in shows and performances, such as interaction with other dancers and spectators. Popping and related styles such as waving and tutting have also been incorporated into the electronica dance scene to some extent, influencing new styles such as liquid and digits and turfing. As stated earlier, popping has become an umbrella term for a group of closely related styles and techniques that have often been combined or danced together with popping, some of which are seldom seen outside of popping contexts. However, the use of popping as an umbrella term has been criticized on the grounds that its many related styles must be clearly separated so that those who specialize in more specific styles aren't classified as poppers (ex: a waver, a tutter, a strober). It is often assumed that popping is a style of breakdance. This is due in large part to the movies Breakin' and Breakin 2: Electric Boogaloo. In these movies all styles of dance represented, (breaking and the funk styles: popping, locking, and electric boogaloo) were put under the "breakdance" label, causing a naming confusion. This caused the media to associate funk styles with hip hop music and assume that popping and electric boogaloo were the same as breaking. The difference between the two is that breaking originated in the Bronx, New York and is danced a lot on the floor while popping and boogaloo developed in various places in California and are danced almost entirely standing up. Popping is centered around the technique of popping, which means to quickly contract and relax muscles to create a jerking effect (a pop or hit) in the body. Popping can be concentrated to specific body parts, creating variants such as arm pops, leg pops, chest pops and neck pops. They also can vary in explosiveness. Stronger pops normally involve popping both the lower and upper body simultaneously. Normally, pops (or hits) are performed at regular intervals timed to the beat of the music, but the popper can also choose to pop to other elements of the song, or pop at twice or half the speed of the beat. To transition between poses, most poppers use a technique called dime stopping, common in robot dancing, which basically means to end a movement with an abrupt halt (thus "stopping on a dime"), after which a pop normally occurs. 
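To make the timing arithmetic above concrete, here is a small illustrative sketch (not from the article itself -- the helper function and the example tempos are assumptions for demonstration) of how the interval between hits changes when a popper hits on the beat, at half the speed of the beat, or at double speed:

```python
# Illustrative sketch only -- the function and example tempos are assumptions,
# not something drawn from the article.

def pop_interval_seconds(bpm: float, timing: str = "on-beat") -> float:
    """Seconds between hits for a given tempo and timing choice."""
    beat = 60.0 / bpm                                   # one beat, in seconds
    factor = {"half-time": 2.0, "on-beat": 1.0, "double-time": 0.5}[timing]
    return beat * factor

for bpm in (90, 120):                                   # tempo range typical for popping tracks
    for timing in ("half-time", "on-beat", "double-time"):
        print(f"{bpm:>3} BPM, {timing:>11}: pop every {pop_interval_seconds(bpm, timing):.2f} s")
```

At 120 beats per minute, for example, on-beat hits land every half second, while "ticking"-style double-time hits land every quarter second.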
To create variation, poppers often mix in other styles as well, such as waving or tutting, which creates a sharp contrast to the popping itself. Poses in popping make heavy use of angles, mime style movements and sometimes facial expressions. The lower body has many ways to move around from basic walking and stepping to the more complex and gravity defying styles of floating and electric boogaloo. Movements and techniques used in popping are generally focused on sharp contrasts and extremes, being either robotic and rigid or very loose and flowing. As opposed to breaking and its floor-oriented moves, popping is almost always performed standing up, except in rare cases when the dancer goes down on the knees or to the floor to perform a special move. Having its roots in the late 1970s funk era, popping is commonly danced to funk and disco music. Popular artists include Zapp, Dayton, Dazz Band and Cameo. During the 1980s, many poppers also utilized electro music, with artists such as Kraftwerk, Yellow Magic Orchestra, Egyptian Lover and World Class Wrecking Crew. More mainstream hip hop music was also employed by poppers during the 1980s, including Afrika Bambaataa, Kurtis Blow, Whodini and Run DMC. Today, it is common to see popping danced to more current music genres such as modern hip hop (often abstract/instrumental hip hop) and various forms of electronic dance music such as dubstep. Songs that are generally favored have a straight and steady beat at around 90-120 beats per minute, a 4/4 time signature and a strong emphasis on the back beat, normally by a snare drum or a drum machine. The pops performed by the popper normally occur on every beat or on the distinct back beats. The popper can also choose to follow the music more freely such as by timing the pops to the rhythm of a melody or other rhythmic elements. There are a number of dance styles that are commonly mixed with popping to enhance the dancer's performance and create a more varied show, many of which are seldom seen outside of popping contexts. They can be seen as separate styles related to popping or as a part of popping when using it as an umbrella term. - A style and a technique where you imitate film characters being animated by stop motion. The technique of moving rigidly and jerky by tensing muscles and using techniques similar to strobing and the robot makes it appear as if the dancer has been animated frame by frame. This style was heavily inspired by the dynamation films created by Ray Harryhausen, such as The Seventh Voyage of Sinbad (1958). - A style that imitates animatronic robots. Related to the robot style, but adds a hit or bounce at the end of each movement. - Boogaloo or boog style is a loose and fluid dance style trying to give the impression of a body lacking bones, partly inspired by animated movies and cartoons. It utilizes circular rolls of various body parts, such as the hips, knees and head, as well as isolation and sectioning, like separating the rib cage from the hip. It also makes heavy use of angles and various steps and transitions to get from one spot to the next. It was developed in 1975 by Boogaloo Sam. In the original boogaloo you do not pop, but combined with popping it becomes the electric boogaloo, the signature style of The Electric Boogaloos (the dance crew). - A style of popping in which the chest is isolated by being pushed out and brought back while flexing the chest muscles. 
As this movement is performed to the beat the popper can incorporate different moves in between the chest bop. When practiced the chest bop can be done at a double-time interval adding a unique effect to the move. - Crazy legs - A leg-oriented style focusing on fast moving legs, knee rolls and twisting feet. Developed in 1980-81 by Popin' Pete, originally inspired by the fast and agitated style of breaking by Crazy Legs from Rock Steady Crew. - Dime stopping - A technique of moving at a steady pace and then abruptly coming to a halt, as if attempting to stop on a dime. This is often combined with a pop at the beginning and/or end of the movement. - Floating, gliding and sliding - A set of footwork-oriented techniques that attempt to create the illusion that the dancer's body is floating smoothly across the floor, or that the legs are walking while the dancer travels in unexpected directions. This style encompasses moves such as the backslide, aka the moonwalk, which was made famous by Michael Jackson. - A ground move where the dancer imitates a lowrider car. The dancer drops to the ground with his/her knees inward (reverse indian style) and feet outward. He or she would move up, down, and around imitating the hydraulic movements of a lowrider auto. - Performing techniques of traditional miming to the beat of a song. Most commonly practiced are various movements with the hands as if one could hold on to air and pull their body in any possibly direction. Miming can also be used to allow a popper to tell a story through his or her dance. This style is often used in battles to show the opponent how they can defeat them. - A style imitating a puppet or marionette tied to strings. Normally performed alone or with a partner acting as the puppet master pulling the strings. - A style imitating the scarecrow character of The Wizard of Oz. This style is supposedly pioneered by Boogaloo Sam in 1977. Focuses on outstretched arms and rigid poses contrasted with loose hands and legs. - A style of popping that gives the impression that the dancer is moving within a strobe light. To produce this effect, a dancer will take any ordinary movement (such as waving hello to someone) in conjunction with quick, short stop-and-go movements to make a strobing motion. Mastering strobing requires perfect timing and distance between each movement. - Struttin is a dance style originating out of the City of San Francisco, CA in the 1970s. - A way of popping where the dancer pops at smaller intervals, generally twice as fast as normal. - Based on action figures such as G.I. Joe and Major Matt Mason, developed by an old member of the Electric Boogaloos called Toyman Skeet. Goes between straight arms and right angles to simulate limited joint movement. - Tutting/King Tut - Inspired by the art of Ancient Egypt (the name derived from the Egyptian pharaoh Tutankhamun, colloquially known as "King Tut"), tutting exploits the body's ability to create geometric positions (such as boxes) and movements, predominantly with the use of right angles. It generally focuses on the arms and hands, and includes sub-styles such as finger tutting. - Waving is composed of a series of fluid movements that give the appearance that a wave is traveling through the dancer's body. It is often mixed with liquid dancing. - A variety of intricate moves that create the illusion of separating, or isolating, parts of the body from the rest of the body. 
The most common types of isolation that poppers perform are head isolations, in which they seem to take their head out of place from the rest of their body and move it back in place in creative ways. - Salah (dancer) - Jonathan "Bionic Man - Ticking Tiny from the world famous LA breakers who dominated the west coast - Michael "Boogaloo Shrimp" Chambers - Steffan "Mr. Wiggles" Clemente - Suga Pop - Nam "Poppin'" Hyun Joon - Stephen "Skeeter Rabbit" Nichols - Boogaloo Sam - Popping John - AJ Megaman |Look up popping in Wiktionary, the free dictionary.| References and notes - Electric Boogaloos. ""Funk Styles" History & Knowledge". Retrieved 2007-05-15. - The popping category generally centers around the technique of popping, but much variation involving closely related styles is allowed. - "Popping Information". - Mr. Wiggles. "Move Lessons". Dance Lessons. Retrieved 2007-05-16. - The Book of Dance 2012 - Page 129 1409322378 "Tutting was originally inspired by Egyptian hieroglyphics – the name is an abbreviation for the Egyptian pharaoh Tutankhamun. A form of popping, tutting is all about creating right angles using the arms ..." - Full cast for Breakin' at IMDB. Accessed 2009-08-03. - Article on Mr. Wiggles. Sedgewick Ave. Accessed 2009-08-03. - Henderson, April K. "Dancing Between Islands: Hip Hop and the Samoan Diaspora." In The Vinyl Ain't Final: Hip Hop and the Globalization of Black Popular Culture, ed. by Dipannita Basu and Sidney J. Lemelle, 180–199. London; Ann Arbor, MI: Pluto Press, 200 - "From street to stage, Korean B boys rise to the nation's pride". Yonhap news. Accessed 2009-08-03. - HanBooks |Over the Rainbow. Accessed 2009-08-03. - D, Davey (May 15, 2012), "LA Loses Its Third Hip Hop Legend in a Month", ThugLifeArmy.com, retrieved November 8, 2013
<urn:uuid:e9d015ab-86ef-4afc-848e-f1a5c50aeb76>
CC-MAIN-2021-43
https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Popping.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00430.warc.gz
en
0.951144
2,711
2.796875
3
Discuss how John's prologue sets the agenda for the rest of John's narrative.
2) Purpose of John's Gospel
3) Identity and character of Jesus: The Word and Wisdom of God
4) Light and Darkness
5) Introduction to John the Baptist
6) Relationship between Jesus and God

John's prologue consists of 18 verses which introduce the key themes of his gospel. 'Numerous commentators see it functioning rather as an overture to an opera'. This suggests its importance in foreshadowing what is to come and drawing the reader in for the rest of John's narrative. The key themes that we will be exploring are: the identity of Jesus, the introduction of John the Baptist, light and darkness, and the glorification of Christ.

Purpose of John's Gospel
John makes it clear what the purpose of the overall gospel is from the start. 'To believe' …

Identity of Jesus
One of the main themes that the prologue addresses is the identity of Jesus. In the very first sentence of the Prologue, John introduces a new way of viewing God: 'the Word'. 'In the beginning was the word' (1:1). The other gospels start with Jesus' birth when he is incarnate, but 'John takes us back into the mists of eternity'. John reminds the readers from the start that he existed before time, mirroring Genesis 1. However, he goes even further and introduces the concept of God being the word: 'and the word was God'. 'The word was with God' refers directly to when God says 'let there be light' in Genesis, and it connects God, the creator of all things, with Jesus, saviour of the world. Context? The Greek for "word" is "logos", which used as a Christological title has connotations with Greek philosophy. 'In Stoic thought, Logos was Reason, the impersonal rational principle governing the universe'. However, as readers today we can assume that John is referring to Jesus rather than just a concept. 'Logos' is repeated a total of three times in the first sentence. This emphasises God's desire to speak literally through his word and the power to speak to humankind, as shown later on throughout John. For example, Peter says to Jesus 'You have the words of eternal life' (John 6:68). When John says 'the word became flesh', he makes it clear that 'the word' is Jesus. John does not refer directly to 'the word' again later in the gospel. However, 'The theology of the word dominates the gospel'. We see 'the word' being lived out through the stories he tells of Jesus, which are important to his character and identity. STORY EXAMPLE Not only is there a focus on Jesus' character but also on his identity and the way he views himself. 'It reveals the Word of God not merely as an attribute of God, but as a distinct Person within the Godhead'. This is unique to John, as the other gospels focus on just his character. Jesus' identity is very much at the centre of the gospel and John makes this clear. One example of this is through the well-known 'I am' sayings. This is something particularly unique to John: the seven statements that Jesus makes about himself, known as the 'I am' sayings, such as 'I am the living bread' and 'I am the door'. Not only is Jesus telling us directly how he views himself, but 'all the sayings in one way or another make it clear that Jesus is the way of life', reiterating verse 4 of the prologue: 'In him was life'. It is also important as Jesus identifies himself as God, which reiterates what it says in the prologue about the word, God and Jesus being interchangeable, as well as his purpose.
O'Grady goes further and says 'Jesus is not only the Word of God Incarnate but also the Wisdom of God incarnate'. This concept is related to Proverbs, which glorifies God's wisdom: 'For the Lord gives wisdom; from his mouth come knowledge and understanding.' This verse shows how we can see 'the word', i.e. Jesus, revealing the wisdom of God. His wisdom is shown throughout the gospel. An example of this in John is when Jesus says "everything that I learnt from my Father I have made known to you" (15:15).

Light and Darkness
'John's great prologue… repeatedly applies the mystical language of light to Christ'. This suggests the significance of the word 'light' representing God, repeated a total of seven times in the prologue alone. One key example that demonstrates this is 'The true light that gives light to everyone was coming into the world' (1:9). In this verse and throughout the prologue we see Jesus presented as the light entering the world. This sets the agenda for the rest of John's narrative, as this language is used throughout it. A key example of this can be seen in one of the 'I am' sayings, when Jesus says "I am the light of the world. Whoever follows me will never walk in darkness but will have the light of life" (8:12). This is particularly significant as Jesus directly tells us that he is the light of the world, so we know that it is not John's own words. This occurs again when Jesus says, "I have come as light into the world, that whoever believes in me may not remain in the darkness" (12:46). Community of Light?

Introduction to John the Baptist
John sets out in the prologue the significance of John the Baptist as a key character. 'There was a man sent from God whose name was John.' (1:6) The fact John was 'sent from God' (1:6) suggests his authority and calling and his ultimate purpose of preparing the way for Jesus. However, despite his importance, John makes it clear in verses 7 and 8 that he came only as a 'witness' to the light (Jesus). He makes this point twice, emphasising that Jesus is central to the prologue as well as the whole gospel. John's testimony starts straight after the prologue, from verse 19. 'The story reads as if the Baptist were telling this in retrospect and John the author is letting the Baptist now have the spotlight'. Although the focus turns to John, it is only in order for him to prepare the way for Jesus, so the listeners can focus on Jesus through hearing his testimony. Throughout John's gospel we hear of people's testimonies. They are important in pointing people to Jesus, as 'witness establishes the truth'. A key example of this in John's testimony is when he proclaimed 'Behold, the Lamb of God, who takes away the sins of the world!' (1:29). He speaks with authority and is confident of Jesus' identity. When John says 'Look, the Lamb of God!' (1:35), two of his disciples follow Jesus and become Jesus' first converts. This shows the impact John the Baptist had in leading others to Jesus. And throughout the rest of the gospel we see him lifting up Jesus. For example, when he says 'He must become greater; I must become less' (3:30), he makes it clear that it is not about himself but rather about making Jesus' name known, and instead he humbles himself before Jesus. Bruner explains it like this: 'John is the law of God in person; Jesus is the gospel of God in person'. This portrays the importance of John the Baptist in preparing the way for Jesus and lifting him up, and also highlights Jesus' role in lifting up God his father, much as John does.
This is a particularly interesting perspective, which leads us onto our next theme: Jesus the son of God and the relationship he has with his father.

Jesus the Son of God in close relationship with his Father
This theme is seen in the prologue with John referring to Jesus as 'the one and only Son' on two occasions. John also makes the connection between God and Jesus clear. He explains that Jesus '…came from the father' (1:14) and that he '… is himself God' (1:18). This sets the agenda for when Jesus is referred to as the son of God in other places in the gospel. However, not only does John explain their physical relationship, but he goes further when he says that Jesus 'is in closest relationship with the Father' (1:18). We can see this being demonstrated throughout John's gospel: 'The Son can do nothing of his own accord, but only what he sees the Father doing…' (5:19). This example 'stress(es) the dependence of the Son upon the Father, but other texts imply equality'. This suggests that they are interchangeable and connected. One example of this is 'the son, like the Father, gives life' (5:21). This suggests the tight bond between the son and the father, not only reiterating verse 18 of the prologue but also verse 1 with the 'logos' concept. We see Jesus communicating with God through prayer on several occasions, portraying their close relationship. Jesus says to God, "Father, glorify your name!" Then a voice came from heaven, "I have glorified it, and will glorify it again" (12:28). Jesus proclaiming this to God suggests the trust and belief that he has in him. The fact that God responds almost immediately again emphasises their close connection. As the time draws closer to Jesus' death, he looks towards heaven and prays, "Father, the hour has come. Glorify your Son, that your Son may glorify you" (17:1). This yet again portrays their relationship and shows that he intends the ultimate sacrifice of Jesus dying on the cross to glorify Jesus in order for God to be glorified. This prayer also 'contributes to the climax of the movement that brings Christ back to God', pointing to the time that Jesus and God will be reunited after he rises from the dead.

Glorification of Christ
Another theme in the prologue which sets the agenda for the rest of the Gospel is the glorification of Jesus. Unlike the other gospels, rather than giving us a family tree of Jesus' ancestors, 'John begins with a divine genealogy'. This draws the reader towards God's glory and reminds us that Jesus is divine, as opposed to focusing on his human ancestors. John's use of the word 'flesh' ('sarx' in Greek) in verse 14 not only suggests that the 'word' entered the world in human form through Jesus, but carries a 'sacrificial undertone.' This is later revealed through the sacrifice and offering of Jesus Christ on the cross as shown in chapter 19, leading to his resurrection in chapter 20, portraying his glory.

- Milne, B, The Message of John: Inter-Varsity Press: 1993
- Tasker, R.V.G, Tyndale New Testament Commentaries: The Gospel According to St. John: Inter-Varsity Press: 1960
- Köstenberger, A J, Encountering John: The Gospel in Historical, Literary and Theological Perspective: Baker Academic: 1999
- Burge, G M, The NIV Application Commentary: From Biblical text… to contemporary life: Zondervan Publishing House, 2000
- O'Grady, J, According to John: The witness of the Beloved Disciple: Paulist Press
- Maclaren, A, The Gospel According to St. John, Issue 3
- Ryken, L, Wilhoit, J C, and Longman III, T, Dictionary of Biblical Imagery: InterVarsity Press: 1998
- Bruner, Fredrick Dale, The Christbook: A Historical/Theological Commentary (Waco, Texas: Word Books, 1987), vol. 1
- Carson, D.A, The Gospel According to John: Apollos: 1991
- Morris, L, The New International Commentary on The New Testament: The Gospel According to John: William B. Eerdmans Publishing Company: 1971
- Wenham, D. and Walton, S. (n.d.), Exploring the New Testament: Volume 1: 2001

Notes:
- Bruce, John, 29
- Maclaren, A, Gospel, 1
- Kostenberger, A, John, 51
- O'Grady, According to John, 9
- Tasker, Tyndale, 42
- Wenham & Walton, NT, 251
- O'Grady, John, 57
- Ryken, Wilhoit, Longman, Biblical Imagery
- Burge, Application, 71
- Morris, John, 90
- Bruner, Christbook, 70
- O'Grady, According to John, 10
- Carson, John, 551
- Ryken, Wilhoit, Longman, Biblical Imagery, 456
- O'Grady, According to John, 64
<urn:uuid:df496456-6cac-4076-aec3-9b80a7c3dff8>
CC-MAIN-2021-43
http://paper-market.com/free-essays/analysis-of-johns-gospel-narrative/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00510.warc.gz
en
0.950265
2,863
3.453125
3
However, the term shearing by itself refers to a specific cutting process that produces straight-line cuts to separate a piece of sheet metal. Most commonly, shearing is used to cut a sheet parallel to an existing edge which is held square, but angled cuts can be made as well. For this reason, shearing is primarily used to cut sheet stock into smaller sizes in preparation for other processes. Shearing has the following capabilities: Sheet …

What is a metal shearing machine? A metal shearing machine is used for cutting sheet metal to size out of a larger flat stock or from roll stock. The metal to be cut is held in place with hold-downs. 90° cuts are positioned by a squaring arm with a scale on it or with a back gauge. Other angles are possible with an angle gauge.

What is sheared, and what materials are used? Metal shearing can be performed on sheet, strip, bar, plate, and even angle stock. Bar and angle materials can only be cut to length. However, many shapes can be produced by shearing sheet and plate. Materials that are commonly sheared include steel, brass and bronze. The shearing process uses three types of tool systems; they are used for shearing: …

Sheet metal cutting tools include aviation snip sets (left-, right- and straight-cut tin snips) and other hand shears for cutting sheet metal.
sheet metal shearApr 07,2021 sheet metal shearWEN 3650 4.0-Amp Corded Variable Speed Swivel Head Electric Metal Cutter ShearDEWALT Metal Shears Attachment,Impact Ready (DWASHRIR)Metal Shears Aviation Snip Set 3 Pack Tin Snips Cutters - Left,Right and Straight Metal Cutting SheKAKA Industrial Q01-5216,52-Inch Foot Stomp Shear,Solid Construction,High Precision Sheet MetSee a full list on amazonPlate metal shearing plate cuts up to 8 metres lengthPlate metal shearing is a simple and cost-effective method for cutting plate metal quickly.Our shearing machines can cut sheet metal plates up to 8 meters in total length.The machines develop a maximum pressing force of up to 600 Metric Tons.Shears are equipped with two blades each,with the lower blade being fixed to the machine table.The Dec 05,2020·4.Ingersoll Rand EC300 Air Nibbler.4.5/5.$$$.Sheet Metal Nibbler Reviews.1.Makita JN1601 Sheet Metal Nibbler Review.If youre looking for professional nibbler to cut deck plate,keystone plate,and other corrugated sheet materials,then Makita JN1601 is perfect for you.As you know Makita is one of the most popular power tools Author L.ChenPublish Year 2006Sheet Metal Cutting - ManufacturingSlitting is a shearing process in which the sheet metal is cut by two opposing circular blades,like a can opener.Slitting can be performed in a straight line or on a curved path.The circular sheet metal cutters can be driven,or the work may be pulled through idle cutters.Slitting usually produces a burrChina 3 Roller Sheet Metal Plate Hydraulic Roll Bending Rolling Machine,Plate Rolling Machine,Hydraulic Rolling Machine manufacturer / supplier in China,offering 3 Roller Sheet Metal Plate Hydraulic Roll Bending Machine W11-6*3200,Mechanical Shearing Machine,Q11 Series Metal Sheet Cutting Machine,Electric Shears From China Manufacturer,Hot Sale Auto Nc Hydraulic Sheet Guillotine Metal Shearing Machine Steel Cutter Plate Cutting Machine and China Sheet Metal Fabrication manufacturers - Select 2021 high quality Sheet Metal Fabrication products in best price from certified Chinese Metal Part,Sheet Metal suppliers,wholesalers and factory on ,page 3Get rid of shearing defects in 4 steps Gasparini IndustriesShearing is a metal fabricating process used to cut straight lines on sheet metal.Material is cut (sheared) between the edges of two opposed cutting tools.It works by first clamping the material with blank holders.During the shearing process,a moving blade comes down across a fixed blade with the gap between them determined by a required offset.Guillotine sheet metal shears 1/4'' X 10' - Hydraulic shearsThe guillotine sheet metal shears 1/4 can cut mild steel of 0.24 inches thick over a 10 feet cutting length.These hydraulic shears are robust,fast and with high level of shearing precision.The Holland Delem DAC360 CNC controller is included.This CNC controls shearing angle,distance and stroke.Pneumatic sheet support can be added as an option.Shearing Thickness capacity.Steel 30Kg/mm2 0.31 Mild Steel The small cutting machines by Fabri-cut are high performing.These machines are suitable for cutting ferrous metal sheets and plates in straight direction with oxy-acytylene or oxy-fuel gases.The only difference for both gases is of nozzles.This machine is moved on an aluminum track specially designed for this purpose.TheMetal Shearing Machines JMTUSAEquipmentAdvantagesBenefitsWhen it comes to shearing metal,JMT offers a selection of high quality,high production metal shearing machines for sale that are a cut above the 
competition.With our superior designed hydraulic shearing machine lineup,all possible sheet metal and plate cutting jobs and requirements will be met,by one or the other of our three series of shearing machines.Whether it be a need for a hydraulic shear consider our Variable Rake GuilloSee more on jmtusaSWE Subweld Engineering Fabrication Pte LtdCutting of standard length automatically once processing is done and leave operations easy to handle and manage.Punch holes on Plates,Angle,Brackets,'C' Channel,'I' Beam etc.Use for stub welding using Nelson Stud that are mainly use for cast-in embeds.Tthe most economical way of processing sheets to exact size.Metal Shearing Services - Sheet Metal Cutting Lapham Metal shearing is ideal for customers needing a quick way to size material before it moves to another process or when needing custom-sized pieces of sheet metal.Benefits of the shearing process include the ability to cut small lengths of material at any time since the metal shearing blades can be mounted at an angle to reduce the required The shearing process performs only fundamental straight-line cutting but any geometrical shape with a straight line cut can usually be produced on a shear.Metal shearing can be performed on sheet,strip,bar,plate,and even angle stock.Bar and angle materials can only be cut to length.People also askCan you cut sheet metal with shearing force?Can you cut sheet metal with shearing force?As mentioned above,several cutting processes exist that utilize shearing force to cut sheet metal.However,the term shearing by itself refers to a specific cutting process that produces straight line cuts to separate a piece of sheet metal.Sheet Metal Cutting (Shearing) - CustomPart.NetPlate Shear Cutting Services Penn StainlessShearing Setup for Cutting Stainless Steel Plate.Shear cutting utilizes a blade to cut through stainless steel plate and sheet.A shearing set up typically consists of back gauges,clamps,a lower blade,an upper blade.The back gauges and clamps hold the cutting material in place horizontally.The lower blade is fixed in place. The TruTool C 160 cuts sheet thicknesses of up to 0.06 inches in mild steel and 0.047 inches in stainless steel.pneumatic shear RR-8110 for sheet metal hand-held The RED ROOSTER RR-8110 sheet metal shears remove a narrow strip from the sheet to be cut.Pneumatic shear - All industrial manufacturers - VideosCompare this product Remove from comparison tool.pneumatic shear S 16-320Y,S 20-180Y.for metal sheets strap hand-held.pneumatic shear.S 16-320Y,S 20-180Y.Plate thickness 1 mm - 3 mm.The Deprag pneumatic sheet metal shears deliver high performance and reliability for cutting materials up toShearing Metal Cutting in Los Angeles,CA1/2 x 20'-0 Shear.Click to enlarge.Hansen Steel is based in the Los Angeles area of California and provides metal cutting and other steel processing services throughout the Western United States.Precision high speed shearing and squaring is completed on steel,aluminum,and stainless in a cutting edge,California metal shop. 
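The rated capacities quoted in the listings above (for example, a guillotine shear cutting 0.24 in mild steel over a 10 ft length, or the TruTool C 160 rated at 0.06 in mild steel and 0.047 in stainless) lend themselves to a simple capacity check before a job is scheduled. The sketch below is purely illustrative: the lookup table, function name and ratings structure are assumptions, not any vendor's published API, and real capacity also depends on material grade and machine condition.

```python
# Illustrative sketch only -- the dictionary, ratings and helper are assumptions
# based on figures quoted in the listings above, not a vendor API.

RATED_CAPACITY = {
    # machine: (max mild-steel thickness in inches, max cut length in inches or None)
    "guillotine shear 1/4 in x 10 ft": (0.25, 120.0),
    "TruTool C 160 (hand-held)":       (0.06, None),    # no fixed length limit
}

def job_fits(machine: str, thickness_in: float, length_in: float) -> bool:
    """Rough check that a mild-steel job is within a machine's quoted rating."""
    max_thickness, max_length = RATED_CAPACITY[machine]
    if thickness_in > max_thickness:
        return False
    return max_length is None or length_in <= max_length

print(job_fits("guillotine shear 1/4 in x 10 ft", 0.24, 96.0))   # True: within rating
print(job_fits("TruTool C 160 (hand-held)", 0.10, 24.0))         # False: too thick
```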
While shearing primarily is the process of cutting straight lines,we can cut your sheet metal to your unique specifications.This includes most geometric shapes.Even small lengths of material are no problem for us.To provide you with the exact metal product you need,shearingShearing Metal SupermarketsShearing is a standard service offered by Metal Supermarkets.Visit,call or email one of our 80+ worldwide locations to learn more about our shearing services.Please note all items require tolerances.On sheared sheet and plate,our tolerance is +/- 0.125 (all dimensions).On saw-cut items,our tolerance is -0/+0.125 (length only).Shearing of Sheet,Strip,and Plate Metalworking Sheet Shearing is a method for cutting a material piece into smaller pieces using a shear knife to force the material past an opposition shear knife in a pr. Shearing of Sheet,Strip,and Plate,Metalworking Sheet Forming,Vol 14B,ASM Handbook, Steel Heat Treating Technologies. Cordless Seam Metal Shear AK 3514-2 Ni-MH.up to 2.0 mm,or 4x0.9 mm steel,or 6x0.75 mm light metal.Cold cutting without flying sparks! This shear is ideal for cutting spiral ducts,roof seams,cross seams,ridge seams and folds of sheet iron,aluminum,lead,steel,zinc or stainless steel.Shears MetFab EquipmentThe Rebel and PB Series of metal shearing machines are manufactured using components that are recognized for their superior quality and are easily accessible on the open market.Our industrial hydraulic shears with pneumatic plate support allow the cutting of metal from ¼ to 1 and a length of 8 20 feet.After the welding,the frame is then processed by CNC Heptahedron at one time to ensure the rigidity and accuracy of the shears.Shears from your one-stop provider - KNUTHFor very long and thick materials,we recommend our hydraulic swing-beam plate shears that can handle cutting lengths up to 4000 mm (157) and steel plate thicknesses (st42) up to 16 mm (0.6).In addition to hydraulic plate shears,KNUTH offers motorized plate shears that offer a cost-effective alternative when working with smaller plate thicknesses. New Hydraulic Sheet Metal Shear machines To cut sheet metal and steel plate.Also Corner notcher and edge notching machines.A metal Shearing machine is used for cutting sheet metal to size out of a larger flat stock or from roll stock.The metal to be cut is held in place with hold-downs.Sheet Metal Shears - Shop Appliances,Tools,Clothing KAKA Industrial MS-20 20mm Sheet Metal Shears and Versatility Rebar and Rod Cutter Steel Cutter Flat Bar Steel Round Steel Metal. does not apply Manual Multi-purpose Sheet Metal Shear for Cutting Thin Metal Plates (8 Blade) Sold by Tekcom Shop USA.$35.67 $24.88.MD Hobby Craft M-D 57324 M-D 1 Ft.x 2 Ft.x .020 In.Cloverleaf Metal Sheet Metal Shears Northern ToolShop 13 Metal Shears at Northern Tool + Equipment.Browse a variety of top brands in Metal Shears such as Klutch,Jet,and Ironton from the product experts. Klutch Deluxe Foot Sheet Metal Shear 52in.L SHOP FOX Plate Shear 12in.,Model# M1041 Only $ 170.99 Portable 14 x 20 Gauge FINGER BRAKE BENDER Bending Sheet Metal.$385.99.Free shipping.or Best Offer.NICE!! 
HELLER MACHINERY 1/4 x 10' MECHANICAL POWER SHEAR.$11,499.00.or Best Offer.15 watching.Sheet Metal at LowesHillman 24-in x 24-in Cold Rolled Steel Solid Sheet Metal.These solid plain metal sheets have applications for gutter repair,auto repair,fireplaces,flashing,and duct work.These sheets have a hot-rolled steel finish making them weldable and ideal for repairs or fabrication.Tip gauge indicates metal thickness.View MoreSome results are removed in response to a notice of local law requirement.For more information,please see here.12345Next All material can be offered in stock sizes or cut to size.Grades of Plate and Sheet stocked 310 / 310S,321,304L,316L,Duplex,Super Duplex,Nickel Alloys.Other grades are also available upon request.Cutting Methods Available Plasma Cut,Laser Cut,Waterjet Cut,Saw Cut,Shearing/Guillotine.hydraulic sheet metal cutting machine - Metal Cutting Hydraulic Machine Sheet Metal Sheet Shearing Machine QC12K 6*3200 Elite Metal Cutting Machine/cnc Hydraulic Shear Machine For Metal Sheet / Plate /stainless Steelintl slmt cnc hydraulic aluminum plate shearing machine intl slmt cnc hydraulic aluminum plate shearing machine /hydraulic shearing machine parts .If you have any questions or good suggestions on our products and site,or if you want to know more information about our products,please write them and send to
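The cut tolerances quoted above (±0.125 in on all dimensions for sheared sheet and plate, and −0/+0.125 in on length only for saw-cut items) can be expressed as a simple acceptance test. This is a sketch for illustration only; the helper function and its name are assumptions, not part of any supplier's software.

```python
# Illustrative sketch of the cut tolerances quoted above:
#   sheared sheet/plate: +/- 0.125 in on all dimensions
#   saw-cut items:        -0 / +0.125 in on length (may run long, never short)

def within_cut_tolerance(nominal_in: float, measured_in: float, process: str) -> bool:
    deviation = measured_in - nominal_in
    if process == "sheared":
        return abs(deviation) <= 0.125
    if process == "saw-cut":
        return 0.0 <= deviation <= 0.125
    raise ValueError(f"unknown process: {process}")

print(within_cut_tolerance(24.0, 24.10, "sheared"))   # True
print(within_cut_tolerance(24.0, 23.95, "saw-cut"))   # False: undersize not allowed
```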
<urn:uuid:d702df77-0b87-4e14-9fcf-250f33f99f26>
CC-MAIN-2021-43
http://www.amazonasmagazine.es/plate/5a6bd7775a55907.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00350.warc.gz
en
0.823378
3,060
3.40625
3
Rock Drawings in Valcamonica The Rock Drawings in Valcamonica comprises one of the largest collections of prehistoric rock art in the world. It holds approximately 250,000 petroglyphs, divided over 6 separate locations in a valley in the Italian Alps. The rock art was created over a long period of time, starting around 8,000 BC with nomadic hunters. The greatest number was drawn by members of the Camunni tribe in the first millennium BC. Cosmological, figurative, and cartographic motifs are featured, in some locations forming monumental hunting and ritual 'scenes'. Among the most famous symbols found in Valcamonica is the so-called "Rosa camuna" (Camunian rose), which was adopted as the official symbol of the region of Lombardy. The drawings were first documented in 1909 by Walter Laeng, a Brescian geographer. In 1979, the site became the first Italian WHS. Map of Rock Drawings in ValcamonicaLoad map Visit December 2013 Valcamonica was my first stop on a very full day ticking off WHS in the Italian Alps. This valley lies just over an hour to the north of Brescia, and can be reached via a major road that passes numerous tunnels. I opted to visit the Naquane Park in Capo di Ponte, as it seemed to be the most worthwhile among the 6 locations. The park lies on a hill above the town, so you have to leave your car in the center (the closest parking is at the cemetery behind the train station) and walk up. The path is signposted. It's a climb of about 15 minutes. Of course I was the only visitor at the gate. The park distinguishes itself among Italian attractions as it has long opening hours all through the year. It's only closed on Mondays, but otherwise open each day from 8.30 a.m. into the late afternoon. I don't know how they handle it if there's a lot of snow - the path up and through the park would be too slippery I guess. And they would have to clear the snow from the rocks to keep the drawings visible. There's an entrance fee of 4 EUR, and I was welcomed with a leaflet in English and explanations about which route to take to see the major rock drawings. The most impressive ones are the numbers 50 (right at the beginning) and 1 (the biggest and earliest discovered). The Camunni appear to have been obsessed by deer with elaborate antlers, men with giant penises and shovels. At least these are the main objects they carved into the rocks. The 'shovels' (spades) are a recurring theme among other rock art sites, however their meaning is not clear. Since Paul's visit as described below, the site has improved on explanations. Each of the main groups of drawings now has an information panel, with text in both Italian and English and sketches of the drawings so you know what to look for. To make it even easier, in the top right corners of each panel there is a 'map' of the particular rock, with the drawings marked with red dots. The drawings are very clearly visible on rocks 1 and 50, but much less so on other rocks. For example rocks number 70 and 73 are said to display a god and houses, but I did not see anything. A couple of the rocks are also disturbed by footprints made by modern humans with shoes. I don't know when that happened, but they seemed very recent to me. I visited this WHS in July 2019 for the first time in my life (despite of living not so far from it - about 2 hrs by car). If you have Lombardia Carta Musei, Parco Nazionale delle Incisioni Rupestri di Capo di Ponte is included in your card (otherwise, ticket is 6€). 
Parco Archeologico Nazionale dei Massi di Cemmo has a free entrance. I was very lucky, since - by chance - that day there was a special night opening of Massi di Cemmo and Parco Archeologico Comunale di Seradina-Bedolina, with a guided tour with the Park director, so we could get very interesting explanations. Also, drawings are better seen with raking lights (i.e. early morning, sunset, or night with a raking lamp). So, we started with Massi di Cemmo, they're the easiest to see, the park is in the centre of the town (I guess you can also walk there from the station), they are 2 big vertical (drawings are on the vertical side) stones so this saved them from centuries of damages from snows, people walking and so on. Then we started walking with the director and a group to the Seradina Bedolina Archeological area. He guided us to the most important rocks (big horizontal rocks, so carvings are very thin) and talked about the different carvings, he also told that carvings here have different subjects from carvings at Naquane. Then, on Sunday morning, we went to Naquane (we walked there from our B&B, but there is a small parking lot a few hundred mts from the park entrance). We went there in the morning and I guess we were the first visitors, when we left there were many visitors and some groups. Carvings here, during the day, are more difficult to see than the ones in Massi di Cemmo and the ones seen during the night, here we have horizontal rocks again. The park is big, with good signs and some wood track to get near to some of the drawings you couldn't reach on your own (it's forbidden to walk on rocks). Then, we took our car and tried to see Parco Archeologico Coren delle Fate, it's not well signedposted, and it's not clear if you can drive along the road or where the car road ends and you have to park the car an walk. Anyway, at the beginning we ended up in a private backyard, the owner was very kind and gave us explanations to reach the correct place, but warned us that carvings were very light to see (and he was right). Then we tried to visit the Parco Comunale di Sellero, where we were supposed to see rocks and mines. Unfortunately, signs sent us to nowhere (on a very narrow road, until where we found a "private road, no trespassing" sing, and a "no park" sign). So we had to turn back and give up. Later, somebody we met in a local shop, told us you're allowed to visit that park only with guided tour on request (and on payment). (the official site doesn't mention it, since it writes the park is always free and open). So we came back to Capo di Ponte and visited the MUPRE museum http://www.mupre.capodiponte.beniculturali.it/ (it's included in the Naquane ticket, but it has shorter opening hours) where we could see more rocks with very good lighting, and items found in graves or archeological diggings. My opinion, about what I could visit, a very interesting site, but very poor communication (i.e. we discovered about the special night opening only because our B&B owner told us, the museum just wrote it locally on some leaflets, nothing on their webpage). (sorry for my poor English) Judy Steele Parolini April 2016 I visited Cape di Ponte on my University research project- an ethnographhic research on the area. In January 2016, I sent an email to the centre asking to purchase a visitors brochure before I came- (EU10) and received the reply in September 2016! 
April 2016 i went with my husband to visit the site and was told that I could not take a monopod- could not take video, but could take photographs. I returned to the car and put back the monopod, and set the camera on photo. Despite the attendant having received all my all my ID, including viewing my Italian passport he followed us around the park to 'watch' what I did! Very uncomfortable feeling- and how sad that visiting scholars should be treated as vandals. I visited this WHS in May 2014. After reading several reviews, I decided to visit the rock drawings of Naquane in Capo di Ponte. The park is open from 8:30am which was perfect to be able to check out the orange route first which takes about an hour. On a sunny day the first sun rays are visible on the most famous series of rock drawings Rock 50 and the Big Rock. There are reindeer, hunters, shovels, animals, geometric signs, hunting scenes, horses, carts, etc. Unlike the rock art I saw in Alta, Norway, the rock drawings of Valcamonica are not filled in with red ocre so the different engravings and iron tool marks are more visible in the main rock drawings. Some rock drawings are completely covered with vegetation or overgrowth or simply with mud. After the orange route which gives you a very good overview of rock drawings, another 45 min trail uphill takes you to the red route to see other rock drawings with one of the most famous being the running athlete drawing. However, an extra effort through a 35 minute downhill trail will lead you to my favourite rock drawings of Valcamonica - a series of pile dwellings - which was the next WHS I was going to visit later on. I travelled by train from Brescia to Capo di Ponte, and from the station climbed up to the park of Naquane as has been described below. Afterwards I returned to the village, crossed the river and found two further areas of rock engravings. The Massi di Cemmo comprises two large engraved rocks in a field. The Seradina-Bedolina area is much larger and involves much scrambling over rocks. But the engraved rocks are marked and easy to find. I have been once in Valcamonica, where are a prehistoric complex, realized in a period of 8000 years, extended in a 70 km long area and not yet completely explored, of 2400 rocks with 140000 engraved symbols and figures, that evoke scenes of navigation, dance, war, agriculture and magic. I have been to the sites of Foppe di Nadro, near Ceto, in the Regional Park of the Rock Drawings of Ceto, Cimbergo and Paspardo, and to the National Park of the Rock Drawings of Naquane, founded in 1955 near Capo di Ponte as the most important in Valcamonica, 30 ha large. In Naquane are 104 engraved rocks going from the Neolithic period to the early Christian age. 
The most important rocks are: the Big Rock 1, the most important in the valley, with about 1000 drawings with a labyrinth, armed men, sun symbols, cult objects, foots, birds, shovels, carts, warriors, riders, priests and scenes of hunting of deers, war, weaving and women's initiations from the Neolithic period to the Iron Age, the Rock 47, with a cart with wheels drawn by ox, fighting warriors, shovels and mythological figures, the rock 50 with armed men, riders, cart constructers, boats, birds, inscriptions, foots, temples, buildings, shovels, sun symbols and scenes from the Iron Age of a sun cult, the rock 35, with a small village, a smith, a runner with a plumed headdress, scenes of duels between armed men, the rock 32 with a female initiation scene from the Eneolithic Age, the rock 44 with axes, deers, warriors and shovels, the rock 6 with lances, the rock 99 with a Latin inscription and scenes of duels, the rocks 70 to 75 with huts, sun symbols, temples and an ancient god, the rock 23 with a cart with four wheels, the rock 60 with a rose and the rock 57 with deers and a plough. In Foppe di Nadro the most important rocks are: the rock 1 with scenes of a sun cult with prayers, armed men, duellists and medieval crosses, the rocks 4, 22 and 23 with halberds, daggers, axes, symbols and scenes of dancing and ploughing, the rock 6 with huts, stars, foots, duellists, warriors and symbols, the rock 24, with huts, dances, flautists, duellists, warriors, animals and symbols, the rocks 26 and 27 with duellists, warriors, riders, prayers, huts, temples, idols, dogs, inscription and symbols, the rock 35 with stars and the rock 36 with a bowman. This site is one of the most beautiful places I have ever seen because of the beauty and the antiquity of the drawings. It's absolutely worth to be visited (it's quite hard to get to the valley and you must walk a lot in woods for visit many drawings) because it is the most important complex of rock art in the world and justifies the inscription. Photo: Naquane - Big Rock 1 Having already visited the Scandinavian WHS Rock art sites at Tanum and Alta we didn’t want to miss the Italian site at Valcamonica during a visit to N Italy (though we still have to visit Coa Valley in Portuagl). We found information about it somewhat lacking/muddled however. The UNESCO Web site was (as usual – why don’t they have better directions/location details!) singularly uninformative about exactly where the site was and its “link” to http://www.rupestre.net/ seemed of more use to specialists/ “rock art buffs” than to passing tourists! We eventually traced the site to an area north of Brescia concentrated around the town of Capo di Ponte. It was clear also that there were several/numerous sites in the area so we just drove there and took our chances. On arrival in the Capo di Ponte area the first signposts led us to a museum “The Regional Reserve of Ceto-Cimbergo-Paspardo” at Nadro. On closer investigation this appeared to be a joint venture with a tour company called Kernunos Viaggi who wanted to take/guide us (at a cost!) to various nearby locations as per the museum title. We gave up on this and followed another sign which took us to the nearby entrance of the “Parco Archeologico Nazionale delle Incisioni Rupestri di Naquane”. We found the views of the rock art at this site adequate for our purposes – though no doubt if we had given the area a whole day then trips to other locations would have yielded some interest (but we wanted to get to the Opera at Verona that evening!) 
At Naquane you can follow various short footpaths among a series of rocky outcrops and, in an hour or so, see a reasonable variety of inscriptions. Unfortunately we don’t speak Italian and the site was not well presented for speakers of other European languages – we could only buy a guide in Italian for instance. The guide has rock art locations numbered from 1 to 103 (but many numbers are missing) – and perhaps 15/20 of these are worth spending time at – some of the numbered sites on the guide map also proved rather difficult to trace on the ground!. Ladders and platforms are provided to enable one to see awkwardly positioned carvings. Unlike the carvings at Tanum and Alta these are not highlighted by dye and, depending on the light at the time, some can be quite difficult to make out. The Valcamonica area is said to have around 140000 carvings done over a period of 8000 years from Neolithic to Iron Age (but over 80% are Iron Age i.e. from 800BC). Those at Alta (around 5000) are estimated as having been done from 4200BC to 500 BC (or even later) and those at Tanum (around 1500) from 1800 - 500BC i.e. primarily Bronze Age. Comparisons on such matters are invidious but we felt that the Valcamonica site was the least rewarding of the 3. It could have been so much better presented considering that they charge an entrance fee. - Walter Tarquinio_Superbo Craig Harder Stanislaw Warwas Coppi : - Antonio J. : - Christoph Argo David Berlanda Juropa Daniel C-Hazard Riccardo Quaranta : - Zoë Sheng Wojciech Fedoruk Peter Lööv Lucio Gorla JobStopar : - Els Slots Ivan Rucek Nan Randi Thomsen Hubert : - Clyde Jeffrey Chai Ran Philipp Peterer Philipp Martina Ruckova Svein Elias : - Solivagant Szucs Tamas Caspar Dechmann : The site has 6 locations The site has 10 connections 121 Community Members have visited.
Vocal nodules are small chronic swellings that appear at the junction of the middle and anterior thirds of the vocal fold. These swellings, or nodules (nodes), are vibratory injuries caused by vocal overuse. The most obvious symptom of medium-to-large nodules tends to be hoarseness. The top symptoms for nodules of any size may include: 1) difficulty with high, soft singing; 2) day-to-day variability of vocal capability and clarity; 3) a sense of increased effort to produce voice, especially for singing; 4) reduced endurance, so that the voice becomes husky or "tired" after less voice use than formerly; and 5) phonatory onset delays, when there is a slight hiss of air before the voice "pops in."

How nodules happen: When you overuse your voice, your body tries to cushion the vocal cords by pooling edema (fluid) beneath the vocal cord mucosa (the surface layer of the cords); this pooled edema is like a small, low-profile blister on your finger. If you stop overusing your voice, the edema disperses readily within a few days, and this "blister" on the vocal cords vanishes. If, however, the amount or manner of voice use remains excessive for many weeks or months, then more chronic swelling materials (no longer just edema fluid) are laid down by the body, and the vocal cords develop true nodules.

Why nodules affect the voice: In either case (acute swellings or chronic nodules), this injury to the mucosa can impair the voice in two ways: it reduces the vibratory flexibility of the mucosa, and it interferes with the accurate match of the cords when they come together to produce voice. This impairment causes the voice to be hoarse or, more subtly, to suffer from onset delays, difficulty with high notes, and other similar problems. Nodules will often dissipate, with the help of rest and perhaps speech/voice therapy, over a period of weeks or months. Sometimes, the swellings are so stubborn that surgery is required.

Audio with photos: the original page includes recordings of one patient's voice before surgical removal of the vocal nodules and again seven weeks after removal; that patient's photos appear just below.

Vocal nodules (1 of 6) Strobe light, phonation, open phase of vibration, at the pitch D5 (~587 Hz). There are vocal nodules on both vocal cords, of very long duration, even after voice rest and speech therapy. Compare with photos 3 and 5. Vocal nodules (2 of 6) Same as photo 1, but during the closed phase of vibration. The nodules keep the vocal cords from coming together completely (as seen here), making the patient's voice breathy. Compare with photos 4 and 6. Vocal nodules: 1 week after surgery (3 of 6) One week after surgical removal of the vocal nodules. Strobe light, phonation, open phase of vibration, at the pitch B5 (~988 Hz). (The small "blob" seen at the midpoint of the cords is just incidental mucus.) Vocal nodules: 1 week after surgery (4 of 6) Same as photo 3, but during the closed phase of vibration. Vocal nodules: 7 weeks after surgery (5 of 6) Seven weeks after surgical removal of the nodules. Strobe light, phonation, open phase of vibration, at the pitch C#6 (~1109 Hz). (Incidental mucus is obscuring the posterior end of the vocal cords.) Polypoid vocal nodules (1 of 4) Polypoid vocal nodules in a "vocal overdoer" with phenomenology typical for a mucosal injury. Narrow band illumination (blue-green light) makes vasculature more prominent. Note also the fusiform (long, low-profile) swelling, best seen on the left cord (right of image).
Polypoid vocal nodules (2 of 4) Phonation, strobe light, at the beginning of the closed phase of vibration; one can see that closure will be incomplete due to early contact of the polypoid nodules. Polypoid vocal nodules (3 of 4) Phonation, strobe light, closed phase of vibration, with persistent gaps anterior and posterior to the polypoid nodules. Vocal nodules, leukoplakia, and capillary ectasia (1 of 4) Abducted breathing position, standard light. Notice not only the margin swellings (nodules) but also the ectatic capillaries and the roughened leukoplakia. This person illustrates well the idea that vibratory injury can be manifested differently. Many express the injury more in the form of sub-epithelial edema and other changes; this person also has considerable epithelial change. Vocal nodules, leukoplakia, and capillary ectasia: 6 months later (3 of 4) Partial resolution of mucosal injury as a result of behavioral changes directed by a speech pathologist. Strobe light, open phase of vibration. Vocal nodules (3 of 10) Closed phase of vibration, with notable translucency of the right vocal cord (left of image), which is often a predictor of chronicity and only partial response to speech (voice) therapy. Vocal nodules: 1 week after surgery (7 of 10) Closed phase of vibration, strobe light, showing tiny margin elevations, bilaterally. Vocal nodules: 10 weeks after surgery (8 of 10) Prephonatory instant shows recurrent swelling due to persistent vocal overuse, despite careful preoperative preparation for surgery by a voice-qualified speech pathologist. Patients must know “we are only operating on your vocal cords, not your personality, occupation, friend group, social life, etc.” Vocal nodules (1 of 4) Vocal nodules, moderately large, seen with cords in abducted (breathing) position. Vocal nodules (2 of 4) Phonation, showing early contact of the nodules, and large gaps anterior and posterior to the nodules. Vocal nodules: after surgery (3 of 4) Phonatory position, after surgical removal. Note the straightened vocal cord margins. Vocal nodules (1 of 4) Vocal nodules, with cords in abducted (breathing) position. Note also a thin layer of mucus. Vocal nodules (1 of 3) Open phase of vibration just as vocal cords are also parting (to assume breathing position), strobe light. Note the small “spicule” nodules; these are at the other end of the continuum from “fusiform” or “broad-based” nodules. Vocal nodules (2 of 3) During phonation, strobe light, high-pitched voice, showing early contact of the spicule-form nodules. Vocal nodules (2 of 4) As the vocal cords approach each other to produce voice, note the pointed shape of these small nodules. Vocal nodules (3 of 4) Phonation, under strobe light, closed phase of vibration, at the pitch C4 (~262 Hz). This patient's voice is notably impaired. Vocal nodules, before surgery (1 of 4) Young sociable woman in sales, with chronic hoarseness due to broad-based “polypoid nodules.” Breathing position, standard light. Vocal nodules, before surgery (2 of 4) Making voice at C5 (~523 Hz), showing large swelling on the right cord (left of photo), and lower-profile one on the opposite cord. Vocal nodules, after surgery (3 of 4) Seven days after vocal cord microsurgery; breathing position, standard light. Although there is mild residual post-surgical inflammation of the left cord margin (right of photo), the voice is already markedly improved and normal-sounding. Compare with photo 1. 
Fibrotic nodules (1 of 5) This patient, a physical education instructor, has through vocal overuse developed the broad-based, rounded swellings seen here on each vocal cord. These swellings lack the watery or translucent appearance associated with edema swelling, because they are stiffer and more fibrotic. Fibrotic nodules: full-length vibration at low pitch (2 of 5) Phonation at low pitch, under strobe light, at moment of vocal cord contact (closed phase of vibration). At this pitch, the vocal cords are vibrating along their full length. (Ignore the small amount of whitish mucus.) Fibrotic nodules: full-length vibration at low pitch (3 of 5) Phonation at low pitch again, but now at the open phase of vibration. Fibrotic nodules: segmental vibration at high pitch (4 of 5) Phonation, very high pitch, closed phase of vibration. Now only the segment of the vocal cords indicated by the dotted lines is vibrating. Vocal nodules (1 of 2) Note that the nodules are not seen well during breathing (abducted position). Polypoid nodule, open phase (1 of 8) Man in mid-30’s with chronic hoarseness due to boisterous personality, and work voice demands. Open phase vibration, low pitch shows large left cord (right of photo) polypoid nodule. Polypoid nodule, right and left cords (3 of 8) Still at same low pitch, the early ‘closing’ phase shows the right sided (left of photo) polypoid nodule, and the larger left-sided lesion (right of photo). Segmental vibration (4 of 8) At high pitch, vibration is damped in mid and posterior cords, and only the anterior segment vibrates at arrows. Post-surgery wounds (5 of 8) Two hours after microsurgical removal of the lesions, the fresh, 3-mm “wounds” are seen at close range. Post-surgery, closed phase (6 of 8) View while making voice shows straight-line match of the vocal cord margins, and equal bilateral blurring, preliminarily suggesting preserved vibratory ability. Compare with photos 1-4. Post-surgery, open phase (7 of 8) Open phase of vibration at E-flat 4 (311 Hz). The patient was unable to make this pitch just 2 hours earlier. Compare with photo 3. Bilateral polypoid nodules (1 of 8) Bilateral polypoid nodules in a person who has used his voice extremely strenuously for many years. Note the whitish “surround” of the polyps, due to a broader area of submucosal fibrosis. The area of fibrosis is indicated by black dotted lines. Narrow-band lighting (2 of 8) At greater magnification, and also under narrow-band light. The area of fibrosis is more clearly seen, now without the dotted lines. Two weeks after surgery (5 of 8) Less than two weeks after surgical removal of the polyps. The faint white zone of margin fibrosis is again seen. Compare with photo 1. Phonation (6 of 8) Phonation under standard light shows that vocal cord margins now match, and both margins blur; suggesting vibratory flexibility. Margin fibrosis ( 7 of 8) Closed phase of vibration, at ~ A4 (440 Hz), as seen under strobe light. Margin fibrosis seen best here, indicated by the black dotted line. Compare with photo 3. Vocal nodules (1 of 8) Semi-professional high soprano with grossly impaired upper voice due to polypoid (fusiform) vocal nodule. Muscular tension dysphonia (2 of 8) Phonatory view shows a degree of muscular tension dysphonia (separated vocal processes), too. Post-op, one week (5 of 8) A week after surgical removal of the nodules, at the prephonatory instant, D5, showing margin irregularity. Open phase (7 of 8) Open phase of vibration (strobe light), at D5 (587 Hz). 
Irregular margins will iron out over time. Large vocal nodules (1 of 8) Bilateral large vocal nodules in a band singer who performs close-harmony musical styles. One week after surgery (5 of 8) A week after surgery, the "wounds" measure about 3 mm long (at arrows). Open phase (1 of 4) In a young pop-style singer, the open phase of vibration under strobe light at C#5 (554 Hz). This magnified view is best for seeing the large fusiform nodules. Closed phase (2 of 4) Closed phase of vibration at the same pitch shows touch closure—that is, that the nodules barely come into contact. Segmental vibration (3 of 4) Even when patients are grossly impaired in the upper voice as is the case here, the clinician always requests an attempt to produce voice above G5 (784 Hz), in order to detect segmental vibration. Here, the pitch suddenly breaks to a tiny, crystal-clear D6 (1175 Hz). Only the anterior segment (arrows) vibrates. Posterior commissure (4 of 4) A more panoramic view that intentionally includes the posterior commissure to show that the vocal processes, covered by the more 'grey' mucosa (arrows), do not come into contact. This failure to close posteriorly is a primary visual finding of the muscular tension dysphonia posturing abnormality. Tonsils enlarged (1 of 3) A singer with very large tonsils seen on either side of the photo as she sings A3 (220 Hz). The line of sight is looking straight down from the nasopharynx. Higher pitch (2 of 3) At an octave above, A4 (440 Hz), a slight pharynx contraction brings the tonsils closer together. Tonsils in contact (3 of 3) At nearly an octave higher again, G5 (784 Hz), the pharynx has contracted more (upper arrows), causing the tonsils to come into contact just out of the view (lower arrows)—hence the term "kissing tonsils." This phenomenon can often be seen by looking at the tonsils through the mouth on an "ah" vowel. Obvious mucosal injury (1 of 3) This young woman is hoarse, but two examinations elsewhere returned no significant findings. Her upper voice limitations during vocal capability testing already tell us "for certain" that there is mucosal injury, even before we look at the larynx. In this mid-range view, we can see early contact at the mid-cords, but the full extent and nature of the injuries are seen in the closer views that follow. Vocal nodules (2 of 3) At a more appropriate level of magnification, the vocal nodules are seen. But we want to know more… Vocal cord swelling and mucosa (1 of 4) This young "dramatic" soprano is also a bona fide vocal overdoer. Her vocal capabilities have been diminishing for over two years. In this medium-range view, note the rounded swelling of the right cord (left of photo), but more significantly, as we shall see, the increased vascularity and mottled appearance of the mucosa. Same view under strobe light (2 of 4) Under strobe light, at the open phase of vibration at C#5 (523 Hz), we see a projecting, polypoid swelling of the right vocal cord, but not yet the more difficult problem. Closed phase (3 of 4) The closed phase of vibration at the same pitch of C#5 shows the mismatch of the vocal cord margins. Is this the entire explanation for this patient's hoarseness? Read on. Glottic sulcus is visible (4 of 4) At close range and high magnification, the open mouth of a right-sided glottic sulcus is seen. This side can be operated on safely due to the excess, thick mucosa and would be expected to improve the margin match.
On the left (right of photo), a sulcus is also seen, but the thinner mucosa makes successful surgery on the left more challenging.
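The frequencies quoted in parentheses after each pitch above (for example D5 ~587 Hz, C#6 ~1109 Hz, Eb4 ~311 Hz) are not specific to these examinations; they follow from standard twelve-tone equal temperament with the common A4 = 440 Hz reference. A minimal sketch of that conversion is below; the function name and the MIDI note-number convention are illustrative assumptions, not anything taken from the source page.

```python
# Equal-temperament pitch-to-frequency conversion (A4 = 440 Hz reference).
# A name like "D5" or "C#6" is parsed as a note letter (optionally #/b) plus octave.
SEMITONE = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
            "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
            "A#": 10, "Bb": 10, "B": 11}

def note_to_hz(note: str, a4: float = 440.0) -> float:
    name, octave = note[:-1], int(note[-1])
    midi = 12 * (octave + 1) + SEMITONE[name]   # MIDI convention: A4 = 69
    return a4 * 2.0 ** ((midi - 69) / 12)

if __name__ == "__main__":
    for n in ("D5", "B5", "C#6", "C5", "Eb4", "G5", "D6", "A4", "C4"):
        print(f"{n}: {note_to_hz(n):.0f} Hz")
    # Prints roughly 587, 988, 1109, 523, 311, 784, 1175, 440 and 262 Hz,
    # matching the approximate values quoted in the captions above.
```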
Working Towards Year-round Vegetable Production

This is just one of a number of Greenhouse and Processing Crops Research Centre projects, as researchers continue to provide growers across Canada with new tools to improve their profitability and move into new markets.

By Dave Harrison

June 2015 – Ask just about any greenhouse vegetable grower in Canada for their wish list, and "year-round production" would be at the top of most pages. Retailers like the idea of a single supplier, and consumers throughout North America are favouring locally grown foods.

In Canada, low winter light levels are the challenge. And it's not as simple as saying supplemental lighting is the answer. It has to be cost-effective, and it has to take into account how the lighting affects the plants' nutritional needs and pest/biological interactions, among other considerations.

Researchers at the Greenhouse and Processing Crops Research Centre in Harrow, Ont., have undertaken an ambitious five-year project to develop year-round production models for tomatoes, cucumbers and peppers. The foundation for this research was laid more than a decade ago with projects related to extending the growing season. The report will be completed in 2018.

This project was just one of many discussed during our GPCRC tour earlier this year. As another example, and this time on the disease front, a major study is looking at a potential new fusarium threat for pepper growers, with Harrow researchers taking a very close look at the problem and identifying possible remedies.

Year-round greenhouse vegetable production in Canada is being advanced by a five-year research program now underway at the Greenhouse and Processing Crops Research Centre in Harrow, Ontario. Heading the study team is Dr. Xiuming Hao. The project is running through to 2018. "Greenhouses using supplemental lighting will allow growers to produce vegetables year-round," he explained. This is a very competitive market, and retailers are looking for year-round fresh produce supplies. "If we don't have year-round production," said Hao, "we risk losing some market share."

Hao and his team have already done considerable work with lighting and season extension. From 2004 to 2008, for example, they worked with year-round English cucumber and mini-cucumber production systems. They used HPS (high pressure sodium) lighting, and increased annual cucumber yields by 100 to 150 per cent in comparison to conventional systems not equipped with supplemental lighting. From 2009 to 2013, they worked on hybrid lighting systems that placed HPS lights above the crop, and LEDs within the crop canopy – a vertical lighting strategy. The LED bulbs don't get hot and can be placed close to the plants. This system ensured optimal – and more uniform – light distribution throughout the crop. In particular, the inter-lighting LEDs allowed the plant to maintain its vigour, resulting in higher fruit yield late in the season in comparison to use of HPS lighting only.

Plant growth, fruit yield and fruit quality are not only affected by the quantity of light, but also by its quality, or spectrum composition, said Hao. LEDs allow researchers to fine-tune that spectrum composition. Spectrum composition also affects certain compounds of the fruit. Hao and his team are working with Dr. Ron Cao and his colleagues at AAFC's Guelph Food Research Centre to analyze how different light spectrum compositions impact plant growth, fruit yields and antioxidant levels.
The research has already identified the proper ratio of HPS lighting on top to LED inter-lighting for both mini-cucumber and tomato crops.

Another part of the research is looking at longer photoperiod schedules. Tomatoes, for example, grow best with up to 16 or 17 hours of light. Any longer than that and growers risk leaf chlorosis, and the plant no longer responds. However, supplemental lighting is a major capital expense, a fixed cost. Whether growers run the lamps for 17 or 20 hours, they have still paid the same amount. If growers can use lighting for longer periods of time and have the plant respond accordingly without triggering leaf chlorosis, they can improve their yields and quality.

Hao is applying Dynamic Temperature Integration (TI) scheduling to accomplish this goal. TI involves a pre-morning or pre-night temperature drop. How do the leaves and fruit respond? The larger surface area of the leaf means it will have a quick drop in temperature, while the fruit temperature won't drop as much or as quickly because it has a larger volume. For example, the leaf may drop to 15 C, while the fruit drops only to 17 C, a 2 C difference. Since the fruit is warmer and the leaf is cooler, the balance shifts towards fruit growth, hence higher yields. Dynamic Temperature Integration is showing the potential to improve plant response during longer photoperiods. This temperature dip also allows growers to increase energy efficiency and reduce energy use, and because photoperiod injury is reduced, the plant responds much better to supplemental lighting. The research so far has looked only at tomatoes because they have the biggest problem with an extended photoperiod. Peppers and cucumbers will be next.

Supplemental lighting will also mean different crop nutritional requirements and more irrigation. The entire system of fertilizing and irrigation will have to be adjusted and optimized, said Hao, something the current five-year study will address. "Fertigation requirements under lighting are different from those under ambient light."

In another project, Hao and his team are involved in a five-year study of new greenhouse cover materials. They will be comparing regular coverings now in use with diffused polyethylene and polycarbonate materials, along with other new coverings. Diffused covering materials offer more even light distribution, deeper light penetration to the lower canopy, and less plant stress in summer. "Cover material has a great effect on the bottom line for growers," Hao noted. The study will look at the light transmission and energy conservation properties of the various coverings. "If you walk into a greenhouse with diffused glass or diffused poly, you'll notice there is very little shadow," Hao explained. "It offers better light distribution throughout the crop, top to bottom." Diffused light is more uniform over the vertical profile of the plant. It also means more uniform leaf temperatures, and will reduce the incidence of fruiting disorders caused by high temperatures. "The microclimate is so much better," said Hao.

Both of these five-year projects are being funded by AAFC; the AgriInnovation Programs of Growing Forward 2 from AAFC; and the Ontario Greenhouse Vegetable Growers.

NEW FUSARIUM THREAT

There's a new disease threat to greenhouse pepper crops, and plant pathologist Dr. Ray Cerkauskas is conducting multi-faceted research into it. He has been studying the etiology and control of Fusarium oxysporum. It was first detected in a few greenhouses in the fall of 2013.
Gillian Ferguson, the recently retired greenhouse vegetable IPM specialist with the Ontario Ministry of Agriculture, Food and Rural Affairs, brought in some infected plants. “When a problem like this comes in,” Cerkauskas explained, “you’re not sure if this is an existing disease or something new.” The symptoms include a canker at the base of the stem, foliar chlorosis in the early stages, extensive foliar wilt, stunting of the plant, and foliar necrosis in the later stages. The researchers found it to be a slow moving disease during the course of their study. They evaluated the plants in four ways – overall plant, stem, roots and crown – with a 0 to 5 scale for each part. They found that if the plant was diseased, the rockwool block pulls away very easily because there are few roots. There was also a dark discoloration of these roots compared to healthy plants. The four ratings were combined to provide an overall disease severity index. The first task was to identify the disease and the causal agent applying Koch’s postulates. (Robert Koch was an early microbiologist.) Koch’s postulates include four criteria designed to establish a causal relationship between a microbe and a disease. “You look at the plant and you look at the symptoms, and then do isolations from that plant tissue,” said Cerkauskas. A number of organisms were detected. Each was multiplied in culture to determine its cultural features. These cultures were then used to make a suspension that was utilized to inoculate healthy plants to see what kind of symptoms would be expressed. Once the same symptoms were observed on the newly infected plant, the targeted organism was then re-isolated from the diseased tissue to see if the same cultural features would occur. It has also been recently found in the Almeria region of Spain. That was the first report of this disease on greenhouse sweet pepper. The Harrow researchers are also testing a number of popular pepper varieties to determine how susceptible they are to the disease. Is there a pepper that may be more susceptible or more resistant to this disease? The disease severity index has been applied to each of them. The next question was with the host range of this fungus, i.e., are greenhouse tomatoes and cucumbers also susceptible to the fungus? Growers who had a pepper crop with a fusarium problem would want to know if it could affect different crops planted the next season, such as tomatoes or cucumbers. Growers are used to completely cleaning and disinfecting the greenhouse after the pepper crop is pulled out. As well, all infected debris should be removed from anywhere near the greenhouse. However, if some of the fungal material remained despite all the precautions, what happens to the new tomato or cucumber crops? In this study, Cerkauskas found there were no visible symptoms with cucumbers, and only some minor symptoms on the roots and crown of a few tomato plants. “Tomato is in the same family as pepper, so you might expect to see a little symptom development with them.” Cucumbers and tomatoes can be symptomless hosts, it was found, but the fungus is still present and colonizing the roots. Without a thorough end-of-season cleanup and the removal of all plant debris from those crops, it could lead to a subsequent infection of the next pepper crop. In another study, Cerkauskas is looking to see if the problem could be controlled with reduced-risk materials, including the newer class of fungicides and biologicals. 
Because this is a new disease, these products are not yet registered for control of it. And in a fourth study, the researchers are using molecular means to identify the exact type of fusarium causing the problem. The DNA is being extracted at Harrow to be analyzed at the Robarts Research Institute in London, Ont. This project is funded by the Ontario Greenhouse Vegetable Growers, and Agriculture and Agri-Food Canada. There were 230 acres of new greenhouse structures built in Ontario last year, and the majority were constructed in the Essex, Chatham/Kent region. Of last year’s expansion, 43 per cent was in tomatoes, 31 per cent was in peppers, and 26 per cent was in cucumbers. The province has 2,397 acres (959 hectares) in greenhouse vegetable production. Of that, 2,078 acres (841 ha) is in the Leamington area. The major crops include tomatoes (38 per cent), peppers (34) and cucumbers (27), along with lettuce and eggplants (one per cent). About two-thirds of total greenhouse vegetable acreage in Canada is in Ontario, followed by British Columbia (22), Quebec (seven) and Alberta (four). The past year was quite cold, noted OMAFRA greenhouse vegetable crop specialist Shalin Khosla, and growers had to deal with high heating bills. Similarly the 2014-2015 winter was cold. However, the 2014 production season was quite good for the three major crops. One industry challenge is with low prices due to the influence of other producing regions, such as Mexico, the southern U.S., and Europe. Growers are continuing to become more efficient to counter those pricing pressures. The average greenhouse vegetable operation in the province is 11.2 acres, maintaining a trend of increasingly larger greenhouses. “On average, pepper growers are larger than tomato growers,” said Khosla. “There are fewer pepper growers, but they have larger operations.” About 80 per cent of new greenhouses constructed in 2014 were built with glass. The new structures are tall, ranging from 21 feet to 25 feet. More growers are using hot water heating, and most are using at least one energy curtain and some have two. “It provides for a better energy management system.” The use of biomass as a fuel has stabilized. While the price of natural gas has dropped quite a bit in recent years, so too has the price of wood biomass. While it’s trickier to heat with biomass, there are still a number of growers using it because they’ve made the investment in these systems. Growers are increasingly energy conscious, said Khosla, spending more time and money optimizing its usage. The Ontario industry has dropped its energy utilization from 2.5 gigajoules per square metre to about two gigajoules per square metre. “Some growers are even lower than that.” Considerable attention is paid to water management, with growers quite aggressive in recycling their nutrient solutions. More and more growers are completely recycling their nutrient solutions. “Growers are working hard to ensure every drop of water is utilized by the plant,” he said. “They’re making sure the fertility program is perfect for the crop so there is minimum amount of wastage.” Ontario recently passed its Nutrient Management Act, in which growers are now able to apply used greenhouse nutrient solution to farmland. There are a number of regulations that have to be followed. “It’s a good option available to them to remove nutrient solutions from the greenhouse.” More automation is being utilized in the packing area and in moving materials throughout the greenhouse. 
Included are driverless cart systems following tracks from the greenhouse to the packing area, and robotic tipping, stacking and de-stacking units in the packing areas. Research on robotic harvesting is making headway, but it may still be a few years down the road, said Khosla. “There are several research projects underway, but it’s a very big task.” Further work on greenhouse automation is underway. “Greenhouse vegetable growers continue to strive to improve efficiency through innovation and improved management practices.” Cara McCreary is the newest member of the GPCRC greenhouse team. She took over in January from Gillian Ferguson, who retired earlier this year. McCreary was most recently at the Ridgetown Campus of the University of Guelph, where she served as a research associate in the edible bean program since 2012. She has a master of science degree in Environmental Biology from the University of Guelph, a bachelor of commerce degree in Business Administration from the University of Windsor, and an associate diploma in Horticulture from the University of Guelph. McCreary has several years experience as a greenhouse scout, work she began while attending Ridgetown. She first began scouting in a 52-acre tomato greenhouse operation. “That was quite an introduction to the industry,” she recalled during our interview. “One of the most spectacular things I’ve ever seen was when I walked into such a large greenhouse for the first time. It was pretty amazing.” She then worked on a number of research programs at the University of Guelph and at Ridgetown for a couple of years while taking science courses before embarking on her master’s degree studies at U of G. There she studied the life cycle, temperature-dependent development and economic impact of an agricultural pest, the bean leaf beetle. She’s already spent considerable time meeting with growers to better understand their pest management challenges. She has even found time to conduct some preliminary research focusing on the impact of supplemental lighting on predatory mites and pests. “There are a few other projects I’m hoping I can start later this year or next year.”
Intergovernmental Panel on Climate Change (IPCC) Special Report 2018[i]

The IPCC's Special Report on Climate Change was formally approved and accepted at Incheon, Korea last Saturday. The Report presents alarming findings on global risks of continued inadequate response to climate change. Despite the shabby figures released on Grand Final eve as to Australia's rising greenhouse gas emissions, Australian scientists played important roles in the IPCC's report. The IPCC's Vice-Chair is Professor Mark Howden from ANU's Climate Change Institute. Other drafting authors include eminent Australian scientists from across Australia and within CSIRO[ii]. The Report, moderate and technical in its approach, has dramatic implications.

The Summary for Policymakers contains four sections: A. Understanding Global Warming of 1.5˚C; B. Projected Climate Change, Potential Impacts and Associated Risks; C. Emission Pathways and System Transitions Consistent with 1.5˚C Global Warming; and D. Strengthening the Global Response in the Context of Sustainable Development and Efforts to Eradicate Poverty. We overview each section below.

A. Understanding Global Warming of 1.5˚C

The Report states that, at current rates, 1.5˚C warming will be reached in ~2030-2052, i.e. when our babies now are teenagers or, at latest, young adults. The climate-related risks are higher in a global warming scenario of 1.5˚C than they are at present, but they are significantly lower than if there were to be a 2˚C warming. The Report notes that many land and ocean ecosystems, and some of the services they provide, have already changed due to global warming and that such loss may now be long-lasting or irreversible. The Report states that upscaling and accelerating far-reaching, multi-level and cross-sectoral climate mitigation would reduce future climate-related risks, as would both incremental and transformational adaptation.

B. Projected Climate Change, Potential Impacts and Associated Risks

Climate models project robust differences in regional climate between warming of 1.5˚C and 2˚C. Sea level rise will persist beyond 2100, but with a lower rise if global warming is 1.5˚C rather than 2˚C, with consequent impact on terrestrial, freshwater and coastal ecosystems and fisheries. The Report clearly warns that "climate-related risks to health, livelihoods, food security, water supply, human security, and economic growth are projected to increase even with global warming of 1.5 ˚C and will increase much further with 2 ˚C.[iv]" This is sober reading indeed. The Report makes the obvious point that adaptation options, health risks and overall community cost will be reduced if warming is retained at 1.5˚C. At higher levels this adaptation itself becomes more challenging. Adaptations include coastal defence, irrigation, social safety nets, disaster management, water use, ecosystem restoration, biodiversity management and sustainable aquaculture.

C. Emission Pathways and System Transitions Consistent with 1.5˚C Global Warming

Limiting warming to 1.5˚C involves "deep reductions in emissions of methane and black carbon (35% or more by 2050 relative to 2010)[v]" and "also reduction of most of the cooling aerosols. In addition, targeted non-CO2 mitigation would involve reduction of nitrous oxide from agriculture, methane from the waste sector and sources of black carbon and hydrofluorocarbons[vi]".
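For a rough sense of scale, the percentage cuts quoted in this section can be annualized with a simple compound-rate calculation. The sketch below is purely illustrative arithmetic, not a figure from the Report; it assumes a constant year-on-year decline between 2010 and 2050, and the function name is our own.

```python
# Annual rate of decline implied by an overall percentage cut, assuming a
# constant year-on-year reduction over the whole period (illustrative only).
def annual_decline(total_cut: float, years: int) -> float:
    """total_cut = 0.35 means emissions end the period 35% below the start."""
    return 1.0 - (1.0 - total_cut) ** (1.0 / years)

# "35% or more by 2050 relative to 2010" for methane and black carbon:
print(f"{annual_decline(0.35, 40):.2%} per year")   # ~1.07% per year, every year
# The 75-90% industry emission reductions cited next, over the same 40 years:
print(f"{annual_decline(0.75, 40):.2%} to {annual_decline(0.90, 40):.2%} per year")
# ~3.41% to ~5.59% per year, sustained for four decades
```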
Achieving the 1.5˚C limit "requires limiting the total cumulative global anthropogenic emissions of CO2 since the preindustrial period", involving "rapid and far-reaching transitions in energy, land, urban and infrastructure (including transport and buildings), and industrial systems…[vii]" Such system changes "have occurred within specific sectors, technologies and spatial contexts, but there is no documented historic precedent for their scale"[viii] (emphasis added). They require a 75-90% reduction in industry emissions[ix]. Renewables would need to supply 70-85% of electricity[x]. Changed land and urban planning practices and deeper emission reductions in transport and buildings are required immediately[xi]. Vastly increased investment in mitigation is required[xii].

Now that presents a serious challenge. Who in Australia is 'on' to this challenge right now – today? Victoria's Climate Change Act 2017 (CCA) provided a long-term reductions target in Victoria of net zero greenhouse gas emissions by 2050 (s6 CCA), with the Premier and the relevant Minister responsible for ensuring this target is met (s8 CCA). The Australian Government, as a signatory to the Paris Agreement, agreed to pursue the global goal of holding warming to well below 2˚C, preferably below 1.5˚C. However, the government response to this IPCC Report has been anything but supportive. The Prime Minister deferred responsibility for the Report's demands by stating that "there are a lot bigger players out there".[xiii] The Deputy Prime Minister stated that the government will not change its policy because of "some sort of report"[xiv]. The Federal Environment Minister stated that it is "drawing a long bow" to call for coal power to phase out by 2050.[xv]

If there is an overshoot beyond a rise of 1.5˚C, the ultimate level of carbon dioxide removal that will then be required increases significantly. The Report, horrifyingly, notes that "understanding is still limited about the effectiveness of net negative emissions to reduce temperatures after they peak"[xvi]. It notes the obvious point that a return to 1.5˚C after any overshoot would be limited by the speed, scale and societal acceptability of carbon dioxide removal deployment[xvii]. It points to possible competition between required measures and the need for effective governance to limit and manage the required trade-offs[xviii]. Upscaling carbon dioxide removal after an overshoot of 0.2˚C may actually prove impossible[xix]. The Report warns that avoiding overshoot can only be achieved if CO2 levels start to decline before 2030, i.e. well before the babies become teenagers[xx].

D. Strengthening the Global Response in the Context of Sustainable Development and Efforts to Eradicate Poverty

The Report articulates the link between climate change impacts and achievement of sustainable development, balancing social well-being, economic prosperity and environmental protection[xxi]. It notes the need for consideration of equity and ethics to address unevenly distributed adverse impacts[xxii]. Underpinning the 1.5˚C warming target there need to be enabling conditions, including strengthened multi-level governance, institutional capacity, policy instruments, technological innovation, finance, and changed lifestyles and human behaviour[xxiii]. The Report points to education, information, and community approaches, including those informed by Indigenous and local knowledge, as mechanisms to accelerate progress, particularly when combined with policies and tailored to specific actors and contexts[xxiv].
It sees international cooperation as providing an enabling environment and as a 'critical enabler' for developing countries and vulnerable regions[xxv]. Public-private partnerships and multi-level governance across industry, society and scientific institutions will ensure transparency, participation, capacity building and learning[xxvi].

The Report is alarming. Clearly, a sluggish response is unacceptable. A modulated sense of panic permeates the careful technical contents of this most important report.

Copyright © Kellehers Australia Pty Ltd 2018. Liability limited by a scheme approved under Professional Standards Legislation. This fact sheet is intended only to provide a summary and general overview on matters of interest. It does not constitute legal advice. You should always seek legal and other professional advice which takes account of your individual circumstances.

[i] Intergovernmental Panel on Climate Change, Global Warming of 1.5˚C: an IPCC special report on the impacts of global warming of 1.5˚C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty, Summary for Policymakers (8 October 2018). Retrieved from http://report.ipcc.ch/sr15/pdf/sr15_spm_final.pdf. The full Report can be found at http://www.ipcc.ch/report/sr15/.
[ii] Professor Howden was a Review Editor of the Report. Drafting Authors included Professor Ove Hoegh-Guldberg, inaugural chair of Global Change Institute and Professor of Marine Science at University of Queensland, Dr Elvira Poloczanska, CSIRO's Marine and Atmospheric Research Division and Professor Petra Tschakert, Faculty of Science, University of Western Australia School of Agriculture and Environment.
[iii] Australian Government Department of the Environment and Energy, 'Quarterly Update of Australia's National Greenhouse Gas Inventory: December 2017: Incorporating NEM Electricity emissions up to March 2018', revised 18 May 2018. Retrieved from http://www.environment.gov.au/system/files/resources/7b9824b8-49cc-4c96-b5d6-f03911e9a01d/files/nggi-quarterly-update-dec-2017-revised.pdf.
[iv] Report, B5.
[v] Report, C1.2.
[vi] Report, C1.2.
[vii] Report, C1.3.
[viii] Report, C2.1.
[ix] Report, C2.3.
[x] Report, C2.2.
[xi] Report, C2.4. and C2.5.
[xii] Report, C2.6.
[xvi] Report, C3.3.
[xvii] Report, C3.3.
[xviii] Report, C3.4.
[xix] Report, D1.2. The challenges include 'cost escalation, lock-in in carbon-emitting infrastructure, stranded assets, and reduced flexibility in future response options…' D1.3.
[xx] Report, D1.
[xxi] Report, D2.1.
[xxii] Report, D2.2.
[xxiii] Report, C2.3.
[xxiv] Report, D5.6.
[xxv] Report, D7.
[xxvi] Report, D7, D7.1., D7.2., D7.3. and D7.4.
From 1940 to 1945, an estimated 1.3 million people were deported to Auschwitz, the largest complex of Nazi concentration camps. More than four out of five of those people—at least 1.1 million people—were murdered there. On January 27, 1945, Soviet forces liberated the final prisoners from these camps—7,000 people, most of whom were sick or dying. Those of us with a decent public education are familiar with at least a few names of Nazi extermination facilities—Auschwitz, Dachau, Bergen-Belsen—but these are merely a few of the thousands (yes, thousands) of concentration camps, sub camps, and ghettos spread across Europe where Jews and other targets of Hitler's regime were persecuted, tortured, and killed by the millions. The scale of the atrocity is unfathomable. Like slavery, the Holocaust is a piece of history where the more you learn the more horrifying it becomes. The inhumane depravity of the perpetrators and the gut-wrenching suffering of the victims defies description. It almost becomes too much for the mind and heart to take in, but it's vital that we push through that resistance. The liberation of the Nazi camps marked the end of Hitler's attempt at ethnic cleansing, and the beginning of humanity's awareness about how such a heinous chapter in human history took place. The farther we get from that chapter, the more important it is to focus on the lessons it taught us, lest we ignore the signs of history repeating itself. Lesson 1: Unspeakable evil can be institutionalized on a massive scale Perhaps the most jarring thing about the Holocaust is how systematized it was. We're not talking about humans slaying other humans in a fit of rage or a small number of twisted individuals torturing people in a basement someplace—this was a structured, calculated, disciplined, and meticulously planned and carried out effort to exterminate masses of people. The Nazi regime built a well-oiled killing machine the size of half a continent, and it worked exactly as intended. We often cite the number of people killed, but the number of people who partook in the systematic torture and destruction of millions of people is just as harrowing. It has now come out that Allied forces knew about the mass killing of Jews as early as 1942—three years before the end of the war. And obviously, there were reports from individuals of what was happening from the very beginning. People often ask why more wasn't done earlier on if people knew, and there are undoubtedly political reasons for that. But we also have the benefit of hindsight in asking that question. I can imagine most people simply disbelieving what was actually taking place because it sounds so utterly unbelievable. The lesson here is that we have to question our tendency to disbelieve things that sound too horrible to be true. We have evidence that the worst things imaginable on a scale that seems unfathomable are totally plausible. Lesson 2: Atrocity can happen right under our noses as we go about our daily lives One thing that struck me as I was reading about the liberation of Auschwitz is that it was a mere 37 miles from Krakow, one of the largest cities in Poland. This camp where an average of 500 people a day were killed, where bodies were piled up like corded wood, where men, women, and children were herded into gas chambers—and it was not that far from a major population center. And that was just one set of camps. 
We now know that there were thousands of locations where the Nazis carried out their "final solution," and it's not like they always did it way out in the middle of nowhere. A New York Times report on how many more camps there were than scholars originally thought describes what was happening to Jews and marginalized people as the average person went about their daily lives: "The documented camps include not only 'killing centers' but also thousands of forced labor camps, where prisoners manufactured war supplies; prisoner-of-war camps; sites euphemistically named 'care' centers, where pregnant women were forced to have abortions or their babies were killed after birth; and brothels, where women were coerced into having sex with German military personnel." Whether or not the average person knew the full extent of what was happening is unclear. But surely there were reports. And we know how the average person responds to reports, even today in our own country. How many news stories have we seen of abuses and inhumane conditions inside U.S. immigrant detention camps? What is our reaction when the United Nations human rights chief visits our detention facilities and comes away "appalled"? It's a natural tendency to assume things simply can't be that bad—that's undoubtedly what millions of Germans thought as well when stories leaked through the propaganda. Lesson 3: Propaganda works incredibly well Propaganda has always been a part of governance, as leaders try to sway the general populace to support whatever they are doing. But the Nazis perfected the art and science of propaganda, shamelessly playing on people's prejudices and fears and flooding the public with mountains of it. Hermann Goering, one of Hitler's top political and military figures, explained in an interview late in his life that such manipulation of the masses isn't even that hard. "The people can always be brought to the bidding of the leaders," he said. "That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country." Terrifyingly true, isn't it? This is why we have to stay vigilant in the face of fear-mongering rhetoric coming from our leaders. When an entire religion or nationality or ethnic group is painted as "dangerous" or "criminal" or "terrorists," we have to recognize that we are being exposed to the same propaganda used to convince Germans that the Nazis were just trying to protect them. Safety and security are powerful human desires that make it easy to justify horrible acts. Hitler was also great at playing the victim. While marching through Europe, conquering countries and rounding up millions of innocent people to exterminate, he claimed that Germany was the one under attack. Blatant anti-Semitic rhetoric surely fired up Hitler's core supporters, but the message to the average German was that this was all being done in the name of protecting the homeland, rather than a quest for a world-dominating master race. Lesson 4: Most of us are in greater danger of committing a holocaust than being a victim of one I had to pause when this realization hit me one day. As fairly average white American, I am in the majority in my country. And as strange as it is to say, that means I have more in common with the Germans who either committed heinous acts or capitulated to the Nazis than I do with the Jews and other targets of the Nazi party. 
That isn't to say that I would easily go along with mass genocide, but who's to say that I could fully resist the combination of systematic dehumanization, propaganda, and terrorism that led to the Holocaust? We all like to think we'd be the brave heroes hiding the Anne Franks of the world in our secret cupboards, but the truth is we don't really know what we would have done. Check out what this Army Captain who helped liberate a Nazi camp said about his bafflement at what the Germans, "a cultured people" allowed to happen: "I had studied German literature while an undergraduate at Harvard College. I knew about the culture of the German people and I could not, could not really believe that this was happening in this day and age; that in the twentieth century a cultured people like the Germans would undertake something like this. It was just beyond our imagination... – Captain (Dr.) Philip Leif - 3rd Auxiliary Surgical Group, First Army Some say that we can gauge what we would have done by examining what we're doing right now, and perhaps they are right. Are we speaking out against our government's cruel family separations that traumatize innocent children? Do we justify travel bans from entire countries because we trust that it's simply our leadership trying to keep us safe? Do we buy into the "Muslims are terrorists" and "undocumented immigrants are criminals" rhetoric? While it's wise to be wary of comparing current events to the Holocaust, it's also wise to recognize that the Holocaust didn't start with gas chambers. It started with "othering," scapegoating, and fear-mongering. We have to be watchful not only for signs of atrocity, but for the signs leading up to it. Lesson 5: Teaching full and accurate history matters There are people who deny that the Holocaust even happened, which is mind-boggling. But there are far more people who are ignorant to the true horrors of it. Reading first-hand accounts of both the people who survived the camps and those who liberated them is perhaps the best way to begin to grasp the scope of what happened. One small example is Supreme Allied Commander Dwight D. Eisenhower's attempt to describe what he saw when he visited Ohrdruf, a sub-camp of Buchenwald: "The things I saw beggar description. While I was touring the camp I encountered three men who had been inmates and by one ruse or another had made their escape. I interviewed them through an interpreter. The visual evidence and the verbal testimony of starvation, cruelty and bestiality were so overpowering as to leave me a bit sick. In one room, where they were piled up twenty or thirty naked men, killed by starvation, George Patton would not even enter. He said that he would get sick if he did so. I made the visit deliberately, in order to be in a position to give first-hand evidence of these things if ever, in the future, there develops a tendency to charge these allegations merely to 'propaganda.'" And of course, the most important narratives to read and try to digest are the accounts of those who survived the camps. Today, 200 survivors of Auschwitz gathered to commemorate the 75th anniversary of its liberation. They warned about the rise in anti-Semitism in the world and how we must not let prejudice and hatred fester. Imagine having to make such a warning seven decades after watching family and friends being slaughtered in front of you. 
Let's use this anniversary as an opportunity to dive deeper into what circumstances and environment enabled millions of people to be killed by one country's leadership. Let's learn the lessons the Holocaust has to teach us about human nature and our place in the creation of history. And let's make darn sure we do everything in our power to fend off the forces that threaten to lead us down a similarly perilous path.
10 Greatest Native American Chiefs And Leaders

If you live in the United States (and even if you don't) you've probably heard about a number of the country's prominent historical figures. But what about the history of those who were there before? Even many Americans know very little of Native American history. One of many overlooked aspects of Native American history is the long list of exceptional men who led various tribes as chiefs or war leaders. Just as noble and brave as anyone on the Mexican, British, or American sides, many of them have been swept into the dustbin of history. Here are ten of the greatest Native American chiefs and leaders.

10 Victorio

A member of the Apache tribe, Victorio was also the chief of his particular band, the Chiricahua. He was born in what is now New Mexico in 1809, when the land was still under Mexican control. For decades, the United States had been taking Native American lands, and Victorio grew up in turbulent times for his people. Because of that experience, he became a fearsome warrior and leader, commanding a relatively small band of fighters on innumerable raids. For more than ten years, Victorio and his men managed to evade the pursuing US forces before he finally surrendered in 1869. Unfortunately, the land he accepted as the spot for their reservation was basically inhospitable and unsuitable for farming. (It's known as Hell's Forty Acres.) He quickly decided to move his people and became an outlaw once again. In 1880, in the Tres Castillos Mountains of Mexico, Victorio was finally surrounded and killed by Mexican troops. (Some sources, especially Apache sources, say he actually took his own life.)

Perhaps more interesting than Victorio was his younger sister, Lozen. She was said to have participated in a special Apache puberty rite which was purported to have given her the ability to sense her enemies. Her hands would tingle when she was facing the direction of her foes, with the strength of the feeling telling how close they were.

9 Chief Cornstalk

More popularly known by the English translation of his Shawnee name Hokolesqua, Chief Cornstalk was born sometime around 1720, probably in Pennsylvania. Like much of the Shawnee people, he resettled in Ohio in the 1730s as a result of continuous conflict with invading white settlers (especially over the alcohol they brought with them). Tradition holds that Cornstalk got his first taste of battle during the French and Indian War, in which his tribe sided with the French. A lesser-known conflict called Lord Dunmore's War took place in 1774, and Cornstalk was thrust into fighting once again. However, the colonists quickly routed the Shawnee and their allies, compelling the Native Americans to sign a treaty, ceding all land east and south of the Ohio River. Though Cornstalk would abide by the agreement until his death, many other Shawnee bristled at the idea of losing their territory and plotted to attack once again. In 1777, Cornstalk went to an American fort to warn them of an impending siege. However, he was taken prisoner and later murdered by vengeance-seeking colonists.

Cornstalk's longest-lasting legacy has nothing to do with his actions in life.
After his death, when reports of a flying creature later dubbed the “Mothman” began to surface in West Virginia, its appearance was purported to have come about because of a supposed curse which Cornstalk had laid on the land after the treachery that resulted in his death. 8 Black Hawk A member and eventual war leader of the Sauk tribe, Black Hawk was born in Virginia in 1767. Relatively little is known about him until he joined the British side during the War of 1812, leading to some to refer to Black Hawk and his followers as the “British Band.” (He was also a subordinate of Tecumseh, another Native American leader on this list.) A rival Sauk leader signed a treaty with the United States, perhaps because he was tricked, which ceded much of their land, and Black Hawk refused to honor the document, leading to decades of conflict between the two parties. In 1832, after having been forcibly resettled two years earlier, Black Hawk led between 1,000 and 1,500 Native Americans back to a disputed area in Illinois. That move instigated the Black Hawk War, which only lasted 15 weeks, after which around two-thirds of the Sauk who came to Illinois had perished. Black Hawk himself avoided capture until 1833, though he was released in a relatively short amount of time. Disgraced among his people, he lived out the last five years of his life in Iowa. A few years before his death, he dictated his autobiography to an interpreter and became somewhat of a celebrity to the US public. Another Shawnee war leader, Tecumseh was born in the Ohio Valley sometime around 1768. Around the age of 20, he began going on raids with an older brother, traveling to various frontier towns in Kentucky and Tennessee. After a number of Native American defeats, he left to Indiana, raising a band of young warriors and becoming a respected war chief. One of his younger brothers underwent a series of visions and became a religious prophet, going so far as to accurately predict a solar eclipse. Using his brother’s abilities to his advantage, Tecumseh quickly began to unify a number of different peoples into a settlement known as Prophetstown, better known in the United States as Tippecanoe. One day, while Tecumseh was away on a recruiting trip, future US president William Henry Harrison launched a surprise attack and burned it to the ground, killing nearly everyone. Still angered at his people’s treatment at the hands of the US, Tecumseh joined forces with Great Britain when the War of 1812 began. However, he died at the Battle of the Thames on October 5, 1813. Though he was a constant enemy to them, Americans quickly turned Tecumseh into a folk hero, valuing his impressive oratory skills and the bravery of his spirit. Perhaps the most famous Native American leader of all time, Geronimo was a medicine man in the Bedonkohe band of the Chiricahua. Born in June 1829, he was quickly acclimated to the Apache way of life. As a young boy, he swallowed the heart of his first successful hunting kill and had already led four separate raids before he turned 18. Like many of his people, he suffered greatly at the hands of the “civilized” people around him. The Mexicans, who still controlled the land, killed his wife and three young children. (Though he hated Americans, he maintained a deep-seated abhorrence for Mexicans until his dying day.) In 1848, Mexico ceded control of vast swaths of land, including Apache territory, in the Treaty of Guadalupe Hidalgo. 
This preceded near-constant conflict between the new American settlers and the tribes which lived on the land. Eventually, Geronimo and his people were moved off their ancestors’ land and placed in a reservation in a barren part of Arizona, something the great leader deeply resented. Over the course of the next ten years, he led a number of successful breakouts, hounded persistently by the US Army. In addition, he became a celebrity for his daring escapes, playing on the public’s love of the Wild West. He finally surrendered for the last time on September 4, 1886, followed by a number of different imprisonments. Shortly before his death, Geronimo pleaded his case before President Theodore Roosevelt, failing to convince the American leader to allow his people to return home. He took his last breath in 1909, following an accident on his horse. On his deathbed, he was said to have stated: “I should never have surrendered; I should have fought until I was the last man alive.” 5 Crazy Horse A fearsome warrior and leader of the Oglala Sioux, Crazy Horse was born around 1840 in present-day South Dakota. One story about his name says that he was given it by his father after displaying his skills as a fighter. Tensions between Americans and the Sioux had been increasing since his birth, but they boiled over when he was a young teenager. In August 1854, a Sioux chief named Conquering Bear was killed by a white soldier. In retaliation, the Sioux killed the lieutenant in command along with all 30 of his men in what is now known as the Grattan Massacre. Utilizing his knowledge as a guerilla fighter, Crazy Horse was a thorn in the side of the US Army, which would stop at nothing to force his people onto reservations. The most memorable battle in which Crazy Horse participated was the Battle of the Little Bighorn, the fight in which Custer and his men were defeated. However, by the next year, Crazy Horse had surrendered. The scorched-earth policy of the US Army had proven to be too much for his people to bear. While in captivity, he was stabbed to death with a bayonet, allegedly planning to escape. 4 Chief Seattle Born in 1790, Chief Seattle lived in present-day Washington state, taking up residence along the Puget Sound. A chief of two different tribes thanks to his parents, he was initially quite welcoming to the settlers who began to arrive in the 1850s, as were they to him. In fact, they established a colony on Elliot Bay and named it after the great chief. However, some of the other local tribes resented the encroachment of the Americans, and violent conflicts began to rise up from time to time, resulting in an attack on the small settlement of Seattle. Chief Seattle felt his people would eventually be driven out of every place by these new settlers but argued that violence would only speed up the process, a sentiment which seemed to cool tempers. The close, and peaceful, contact which followed led him to convert to Christianity, becoming a devout follower for the rest of his days. In a nod to the chief’s traditional religion, the people of Seattle paid a small tax to use his name for the city. (Seattle’s people believed the mention of a deceased person’s name kept him from resting peacefully.) Fun fact: The speech most people associate with Chief Seattle, in which he puts a heavy emphasis on mankind’s need to care for the environment, is completely fabricated. It was written by a man named Dr. Henry A. Smith in 1887. 
Almost nothing is known about the childhood of one of the greatest Apache chiefs in history. In fact, no one is even sure when he was born. Relatively tall for his day, he was said to have stood at least 183 centimeters (6′), cutting a very imposing figure. A leader of the Chiricahua tribe, Cochise led his people on a number of raids, sometimes against Mexicans and sometimes against Americans. However, it was his attacks on the US which led to his demise. In 1861, a raiding party of a different Apache tribe kidnapped a child, and Cochise’s tribe was accused of the act by a relatively inexperienced US Army officer. Though they were innocent, an attempt at arresting the Native Americans, who had come to talk, ended in violence, with one shot to death and Cochise escaping the meeting tent by cutting a hole in the side and fleeing. Various acts of torture and execution by both sides followed, and it seemed to have no end. But the US Civil War had begun, and Arizona was left to the Apache. Less than a year later, however, the Army was back, armed with howitzers, and they began to destroy the tribes still fighting. For nearly ten years, Cochise and a small band of fighters hid among the mountains, raiding when necessary and evading capture. In the end, Cochise was offered a huge part of Arizona as a reservation. His reply: “The white man and the Indian are to drink of the same water, eat of the same bread, and be at peace.” Unfortunately for Cochise, he didn’t get to experience the fruits of his labor for long, as he became seriously ill and died in 1874. 2 Sitting Bull A chief and holy man of the Hunkpapa Lakota, Sitting Bull was born in 1831, somewhere in present-day South Dakota. In his youth, he was an ardent warrior, going on his first raid at only 14. His first violent encounter with US troops was in 1863. It was this bravery which led to him becoming the head of all the Lakota in 1868. Though small conflicts between the Lakota and the US would continue for the decade, it wasn’t until 1874 that full-scale war began. The reason: Gold had been found in the sacred Black Hills of South Dakota. (The land had been off-limits thanks to an earlier treaty, but the US discarded it when attempts to buy the land were unsuccessful.) The violence culminated in a Native American coalition facing off against US troops led by Custer at the aforementioned Battle of the Little Bighorn. Afterward, many more troops came pouring into the area, and chief after chief was forced to surrender, with Sitting Bull escaping to Canada. His people’s starvation eventually led to an agreement with the US, whereupon they were moved to a reservation. After fears were raised that Sitting Bull would join in a religious movement known as the Ghost Dance, a ceremony which purported to rid the land of white people, his arrest was ordered. A gunfight between police and his supporters soon erupted, and Sitting Bull was shot in the head and killed. 1 Mangas Coloradas The father-in-law to Cochise and one of the most influential chiefs of the 1800s, Mangas Coloradas was a member of the Apache. Born just before the turn of the century, he was said to be unusually tall and became the leader of his band in 1837, after his predecessor and many of their band were killed. They died because Mexico was offering money for Native American scalps—no questions asked. Determined to not let that go unpunished, Mangas Coloradas and his warriors began wreaking havoc, even killing all the citizens of the town of Santa Rita. 
When the US declared war on Mexico, Mangas Coloradas saw the Americans as his people's saviors, signing a treaty with them that allowed soldiers passage through Apache lands. However, as was usually the case, when gold and silver were found in the area, the treaty was discarded. By 1863, the US was flying a flag of truce, allegedly trying to come to a peace agreement with the great chief. However, he was betrayed, killed under the false pretext that he was trying to escape, and then mutilated after death. Asa Daklugie, a nephew of Geronimo, later said this was the last straw for the Apache, who began mutilating those who had the bad luck to fall into their hands.
The constitution declares the country to be a secular state but defines secularism as “protection of the age-old religion and culture and religious and cultural freedom.” The constitution stipulates every person has the right to profess, practice, and protect his or her religion. While exercising this right, the constitution bans individuals from engaging in any acts “contrary to public health, decency, and morality” or that “disturb the public law and order situation.” It also prohibits converting “another person from one religion to another or any act or conduct that may jeopardize other’s religion,” and states that violations are punishable by law. The criminal code sets five years’ imprisonment as the punishment for converting, or encouraging the conversion of, another person via coercion or inducement (which officials commonly refer to as “forced conversion”) or for engaging in any act, including the propagating of religion, that undermines the religion, faith, or belief of any caste or ethnic group. It stipulates a fine of up to 50,000 Nepali rupees ($430) and subjects foreign nationals convicted of these crimes to deportation. The criminal code also imposes punishments of up to two years’ imprisonment and a fine of up to 20,000 rupees ($170) for “harming the religious sentiment” of any caste, ethnic community, or class, either in speech or in writing. The law does not provide for registration or official recognition of religious organizations as religious institutions, except for Buddhist monasteries. It is not mandatory for Buddhist monasteries to register with the government, although doing so is a prerequisite for receiving government funding for maintenance of facilities, skills training for monks, and study tours. A monastery development committee under the Ministry of Culture, Tourism, and Civil Aviation oversees the registration process. Requirements for registration include providing a recommendation from a local government body, information on the members of the monastery’s management committee, a land ownership certificate, and photographs of the premises. Except for Buddhist monasteries, all religious groups must register as NGOs or nonprofit organizations to own land or other property, operate legally as institutions, or gain eligibility for public service-related government grants and partnerships. Religious organizations follow the same registration process as other NGOs and nonprofit organizations, including preparing a constitution and furnishing information on the organization’s objectives as well as details on its executive committee members. To renew the registration, which must be completed annually, organizations must submit annual financial audits and activity progress reports. The law prohibits the killing or harming of cattle. Violators are subject to a maximum sentence of three years in prison for killing cattle and six months’ imprisonment and a fine of up to 50,000 rupees ($430) for harming cattle. The law requires the government to provide protection for religious groups carrying out funeral rites in the exercise of their constitutional right to practice their religion, but it also states the government is not obligated to provide land grants for this purpose. There is no law specifically addressing the funeral practices of religious groups. 
The constitution establishes the government’s authority to “make laws to operate and protect a religious place or religious trust and to manage trust property and regulate land management.” The law does not require religiously affiliated schools to register, but Hindu, Buddhist, and Islamic religious schools must register as religious educational institutions with local district education offices (under the Ministry of Education, Science, and Technology) and supply information about their funding sources to receive funding at the same levels as nonreligious public/community schools. Religious public/community schools follow the same registration procedure as nonreligious public/community schools. Catholic and Protestant groups must register as NGOs to operate private schools. The law does not allow Christian schools to register as public/community schools, and they are not eligible for government funding. Hindu, Buddhist, and Muslim groups may also register as NGOs to operate private schools, but they too are not eligible to receive government funding. The law criminalizes acts of caste-based discrimination in places of worship. Penalties for violations are three months’ to three years’ imprisonment and a fine of 50,000 to 200,000 rupees ($430 to $1,700). The country is a party to the International Covenant on Civil and Political Rights. According to members of civil society groups, on August 27, one man was killed by police in Jhapa during a confrontation between police and the Muslim community after two persons were arrested for slaughtering cows. Police clashed with approximately 1,000 protestors on September 3 when they gathered in Lalitpur District to celebrate the Buddhist festival of Rato Machindranath in contravention of the government’s COVID-19 restrictions against festivals, large gatherings, or any nonessential activities. According to media reports, the crowd began to throw rocks and debris and to fire slingshots when police tried to stop them from pulling a five-story high ceremonial chariot through the streets. Approximately 650 Nepal Police and Armed Police Force officers responded with water cannons and tear gas and arrested nine protestors. The two sides clashed for four hours until community leaders and the Lalitpur Chief District Officer agreed on a compromise. District authorities imposed a day-long curfew enforced by armed police on September 5, the first unrest-related curfew in the Kathmandu Valley since November 2009. On March 23, according to media reports and religious groups, police in Pokhara arrested Christian preacher Keshav Raj Acharya for spreading misinformation about COVID-19. A February 21 YouTube video showed Acharya praying to “damn” the virus and stating that those who follow Christ would not become infected. The Kaski District Administration Office released Acharya with a 5,000-rupee ($43) fine for the COVID-19 related charges, but police kept him in jail and subsequently charged him with religious conversion and offending religious sensibilities. On April 19, the administration office set bail for these charges at 500,000 rupees ($4,300). On May 13, when Acharya was released on bail, he was immediately rearrested at the courthouse and transferred 400 miles to Dolpa District to face additional charges of religious conversion. On June 30, Acharya was released on 300,000 rupees bail ($2,600). 
Multiple religious groups stated that local police prejudice continued to factor heavily in the selective enforcement of the vague criminal code provision against “forced conversion.” In a July 18 letter to Nepal’s Attorney General, the International Religious Freedom Roundtable described Acharya’s arrest as “arbitrary” and “discriminatory” and called for charges against him to be dropped. According to media reports, police arrested two pastors on March 28, charging them with holding worship services in violation of COVID-19 restrictions. In the first case, Pastor Mohan Gurung was arrested in the Surkhet District of Karnali Province while he was talking with family members and assistant pastors who lived on church property with him. Gurung said “police jumped over the church gate, barged inside the premises, and accused [him] of holding a worship service” while he “was having family time, chatting, and studying the Bible.” In the second case, Pastor Prem Bahadur Bishwakarma was arrested in his church building, also in Surkhet District, while telling members of his congregation not to gather because of pandemic restrictions and showing them pictures depicting COVID-19 health precautions. Bahadur told the media that police officers using lathis (clubs) “charged at us” before arresting him. The two pastors were charged with violating the lockdown, disturbing the peace, and putting public health at risk. Both were released on bail on March 29. According to a Christian news portal, in February, the government deported two Japanese and three Taiwanese individuals for spreading Christianity on tourist visas. The local NGO INSEC (Informal Sector Service Center) stated that four Japanese and two Taiwanese were transferred to the Department of Immigration in Kathmandu in late February, but it could not confirm their deportations. According to civil society sources, during the year police arrested seven Jehovah’s Witnesses on two separate occasions in Pokhara for proselytizing. Two were U.S. citizens and five were Nepali citizens. The Nepali citizens were arrested on February 1 and released February 27 on 200,000 rupees ($1,700) bail per person. The U.S. citizens were arrested on March 17 and charged with religious conversion while they were visiting the house of friends, who were also Jehovah’s Witnesses. They were detained in police custody pending investigation for 11 days. On March 27, police released them due to COVID-19 protocols on 230,000 rupees ($2,000) bail each. On April 24, police recalled them and detained them until April 26, when the district court released them on an additional 200,000 rupees ($1,700) bail each, pending trial. The original 230,000 rupee bail was refunded to the U.S. citizens after they paid the second bail. As of the end of the year, their case was pending in Kaski District Court. According to the Society for Humanism Nepal, 35 individuals were arrested for cow slaughter in nine separate incidents through October. These arrests took place in eight different districts throughout the country. The government continued deepened restrictions on Tibetans’ ability to publicly celebrate the Dalai Lama’s birthday on July 6, stating the religious celebrations represented “anti-China” activities. Although authorities allowed small private celebrations of the Dalai Lama’s birthday in July, security personnel around these events outnumbered the Tibetan attendees. 
Similarly, Tibetans could only conduct other ceremonies with cultural and religious significance in private, such as Losar, the Tibetan New Year, and World Peace Day, which commemorates the Dalai Lama receiving the Nobel Peace Prize. Tibetan leaders urged Tibetans to respect government-imposed restrictions on public gatherings to combat the spread of COVID-19 by celebrating days of religious significance in private. Tibetan leaders organized small “official” commemorations of these occasions, which were subjected to heightened scrutiny from security personnel despite compliance with government-imposed COVID-19 restrictions. Civil society organizations said this scrutiny was the result of the government’s policy to treat all religious programs associated with the Dalai Lama as constituting “anti-China activities.” Abbots of Buddhist monasteries reported that monasteries and their related social welfare projects generally continued to operate without government interference, but they and other monks said police surveillance and questioning increased significantly during the year. Police continued to gather information from a 2019 circular sent to Tibetan institutes about Tibetan refugees studying in monasteries and nunneries. Tibetan Buddhist business owners also reported what they termed unwarranted police questioning about religious and social affiliations in their businesses and homes. Human rights lawyers and leaders of religious minorities continued to express concern that the constitution’s and criminal code’s conversion bans could make religious minorities subject to legal prosecution for actions carried out in the normal course of their religious practices, and also vulnerable to prosecution for preaching, public displays of faith, and distribution of religious materials in contravention of constitutional assurances of freedom of speech and expression. Human rights experts continued to express concern that a provision in the criminal code prohibiting speech or writing harmful to others’ religious sentiments could be misused to settle personal scores or target religious minorities arbitrarily. According to numerous civil society and international community legal experts, some provisions in the law restricting conversion could be invoked against a wide range of expressions of religion or belief, including the charitable activities of religious groups or merely speaking about one’s faith. Media and academic analysts continued to state that discussions on prohibiting conversion had entered into religious spheres in the country and that those seeking political advantage manipulated the issue, prompting religious groups to restrict some activities. According to legal experts and leaders of religious minority groups, the constitutional language on protecting the “age-old religion” and the prohibition on conversion was intended by the drafters to mandate the protection of Hinduism. Christian religious leaders continued to state that the emphasis of politicians in the RPP on re-establishing the country as a Hindu state continued to negatively affect public perception of Christians and Christianity. The RPP currently holds one seat in Parliament and civil society sources stated that it uses anti-Christian sentiment to garner populist support. (The country was a Hindu monarchy until 2007, when the interim constitution established a secular democracy.) 
Leaders of the RPP outside of Parliament continued their calls for the reestablishment of Hindu statehood and advocated strong legal action against those accused of killing cows. Kamal Thapa, chairman of the RPP, tweeted praise for the Prime Minister’s efforts to control conversion, criticized the government for not doing more, and likened conversion to an epidemic. Civil society leaders said pressure from India’s ruling party, the Bharatiya Janata Party (BJP), and other Hindu groups in India continued to push politicians in Nepal, particularly from the RPP, to support reversion to a Hindu state. Civil society leaders said what they characterized as right-wing religious groups associated with the BJP in India continued to provide money to influential politicians of all parties to advocate for Hindu statehood. According to NGOs and Christian leaders, small numbers of Hindutva (Hindu nationalist) supporters were endeavoring to create an unfriendly environment for Christians on social media and occasionally at small political rallies and encouraging “upper-caste” Hindus to enforce caste-based discrimination. Religious leaders said the requirement for NGOs to register annually with local government authorities placed their organizations at political risk, and one source reported their religious group was denied reregistration. Christian leaders expressed fears that changing obligations could potentially limit the establishment of churches, which must be registered as NGOs. As in recent years, the government did not recognize Christmas as a public holiday. The government, however, allowed Christians and Muslims time off from work to celebrate major holidays such as Christmas and Eid al-Adha, and continued to recognize Buddha’s birthday as public holiday. Christian leaders said the government-funded Pashupati Area Development Trust continued to prevent Christian burials in a common cemetery behind the Pashupati Hindu Temple in Kathmandu while allowing burials of individuals from other non-Hindu indigenous faiths. According to Christian leaders, the government continued its inconsistent enforcement of a court ruling requiring protection of congregations carrying out burials. Protestant churches continued to report difficulties gaining access to land they bought several years prior for burials in the Kathmandu Valley under the names of individual church members. According to the churches, local communities continued to oppose burial by groups perceived to be outsiders but were more open to burials conducted by Christian members of their own communities. As a result, they reported, some Protestants in the Kathmandu Valley continued to travel to the countryside to conduct burials in unpopulated areas. Catholic leaders reported that despite their general preference for burials, almost all Catholic parishioners continued to choose cremation due to past difficulties with burials. Many Christian communities outside the Kathmandu Valley said they continued to be able to buy land for cemeteries, conduct burials in public forests, or use land belonging to indigenous communities for burials. They also said they continued to be able to use public land for this purpose. Muslim groups stated Muslim individuals in the Kathmandu Valley continued to be able to buy land for cemeteries but said they sometimes faced opposition from local communities. According to Hindu, Buddhist, and Muslim groups, the government continued to permit them to establish and operate their own community schools. 
The government provided the same level of funding for both registered religious schools and public schools, but private Christian schools were not legally able to register as community schools. Although religious education is not part of the curriculum in public schools, some public schools displayed a statue of Saraswati, the Hindu goddess of learning, on their grounds. According to the Center for Education and Human Resource Development (previously the Department of Education), which is under the Ministry of Education, Science, and Technology, the number of gumbas (Buddhist centers of learning) registered rose from 111 in 2019 to 114. The department had 104 gurukhuls (Hindu centers of learning) registered during the year, up one from 2019. According to the Center for Education and Human Resource Development, 911 madrassahs were registered with district education offices, representing an increase of four from the previous year. Some Muslim leaders stated that as many as 2,500 to 3,000 full-time madrassahs continued to be unregistered. They again expressed apprehension that some unregistered madrassahs were promoting the spread of less tolerant interpretations of Islam. According to religious leaders, many madrassahs, as well as full-time Buddhist and Hindu schools, continued to operate as unregistered entities because school operators hoped to avoid government auditing and having to use the Center for Education and Human Resource Development’s established curriculum. They said some school operators also wished to avoid the registration process, which they characterized as cumbersome. Many foreign Christian organizations had direct ties to local churches and continued to sponsor clergy for religious training abroad.
Identify Invasive Plants in Gardens and Lawns: An invasive plant is any plant growing in places you don't want it to, growing in a way that makes it quite difficult to control. An invasive plant doesn't necessarily have to be a common garden weed, and invasive species aren't always ugly or unattractive. Many were introduced because they were beautiful or because of their rapid growth and effectiveness as groundcovers. Identification of invasive species is further challenged by the fact that many plants that are invasive in some regions are completely well-behaved in other regions. How to Tell if a Plant Species is Invasive? A lot depends on the garden that we are looking at. Bittersweet vines, for example, may look extremely attractive and desirable in some situations, but if they start dominating your home garden, they can be quite a problem. And other plants, like the obedient plant (Physostegia), begin as perfectly attractive landscape species that you intentionally plant, only to prove their invasive character a few months later, when their uncontrolled growth becomes apparent. Some of the invasive species on the list are extremely attractive to look at. Consider the burning bush (Euonymus alatus), an exotic (or "foreign") plant native to Asia. Few bushes put on a greater fall leaf show than this one. The vine sweet autumn clematis (Clematis terniflora) is another fall standout. Scotch broom (Cytisus scoparius) is a summertime favorite. Most Common Invasive Plant Species in Gardens Here is a list of the most commonly occurring invasive plants and weeds in lawns and gardens. Purple loosestrife is an invasive weed that has taken over marshes, swamps, and wetlands. Many individuals who have no idea what the plant's name is have seen it numerous times and commented on its beauty. In reality, it is a gorgeous plant, but when massed together, these plants propagate and spread rapidly. Purple loosestrife is supposed to have arrived in North America in the early nineteenth century, its seeds carried in soil used as ballast in sailing ships. It is now present in almost every region of the United States, with the exception of Hawaii and Alaska. These plants colonize wetlands by producing thick root mats which suffocate native plants, thus reducing animal habitat. The Japanese spirea is a tiny shrub that is native to Japan, Korea, and China. This invasive plant has naturalized across much of North America. Its spread has grown so out of control in certain areas that it is deemed invasive, and many are questioning how to control its spread. Managing Japanese spirea and other techniques of spirea control rely on understanding how this plant spreads and propagates. If you wanted to drive out weeds in your garden, you'd be thrilled to learn about a strong, beautiful ground cover which tolerates shade. That exactly describes English ivy. But there's a catch: English ivy is overly aggressive, earning it a position on the list of the worst invasive species. It can easily escape from landscape cultivation and therefore is recognized as a severe invader, particularly in the Pacific Northwest region of the USA.
There are three "bittersweet" species to be aware of: the bittersweet nightshade (Solanum dulcamara), the American bittersweet (Celastrus scandens), and the Oriental bittersweet (Celastrus orbiculatus). The Oriental bittersweet vine is certain to appear on most lists of the worst invasive plant species in North America. The other varieties can be invasive as well, although not as much as Oriental bittersweet. American bittersweet has beautiful red or orange berries which are frequently used in ornamental arrangements. Wisteria is similar to bittersweet in this regard: The gardener in North America must differentiate between the American wisteria vine (Wisteria frutescens) and its Chinese counterpart (Wisteria sinensis). While both varieties of wisteria are vigorous plants, the Chinese wisteria is highly invasive down to USDA hardiness zone 4. Sweet Autumn Clematis Sweet autumn clematis, like the previous three vines, is one of those "good-looking" species that may overpower a landscape. It is particularly troublesome in the East as well as the lower Midwest. While this plant has a really lovely scent, that is about the only thing it has going for it. Clematis paniculata is also known as sweet autumn clematis; however, it is a less invasive vine native to New Zealand. You must use caution while dealing with Clematis terniflora. Another beautiful, sweet-smelling plant which turns out to be a dangerous adversary is Japanese honeysuckle. This aggressive, fast-growing twining vine grows to 30 feet and has fragrant yellow flowers that bloom from June to October. It is utilized as ground cover when planted intentionally, although it is regarded as an exotic invasive species throughout the US Midwest. If you plant it in the garden, take special care to keep this plant in control, including vigorously pruning it back on a regular basis. When this plant escapes, its enormous weight can break tree limbs and destroy trees and shrubs by weighing them down with its thick vines. Kudzu vine belongs to the pea family. For a long time, kudzu has been used as animal fodder. However, this Asian perennial vine is among the worst invasive plants of all time, and therefore is frequently referred to as "the vine which ate the South." It is a massive issue in the Southern states of the USA. Originally planted to provide shade to porches on Southern plantations, this plant soon expanded to adjacent areas, where it now consumes virtually anything it comes into contact with. It thrives in both sun and shade and is severely invasive throughout the South, Southeast, as well as up and along the Atlantic coastline. A recent weed management attempt involves introducing goats into kudzu-infested regions and releasing them to graze their fill. Another common ground cover that may become invasive is the mat-forming ajuga, commonly known as bugleweed. Because of its attractive purple flowers and capacity to suppress weeds, ajuga is frequently used as a ground cover in shady areas. However, as it starts to take over the garden or lawn, many homeowners learn to detest it. Ajuga is particularly troublesome in warmer and hotter regions where there is little to no winter frost each year. Barberry bushes have attacked North America in two directions. Berberis thunbergii is from Far East Asia, while Berberis vulgaris is from Europe. These invasive plants are armed with thorns which have made them so valuable in many a hedge.
Berberis thunbergii, sometimes known as the Japanese barberry, is so invasive that it has been placed on a list of critically invasive species throughout most of the Midwest, implying that it shouldn't be planted at all! Burning bush offers a beautiful sight in fall, with its crimson or pinkish-red foliage. The beautiful foliage is complemented by reddish-orange berries. So, why is the burning bush regarded as one of the most despised alien plants by gardeners? This shrub is regarded as severely invasive over most of the northern United States, from Maine through Minnesota, and in the Southeast. Lantana is a broadleaf annual shrub native to the tropical regions that has become a significant invasive in Florida, Georgia, and California. However, it is not a threat in cooler areas north of zone 9, where it is commonly used in hanging baskets in gardens. However, in warm climates, it may readily leave gardens and naturalize in hazardous numbers. Butterfly bush is one of the most troublesome invasive plants in the Pacific Northwest region, where the growing conditions are similar to those found in its natural environment. It is also an invasive issue in the Southeast. It is less of an issue in regions colder than zone 6, because the plant dies down to ground level every winter. Butterfly weed (Asclepias tuberosa) is another plant that may be grown to attract butterflies. Butterfly bush is so named because it attracts butterflies and other pollinators, yet the plant has an unpleasant odor to humans. Privets, just like the barberry, are a common sight, and this familiarity might make it difficult for some people to write them off as invasive plants. But the fact that common privet is on the government's list of most invasive plant species is something to think about. Privet is considered outright invasive in much of the USA Midwest and Northeast from Pennsylvania to Maine. Privet's appeal comes from the fact that it can be trimmed well and can withstand the pollution that often afflicts plants in urban areas. However, because privet bushes grow so quickly, they may readily escape cultivation and become naturalized in the wild. Japanese knotweed is a clustering perennial plant with little landscaping value. Probably the best that can be stated about its looks is that it blooms in early fall with a fluffy-looking blossom (thus its alternate name, the "fleece flower"). Regardless of the views of 19th-century plant collectors, many 21st-century Westerners agree on one thing: Japanese knotweed is an unsightly annoyance and an easy selection as one of the worst invasive species. It is regarded as invasive in all states, but especially so in hardiness zones 5 to 9. Full-grown trees can also be invasive, as in the case of Norway maple, which is deemed invasive in most of the Northeast and potentially harmful in Maine, Vermont, New Hampshire, as well as Massachusetts. Initially introduced as a landscape tree, it produces seeds that are easily dispersed by wind, and it has naturalized in a range of habitats. Unlike the other invasive plants mentioned above, tansy is an herb, and it's quite harmful. Tansy is toxic, but it has a long history of use as a medicinal plant. Apart from being toxic, tansy plants are also highly invasive and can spread through both seeds and rhizomes. Resources for Identifying Invasive Plant Species Using your own gardening experience is one of the best ways to recognize common invasive garden plants.
If you are unsure about recognizing an invasive species, take a photo and contact specialists at the local cooperative extension office to assist you. You can also refer to the resources from, - Invasive Plant Atlas of the United States - EU Commission: Environment (in Europe) - U.S. Department of Agriculture - U.S. Forest Service - Center for Invasive Species and Ecosystem Health Experts may also be found in organizations like the Soil and Water Conservation, as well as Wildlife Departments, Forestry, and Agriculture Agencies. Most counties, particularly in agricultural areas, have weed control offices.
The article presents the basic concepts of non-atopic eosinophilic asthma as well as the endotypes and phenotypes of asthma in children and adults. Until recently, bronchial asthma (BA) in children and adults was defined as allergic (or atopic) and non-allergic (or non-atopic). The main difference between these two types of BA is based on the presence or absence of clinical symptoms of an allergic reaction and of sensitization according to allergy diagnostics (skin tests and determination of specific IgE (sIgE) antibodies to various allergens). However, thanks to the development and implementation of methods for sampling from the respiratory tract (induced sputum, cytology of nasal mucus), new analytical approaches for biological samples (evaluation of the respiratory tract microbiome and epigenetics), and new biostatistical methods, scientists managed to go beyond the old classification of atopic/non-atopic and eosinophilic/non-eosinophilic BA [1, 2]. What do we know today about eosinophilic asthma? It would seem that a detailed history of the disease, including concomitant (comorbid) diseases, a study of the function of external respiration with a bronchodilator test, a general blood test, the detection of IgE antibodies, and a peripheral blood eosinophil count would be enough information for the doctor, at the first stage, to diagnose a patient with allergic or non-allergic asthma. The clinical symptoms of both types of the disease are the same (wheezing, shortness of breath, a sensation of pressure in the chest, and cough), varying in the timing and intensity of manifestations and in the reversibility of bronchial obstruction. However, as early as ten years ago, scientists began to distinguish various subgroups of patients with BA based on the immunological features of the course of the disease, biomarker data, response to specific pharmacotherapy, and long-term prognosis. These are the so-called phenotypes – unique clinical characteristics or subtypes of BA. The study of pathophysiological processes in various asthma variants (phenotypes) gave rise to the concept of "endotypes" – "an asthma subtype characterized by a specific functional or pathophysiological mechanism" of development. Indeed, the results of cluster analysis revealed several heterogeneous asthmatic subgroups with different pathophysiology and different responses to treatment. Subgroups were classified into "endotypes" based on various traits, including sIgE levels, the number of eosinophils in induced sputum, and fractional exhaled nitric oxide (FeNO) [3–8]. As further studies have shown, "Th2-type inflammation in asthma is present in most, but absent in many."
In other words, it became clear that not one but several endotypes can participate in the formation of non-atopic BA. One can distinguish the Th2 endotype of BA, which includes allergic asthma and is closely associated with atopy, sIgE production, and eosinophilic inflammation. Several sub-endotypes may exist within this endotype (with high levels of IL-5, high levels of IL-13, or high levels of total IgE). The non-Th2 endotype is typical of BA patients who do not have atopy or allergy symptoms; that is, this endotype determines non-atopic asthma [1, 10, 11]. At the same time, this is also a heterogeneous group, which is associated more with neutrophilic inflammation in the airways and with cytokines such as IL-17, IL-1β, and TNF-α, as well as the chemokine receptor CXCR2 [12, 13]. Neutrophilic inflammation is associated with the development of bronchial hyperresponsiveness and airway remodeling, especially in patients with non-atopic BA. This type of inflammation, with a predominance of Th1 cells and neutrophils, is accompanied by a decrease in sensitivity to steroids, and neutralization of TNF-α restores it. Obviously, with this endotype, an atopic component of inflammation is also unlikely. Mixed Th2/Th17 Endotype Such an inflammatory mechanism has been described relatively recently in BA and implies the differentiation of Th2 cells into double-positive Th2/Th17 cells. This mixed endotype has been little studied. Thus, as can be seen from the above, most BA endotypes correspond to non-atopic asthma. As for BA phenotypes, previously accepted criteria (severe, mild, therapy-resistant asthma, etc.) have also undergone substantial changes. An attempt to unravel the asthma-allergy associations It is known that almost 50% of children experience shortness of breath in the first year of life, although only 20% will have symptoms of shortness of breath in later childhood [18, 19]. In some children, the "wheezing" phenotype persists until late childhood, while in others it continues into adolescence and adulthood. In 2008, PRACTALL experts proposed the following BA phenotypes in children: virus-induced, allergen-induced, unresolved asthma, and exercise-induced asthma. Only in this document do scientists refer to persistent BA in children without appropriate allergic sensitization as "unresolved asthma." It is important to note that, according to the analysis of bronchial biopsy samples obtained from children with wheezing (average age five years; range 2–10 years), pathomorphological changes (thickened basement membrane, increased numbers of eosinophils, and cytokine expression) did not differ between children with non-atopic and atopic wheezing. Phenotypes can change over time due to differences in the severity of symptoms and risk factors. Undoubtedly, therapeutic intervention can also change the course of the disease over time. According to A. Boudier et al., after 10 years of monitoring adult patients with BA (n = 3,320), the phenotype persisted in 78% of the study participants. The treatment of patients with non-atopic asthma does not differ from the treatment of allergic asthma. It includes inhaled corticosteroids (ICS) with the addition of long-acting β-agonists. To achieve disease control, additional therapy consists of increasing the dose of ICS, adding antileukotriene drugs, or adding theophylline. The response to drugs can vary significantly because it is not clear whether BA comprises a combination of different conditions or is a single condition with several mechanisms and phenotypes.
The heterogeneity of phenotypes and the differing responses to anti-asthma drugs, especially in young children and patients with severe BA, confirm the importance of personalizing therapy in each case. Thus, Fitzpatrick et al., in a recent study, showed that in preschool children with persistent BA, the weak short-term response to inhaled corticosteroids seen in patients with non-atopic asthma is associated with the fact that these drugs are primarily intended to suppress eosinophilic inflammation. Patients with severe asthma with a Th1 endotype and neutrophilia in induced sputum may benefit from macrolide therapy. In particular, a recent study once again showed that azithromycin reduces asthma exacerbations in both severe eosinophilic and non-eosinophilic asthma, which indicates the immunomodulating effect of macrolides. Recently, omalizumab has also been prescribed for non-atopic asthma, since in such patients the level of total IgE is often increased, including at the level of bronchial tissue. Recent advances in the treatment of patients with non-atopic asthma, but with clear signs of a high Th2 response, relate to drugs such as mepolizumab or reslizumab that block IL-5 [42, 43]. Approaches to the treatment of various asthma endotypes Patient I., 14 years old, consulted an allergist with an unspecified diagnosis: "Chronic bronchitis of unknown etiology." Life history: the family history of allergic and other chronic respiratory diseases is notable in that the mother had atopic dermatitis in her childhood; there were no further complaints. The girl was born from the first physiological pregnancy and a first spontaneous delivery, with an Apgar score at birth of 8/9 points. Past illnesses: infrequent acute respiratory infections and chickenpox. Coughing without wheezing first appeared at the age of 8. The cough was not associated with any provoking factors (contact with allergens, cold, physical activity). From the age of 13, nasal congestion appeared, with preservation of smell and without other symptoms of allergic rhinitis (itchy nose, runny nose, sneezing). According to X-ray and CT of the lungs, there is no evidence of chronic pathology; ultrasound of the abdominal organs is without pathology; the results of an allergological examination (skin tests and determination of sIgE to inhaled allergens) were negative; there is no evidence of gastroesophageal reflux. The general blood test showed no pathological changes. On examination, no abnormalities of the internal organs were detected. When the child's pulmonary function was examined, all flow-volume curve indicators were within the normal range. However, the test with the bronchodilator salbutamol (200 mcg) was sharply positive (FEV1 +22%). After two months, according to the mother, the child had coughing attacks less frequently, although they had not entirely stopped. An association with triggers was still absent.
|Origins of the Chinese Educational Mission| Break with Tradition Between 1872 and 1875, one hundred and twenty Chinese youths set sail for America to acquire a Western education and vocational training. They were sent there under a government-funded scheme known as the Chinese Educational Mission (CEM). Nothing like it had happened before in China's history. In a broader sense, the CEM turned a new page in China's relations with the West as well as in its ideas about education. Traditionally, China's relations with foreigners were driven by a deep-rooted belief in its own superiority. As Zhongguo 中国, "the Central Kingdom," and also the "Celestial Kingdom," whose Emperor claimed to be the Son of Heaven, China regarded itself as the center of civilization. Envoys from other nations were considered representatives of inferior barbarians who had to come kowtowing to the Emperor and bringing tribute to signify their submission. In past centuries, to acquire Chinese culture and learning, Japan, Korea and other countries sent thither their cohorts of scholars who were called liuxuesheng 留学生. The term roughly means "foreign-educated students," and previously only referred to foreign students coming to China. Thus, the CEM students represented the first liuxuesheng sent abroad by the Chinese Government. Because of their young age, they were known as liumei youtong 留美幼童, "the American-educated youngsters". The CEM also marked a break with the traditional Chinese curriculum and with the method by which candidates were usually selected for government posts. For centuries, the syllabus never deviated from the Confucian Classics - memorizing them and writing formal essays on their texts and commentaries. The "standard route" of entry to a government career consisted in passing the public examinations based on this curriculum at the local, provincial and national levels. However, under the CEM scheme, after completing their training in America, the successful graduates would be given junior ranks in the civil service. The origins of this radical scheme lay, firstly, in the conditions facing China, both externally and internally, and secondly, in the single-minded dedication of Yung Wing 容闳 (Rong Hong in Mandarin) - the man who conceived the scheme and brought it to fruition. Country in Crisis During the latter half of the 18th century, while the European powers were inventing new technologies and expanding their trade and empires abroad, China largely withdrew behind its walls and age-old traditions. Yet, it still considered Westerners barbarians and permitted only limited trade with them in Canton, under highly restrictive regulations. Its repeated refusal to open diplomatic relations with Britain and to recognize its Chief Superintendent of Trade caused increasing resentment. By the early 1800s, the introduction of cheap opium from British plantations in India had created huge demand and a spike in addiction amongst the Chinese populace. To get around the Chinese ban on importing opium, the merchants of the British East India Company, with the connivance of their government, resorted to smuggling the narcotic substance. When the Imperial Commissioner dispatched to Canton to crack down on the illegal trade ordered the destruction of a shipment of opium, his action touched off the so-called Opium War. Pressured by the powerful business lobby, agitated by the English press and spurred by unauthorized actions already taken by British agents on site, the U.K. 
Parliament dispatched a sizeable naval force to invade a country which only wanted to protect its own citizens and stamp out harmful contraband. Predictably, China's first armed conflict with the West proved disastrous. In the Treaty of Nanjing (1842), which ended the Opium War, followed by the Treaty of Tianjin (1858), which ended the Second Anglo-Chinese War, the victors imposed punishing terms. Besides owing some 30 million Mexican dollars in reparation, China had to cede Hong Kong and Kowloon to Britain, and open up 16 Treaty Ports to British merchants. British nationals and missionaries gained the right to live and work where they pleased, while enjoying the protection of British law on Chinese soil. The United States, France and other Western countries quickly followed and required China to sign bilateral treaties giving them similar privileges, thereby seriously eroding Chinese sovereignty. In 1860, when a combined French and British force invaded Peking, the Royal Court fled in panic to safety in Manchu territory. After trashing the Summer Palace and looting its priceless treasures with the intent of punishing the Court itself, the invaders burnt it to the ground. This great calamity exposed the Qing Government to be weak and helpless in the face of the political, commercial and cultural imperialism of the "foreign devils." At the same time, internally, China was being racked by severe social unrest which resulted in the Taiping Rebellion (1850 -1864) in the south, the Nian rebellion (1851-1868) in the northeast and the Muslim revolts (1855 -1873) in the northwest. Though they were eventually defeated, the resulting loss of life, property and security was enormous and the hold of the Qing regime was considerably weakened. Ironically, mercenaries led by Western officers and the use of steam gunboats and Western arms played a significant part in the final defeat of the Taiping rebels. This double threat facing the country impelled the more progressive officials to advocate a program of "self-strengthening". They believed that only by adopting Western techniques for making modern armaments and institutional changes could China defend itself against Western aggressors. During the 1860s, under the leadership of Prince Gong, Zeng Guofan 曾国藩, the Governor-General of Liangjiang, Li Hongzhang 李鸿章, his protégé and others, reforms slowly gained ground. The new Zongli Yamen 总理衙门 (Bureau of Foreign Affairs), was opened in 1861, as well as the College of Foreign Languages in 1862 and a school of Western languages and science in 1863. Foreign works on science, mathematics, mechanics, geography, history and international law were translated to spread the knowledge of Western techniques. Zeng Guofan played a central role in these reforms, which included the creation of shipbuilding and armaments factories. In 1863, Zeng summoned the foreign-educated Yung Wing for an interview and this fateful meeting opened the way for Yung to eventually present his scheme of sending youths to be educated in America. First Foreign University Graduate Yung Wing (1828 -1912) came from a poor Cantonese peasant family in Nanping, near the Portuguese outpost Macao. At the age of seven he was taken into a missionary-run boarding school in Macao, which he attended for four years. After the Opium War, he resumed studies in the newly-formed Morrison Education Society School (named after Dr. 
Robert Morrison, 1782-1834, the first Protestant missionary to China), which in 1842 relocated to Hong Kong, the freshly-acquired British colony. The School's principal and missionary teacher was Dr. Samuel Robbins Brown (1810-1880), a graduate (1832) of Yale College. Yung distinguished himself as a bright student with considerable initiative. In 1847 Dr. Brown brought Yung and two other Chinese students back with him to complete their secondary education in America. After graduating from Monson Academy in Massachusetts, Yung entered Yale College and graduated in 1854, being the first Chinese to do so from a foreign university. At Monson and Yale, Yung Wing received the best liberal education that America could offer. Boarding with Brown's relations at first, Yung got involved in campus and church life; he became highly Americanized, being a devout Christian, as well as an ardent believer in Western liberal thought. Nevertheless, he keenly felt for "the lamentable condition of China". During his final year, he pledged: "I was determined that the rising generation of China should enjoy the same educational advantages that I had enjoyed; that through western education China might be regenerated, become enlightened and powerful. To accomplish that object became the guiding star of my ambition."1 Following his interview with Zeng Guofan, Yung entered government service and in 1864 returned to the U.S. to purchase machinery for Zeng's new Jiangnan Arsenal. Soon after this endeavour, Yung's educational scheme for sending youths to America gained Zeng's strong support. Due to unforeseen setbacks, the memorial to recommend the scheme, jointly signed by Zeng and Li Hongzhang, could not be presented to the Throne until 1871. Unfortunately, Zeng died in March 1872, two months before official approval was secured, leaving Li Hongzhang to oversee its implementation. The Government's choice of the United States as the destination was largely due to the Burlingame-Seward Treaty of 1868. This was a pact signed by the American diplomat Anson Burlingame (1820-1870) serving as the plenipotentiary for China, and William Seward (1801-1872) for the USA. Unlike other treaties with the Western powers, this treaty put China and the U.S. on equal footing and thus assured the Qing Government of American goodwill. In particular, Article VII had a specific provision allowing the citizens of each country the reciprocal right to "enjoy all the privileges of the public educational institutions" in the other country. The final plan of the CEM contained these main elements: Despite the generous terms of the Government's offer, the wealthier families in the major cities of China showed little interest. This was probably owing to the entrenched prejudice against foreign countries and also to the 15-year commitment required. Perhaps for this reason, Zeng Guofan appointed Xu Run 徐润 (1838-1911), a Cantonese entrepreneur born in Xiangshan, and an early developer of modern industries, to oversee the recruitment of youths from the southern coastal areas.3. The population there, having had some prolonged contact with Westerners, was more aware of the advantages of a Western education, and from those communities the Chinese had traditionally emigrated overseas. Starting his career as an apprentice bookkeeper and comprador in the British firm, Dent and Company, and subsequently becoming a business magnate, Xu Run was a living exemplar of the benefits of Western training and work experience. 
Given the initial lack of response from the public, Yung Wing himself toured Hong Kong, Macau and the towns and villages of Guangdong looking for recruits. With his many connections in the Pearl River Delta region, Xu Run would have done the same, even though Yung was strangely silent in his memoirs about Xu's recruitment effort. To be eligible, candidates aged between 12 and 15 had to come from good families, pass a medical examination and a test of their ability to read and write Chinese, and―for those who had studied English―a test of attainment in that language as well.4 By one account, the boys were also given an interview and a test of their manual and practical skills.5 The tests seem to have been held in Hong Kong and other centres. The final tally of the 120 students by province was:

In the summer of 1871, the CEM set up a preparatory school in Shanghai. There the students were taught both English and Chinese, tested and screened, and only the best were selected to go abroad. Yung Wing travelled some weeks ahead of them to make preparations for their placement in New England schools. On August 11, 1872, the first detachment of 30 boys departed from Shanghai on their epoch-making voyage.

1. Yung Wing (1909), 40-41.
3. Xu Run's Xu Yuzhai Zixu Nianpu 《徐愚斎自叙年谱》 (Autobiographical Chronicle of Xu Yuzhai), Documents of Recent Chinese History Series, No. 51, Taipei: Wenhai Publishing (1978), contains the earliest-known roster of the CEM's 120 recruits arranged in their four detachments.
4. Yung Wing (1909), 184.
<urn:uuid:89d22416-8e00-4453-9f08-53621bea9336>
CC-MAIN-2021-43
https://cemconnections.org/index2.php?option=com_content&task=view&id=29&pop=1&page=0
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00390.warc.gz
en
0.967102
2,531
3.28125
3
Giordano Bruno (1548 – February 17, 1600) was an Italian philosopher, priest, cosmologist, and occultist. He is known for his system of mnemonics based on organized knowledge, his ideas on extrasolar planets and extraterrestrial life, and his support of Nicolaus Copernicus's heliocentric model of the solar system. Like other early thinkers seeking a more reasonable view of the universe, Bruno adopted a model of the world comprising some aspects that have been incorporated into the modern scientific model and others, such as his animistic cosmology and disdain for mathematics, which are inconsistent with the modern scientific model. Due to his highly unorthodox and strongly-held views, Bruno left the Dominican priesthood and Italy in search of a stable academic position in other European countries. Aside from seven productive years in France, however, he was unsuccessful in finding an academic institution willing to permit him to teach his ideas. Returning to Italy he continued to promote unorthodox views in the face of the then-strong Roman Inquisition, which jailed him for six years, convicted him of heresy, and burned him at the stake, hanging upside-down, gagged, and naked on February 17, 1600. The Roman Inquisition killed Bruno essentially because his ideas were deemed to be too potentially disruptive of the social order and also because he was too successful in promulgating them. Such ruthless actions were noticeably ineffective in stemming the rising tide of a new worldview whose time had apparently come. Born at Nola (in Campania, then part of the Kingdom of Naples) in 1548; he was originally named Filippo Bruno. His father was Giovanni Bruno, a soldier. At the age of 11 he traveled to Naples to study the Trivium. At 15, Bruno entered the Dominican Order, taking the name of Giordano. He continued his studies, completing his novitiate, and becoming an ordained priest in 1572. He was interested in philosophy and was an expert on the art of memory; he wrote books on the mnemonic technique, which Frances Yates contends may have been disguised Hermetic tracts. The writings attributed to Hermes Trismegistus were, in Bruno's time, recently rediscovered and at that time were thought to date uniformly to the earliest days of ancient Egypt. They are now believed to date mostly from about 300 C.E. and to be associated with Neoplatonism. Bruno embraced a sort of pantheistic hylozoism, rather than orthodox Christian trinitarian belief. Bruno was also heavily influenced by the ideas of Copernicus and by the newly rediscovered ideas of Plato as well as the teachings ascribed to Hermes Trismegistus. Other influences included Thomas Aquinas, Averroes, John Duns Scotus, Marsilio Ficino, and Nicholas of Cusa. In 1576 he left Naples to avoid the attention of the Inquisition. He left Rome for the same reason and abandoned the Dominican order. He traveled to Geneva and briefly joined the Calvinists, before he was excommunicated, ostensibly for his adherence to Copernicanism, and left for France. In 1579 he arrived in Toulouse, where he briefly had a teaching position. At this time, he began to gain fame for his prodigious memory. Bruno's feats of memory were apparently based, at least in part, on an elaborate system of mnemonics, but many of his contemporaries found it easier to attribute them to magical powers. For seven years, he enjoyed the protection of powerful French patrons, including Henry III. 
During this period, he published 20 books, including several on memory training, Cena de le Ceneri ("The Ash Wednesday Supper," 1584), and De l'Infinito, Universo e Mondi ("On the Infinite Universe and Worlds," 1584). In Cena de le Ceneri he defended the theories of Copernicus, albeit rather poorly. In De l'Infinito, Universo e Mondi, he argued that the stars we see at night were just like our sun, that the universe was infinite, with a "Plurality of Worlds," and that all were inhabited by intelligent beings. These two works are jointly known as his "Italian dialogues." In 1582 Bruno penned a play summarizing some of his cosmological positions, titled Il Candelaio ("The Torchbearer").

In 1583, he went to England with letters of recommendation from Henry III of France. There he sought a teaching position at Oxford, but appears to have given offense and was denied a position there (and elsewhere in England). In 1585 he returned to Paris. However, his 120 theses against Aristotelian natural science and his pamphlet against the Catholic mathematician Fabrizio Mordente soon put him in ill favor. In 1586, following a violent quarrel about "a scientific instrument," he left France for Germany. In Germany he failed to obtain a teaching position at Marburg, but was granted permission to teach at Wittenberg, where he lectured on Aristotle for two years. However, with a change of intellectual climate there, he was no longer welcome, and he went in 1588 to Prague, where he obtained three hundred taler from Rudolf II, but no teaching position. He went on to serve briefly as a professor in Helmstedt, but had to flee again when the Lutherans excommunicated him, continuing the pattern of Bruno gaining favor from lay authorities before falling foul of the ecclesiastics of whatever hue.

The year 1591 found him in Frankfurt. Apparently, during the Frankfurt Book Fair, he heard of a vacant chair in mathematics at the University of Padua, and he also received an invitation to Venice from one Zuane Mocenigo, who wished to be instructed in the art of memory. Apparently believing that the Inquisition might have lost some of its impetus, he returned to Italy. He went first to Padua, where he taught briefly, but the chair he sought went instead to Galileo Galilei, so he moved on to Venice. For two months he served as tutor to Mocenigo, who probably was an agent of the Venetian Inquisition. When Bruno attempted to leave Venice, Mocenigo denounced him to the Inquisition, which had prepared a total of 130 charges against him. Bruno was arrested on May 22, 1592, and given a first trial hearing before being sent for trial in Rome in 1593.

Trial and death

In Rome he was imprisoned for six years before he was tried, lastly in the Tower of Nona. He tried in vain to obtain a personal audience with Pope Clement VIII, hoping to make peace with the Church through a partial recantation. His trial, when it finally occurred, was overseen by the inquisitor Cardinal Robert Bellarmine, who demanded a full recantation, which Bruno refused. Consequently, he was declared a heretic and handed over to secular authorities on January 8, 1600. At his trial, he said: "Perhaps you, my judges, pronounce this sentence against me with greater fear than I receive it." A month or so later he was brought to the Campo de' Fiori, a central Roman market square, and there, his tongue in a gag, hung upside-down and naked, he was burned at the stake on February 17, 1600.
Since 1889, there has been a monument to Bruno on the site of his execution, erected by Italian Masonic circles. All his works were placed on the Index Librorum Prohibitorum in 1603. Four hundred years after his execution, during the papacy of John Paul II, an official expression of "profound sorrow" and acknowledgement of error at Bruno's condemnation to death was made. Attempts were made by a group of professors in the Catholic Theological Faculty at Naples, led by the Nolan Domenico Sorrentino, to obtain a full rehabilitation from the Catholic authorities.

The cosmology of Bruno's time

In the second half of the sixteenth century, the theories of Copernicus began diffusing through Europe. Although Bruno did not wholly embrace Copernicus's preference for mathematics over speculation, he advocated the Copernican view that the earth was not the center of the universe, and he extrapolated consequences that were radical departures from the cosmology of the time. According to Bruno, Copernicus's theories contradicted the view of a celestial sphere that was immutable, incorruptible, and superior to the sublunary sphere or terrestrial region. Bruno went beyond the heliocentric model to envision a universe which, like that of Plotinus in the third century C.E., or like Blaise Pascal's nearly a century after Bruno, had its center everywhere and its circumference nowhere.

Few astronomers of Bruno's generation accepted even Copernicus's heliocentric model. Among those who did were the Germans Michael Maestlin (1550-1631) and Christoph Rothmann, and the Englishman Thomas Digges, author of A Perfit Description of the Caelestial Orbes. Galileo (1564-1642) and Johannes Kepler (1571-1630) were still young at the time. Bruno himself was not an astronomer, but he was one of the first to embrace Copernicanism as a worldview, rejecting geocentrism. In works published between 1584 and 1591, Bruno enthusiastically supported Copernicanism.

According to Aristotle and Plato, the universe was a finite sphere. Its ultimate limit was the primum mobile, whose diurnal rotation was conferred upon it by a transcendental God, not part of the universe, a motionless prime mover and first cause. The fixed stars were part of this celestial sphere, all at the same fixed distance from the immobile earth at the center of the sphere. Ptolemy had numbered these at 1,022, grouped into 48 constellations. The planets were each fixed to a transparent sphere. Copernicus conserved the idea of planets fixed to solid spheres, but considered the apparent motion of the stars to be an actual motion of the earth; he also preserved the notion of an immobile center, but it was the Sun rather than the Earth. He expressed no opinion as to whether the stars were at a uniform distance on a fixed sphere or scattered through an infinite universe.

Bruno believed, as is now universally accepted, that the Earth revolves and that the apparent diurnal rotation of the heavens is an illusion caused by the rotation of the Earth around its axis. He also saw no reason to believe that the stellar region was finite, or that all stars were equidistant from a single center of the universe. Furthermore, Bruno also believed that the Sun was at the center of the universe. In these respects, his views were similar to those of Thomas Digges in his A Perfit Description of the Caelestial Orbes (1576). However, Digges considered the infinite region beyond the stars to be the home of God, the angels, and of the holy.
He conserved the Ptolemaic notion of the planetary spheres, considered Earth the only possible realm of life and death, and a unique place of imperfection and change, compared against the perfect and changeless heavens. In 1584 Bruno published two important philosophical dialogues, in which he argued against the planetary spheres. Bruno's infinite universe was filled with a substance—a "pure air," aether, or spiritus—that offered no resistance to the heavenly bodies which, in Bruno's view, rather than being fixed, moved under their own impetus. Most dramatically, he completely abandoned the idea of a hierarchical universe. The Earth was just one more heavenly body, as was the Sun. God had no particular relation to one part of the infinite universe more than any other. God, according to Bruno, was as present on Earth as in the Heavens, an immanent God rather than a remote heavenly deity. Bruno also affirmed that the universe was homogeneous, made up everywhere of the four elements (water, earth, fire, and air), rather than having the stars be composed of a separate quintessence. Essentially, the same physical laws would operate everywhere. Space and time were both conceived as infinite. Under this model, the Sun was simply one more star, and the stars all suns, each with its own planets. Bruno saw a solar system of a sun/star with planets as the fundamental unit of the universe. According to Bruno, an infinite God necessarily created an infinite universe that is formed of an infinite number of solar systems separated by vast regions full of aether, because empty space could not exist (Bruno did not arrive at the concept of a galaxy). Comets were part of a synodus ex mundis of stars, and not—as other authors asserted at the time—ephemeral creations, divine instruments, or heavenly messengers. Each comet was a world, a permanent celestial body, formed of the four elements. Bruno's cosmology is marked by infinitude, homogeneity, and isotropy, with planetary systems distributed evenly throughout. Matter follows an active animistic principle: it is intelligent and discontinuous in structure, made up of discrete atoms. The cosmos and its components acted independently with characteristics of living creatures. This animism (and a corresponding disdain for mathematics as a means to understanding) is the most dramatic aspect in which Bruno's cosmology differs from what today passes for a common-sense picture of the universe. - Stephan A. Hoeller, “On the Trail of the Winged God: Hermes and Hermeticism Throughout the Ages,” Gnosis: A Journal of Western Inner Traditions 40 (Summer 1996). Available online. Retrieved August 21, 2007. - Bruno, G. 1584. La Cena de le ceneri, ed. G. Aquilecchia, Turn: Giulio Einaudi, 1955; trans. S.L. Jaki, The Ash Wednesday Supper. La Cena de le Ceneri, The Hague and Paris: Mouton, 1975; trans. E. Gosselin and L. Lerner, The Ash Wednesday Supper: La cena de le ceneri. Reprint edition, 1995. Toronto: University of Toronto Press. ISBN 0802074693 - Gatti, Hilary. 2002. Giordano Bruno and Renaissance Science. Ithaca, NY: Cornell University Press. ISBN 0801487854 - Singer, D. W. 1950. Giordano Bruno: His Life and Thought with Annotated Translation of His Work 'On the Infinite Universe and Worlds’. New York: Henry Schuman. - Yates, Frances A. Giordano Bruno and the Hermetic Tradition. Chicago: University of Chicago Press. New edition, 1991. ISBN 0226950077 All links retrieved June 22, 2017. 
- Writings of Giordano Bruno (1548-1600) – Twilit Grotto: Archives of Western Esoterica
General Philosophy Sources
- Stanford Encyclopedia of Philosophy
- Paideia Project Online
- The Internet Encyclopedia of Philosophy
- Project Gutenberg
<urn:uuid:0eadfdff-bfe4-4e50-bff4-4e90833bf0a9>
CC-MAIN-2021-43
https://www.newworldencyclopedia.org/entry/Giordano_Bruno
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00711.warc.gz
en
0.96205
3,287
2.828125
3
A SHORT-COURSE ON THERAPEUTIC TOUCH By Barbara Janelle M.A. I have trained people in Therapeutic Touch for many years (1) and usually teach each of the 3 basic TT levels over a 6 to 8 week period. Upon completion, a person will have 36 to 48 hours of training over 18 to 24 weeks (often longer because of school breaks between courses). Learning TT over time is important because it takes time and experience to develop the abilities of maintaining centering and assessing the field. Anyone wishing to become a practitioner and particularly anyone hoping to take TT into medical settings needs these skills and the confidence that only comes with using TT over a period of time. (2) However, it is possible to learn basic centering and simple unruffling and grounding procedures quickly and use them effectively. This article describes a simple one-day course in Therapeutic Touch that gives these basic skills. INFORMATION ON THERAPEUTIC TOUCH Some basic background information on Therapeutic Touch, followed by some experiential exercises, lays the foundation for the practical work. History. Therapeutic Touch is an energetic discipline that was developed by Dora Kunz and Dolores Krieger in the early 1970’s. Krieger was a professor of nursing at New York University and first taught TT as a graduate level course called Frontiers of Nursing. Research Base. Therapeutic Touch is used in a growing number of hospital and palliative care facilities throughout North America. Over 40 Ph.D. dissertations, a growing number of major studies, and hundreds of clinical studies show that TT induces a very rapid relaxation response, reduces pain, speeds the healing of wounds and broken bones, supports the immune system (in both the receiver and the practitioner) and reduces stress. The Energy Field. Every person and every animal is an open energy field. In good health, energy flows through the field. When there is a problem, that flow is reduced creating congestion and energy deficits in the field. Therapeutic Touch addresses a downward flow of energy through the field. The Steps of Therapeutic Touch are Centering, Assessment, Unruffling and Grounding, Energy Modulation and Ending the treatment. To do Therapeutic Touch, a person must be centered i.e. quiet and in a state of peace. The hands are used to feel the field, to assess the quality of energy flow and to identify areas of congestion and deficit. Long downward unruffling strokes with the hands usually a few inches from the skin, disperse congestion and increase the flow of energy through the field. The TT practitioner visualizes energy in the form of bright light entering through the crown chakra at the top of the head and flowing downward through the field and out the feet (grounding). Energy modulation procedures bring greater balance and even more energy into the field. The treatment is brief, usually ending in two to twenty minutes. Centering, very simple Assessment, Unruffling and Grounding are taught in this short-course. CENTERING AND SENSING THE FIELD The following is a simple exercise in centering and sensing the field: - Close your eyes and listen to all the sounds around you for a minute or two. Notice your breathing and how much of your body moves when you breathe. Pause. Then try to sense a pulse somewhere in your body. This work brings you into the present moment, a necessary state for doing TT. - Open your eyes and raise your hands, and hold them a few inches apart with the palms facing each other. 
Slowly draw the hands apart and then move them back toward each other in a soft and slow rubber-banding movement. Repeat this a few times. Notice what you feel. Note: you will feel more if your hands are relaxed and you are breathing! The energy field between the hands is often felt as heat, pressure or tingling in the hands. Energy builds because energy is continually flowing out of the hands. At the end of this exercise, it is useful to brush your hands off, or ground the energy by touching wood or the ground, or wash your hands under cool running water. UNRUFFLING AND GROUNDING A lot of energy flows through the field in a healthy person. Where there are problems, that flow is impeded. Congestion may build and ultimately there is a reduction in usable energy in the field. The following grounding and unruffling exercises supports and increases the flow of energy through the field. These exercises are done in pairs with one person receiving as the other works. - The receiver stands balanced over his/her feet with arms hanging at the sides. To demonstrate an effect of the upcoming work, the receiver is asked to say the alphabet A through J out loud and note the tone of voice. - The active partner then touches one of the receiver’s feet and imagines it growing roots into the earth; this may take 5 to 30 seconds. Then touch the other foot and repeat this. Then hold both feet and imagine roots. Then ask the receiver to say just the first few letters of the alphabet again. Note the tone of voice. Switch roles and repeat the exercise. After both partners have been receivers, take a few minutes to discuss the experience. Grounding is an invitation to the spirit to be fully present in the body and it invites more energy to flow in through the field as it supports the normal flow of energy out through the feet. Signs of a relaxation response are evident with changes in the voice, breathing and easing of tension in the face. - After the discussion on grounding, then do a minute of light unruffling: using long, gentle and rhythmic strokes downward from head to toe and a few inches off the skin. This hand movement is supported by visualization of light coming in through the crown of the head and flowing down through the body/field. Again touch the feet and imagine roots. Feelings of relaxation in the receiver usually increase. - Change roles and repeat the exercise. After both partners have worked and received work, take some time to discuss the experience. Note: this exercise is brief and best done with the receiver either standing or lying on a table because the receiver’s awareness of relaxation increases significantly in these positions as the work proceeds. This exercise is done in pairs with one person working, while the other receives. The receiver sits on a chair or stool with feet on the floor and hands separated and resting on the thighs. The active participant stands behind the receiver and gently places hands on the receiver’s shoulders. The hands should not be heavy, and may cup the edges of the shoulders rather than rest downward on the shoulders. The practitioner closes his/her eyes and focuses on breathing: following the breath through inhale and exhale. Attention is then turned to recognizing that it is an honor to work with the receiver and that entering the treatment is entering sacred space. After a brief time, the practitioner steps away and roles are reversed as the exercise is repeated. When both have had a chance to work and to receive, time is provided for a brief discussion. 
The Heart Chakra is a major energy center that influences the circulatory, respiratory and immune systems as well as the upper chest, arms and hands. As the practitioner moves into a very deep state of peace while gently holding the receiver’s heart, the energy field shifts toward better order. (3) One of the following hand positions may be used: a) holding the receiver’s hand, b) placing a hand on the back between the shoulder blades over the heart, or c) holding the heart in a sandwich with one hand at the front and the other at the back. UNRUFFLING TO DECONGEST A PROBLEM SITE Downward unruffling combined with grounding helps reduce energy build-up over problem sites. Other kinds of unruffling (4) that are very effective for pain and swelling include unruffling the edge of the congestion and/or drawing energy directly off of the site. Finding the edge exercise. A simple exercise that brings awareness of edges in the field is for a person to place a hand about two feet above his/her thigh with the palm facing the thigh. Slowly start to move the hand down toward the thigh and note places where the field is clearly felt. When there is a problem site with significantly congested energy, it is possible to find the edge of it and unruffle either downward or outward at that edge. (5) Combining this with visualizing the congested energy either moving down into the ground or disappearing into the air will move this congestion out of the field. Then follow this with gentle downward unruffling and grounding. Hand-on and hand-off decongesting. A site where there is swelling and pain is often warmer than the surrounding area. Use the back of the hand to check for heat as it is more sensitive than the palm. Then, either pull energy off perpendicularly or use a hand-on, hand-off movement to dissipate energetic congestion and reduce the heat at the site. Place a hand on the area (as long as the skin is not broken) and allow the heat from that area to come into the first layer of tissue in the hand; then take the hand away and shake the heat off. Repeat this movement for a minute or more and then check the temperature again with the back of the hand. If the area is still warm, repeat the work. This hand-on, hand-off form of unruffling removes congested energy and takes heat and swelling down very quickly. ENDING THE TREATMENT The Therapeutic Touch treatment is generally brief with most treatments being between five and ten minutes long. Seldom does a treatment go for as long as twenty minutes. Recognize that the TT treatment starts a move toward greater order and then trust the field to continue the process. The practitioner’s level of centering begins to lighten as the treatment approaches an end. When the question, “I wonder if I should stop now?” arises, it is definitely time to finish the treatment. Ideally, the receiver should rest after the treatment for twenty minutes or more. A SIMPLE TREATMENT The Therapeutic Touch treatment is a conversation with the energy field of the receiver. Centering is both the key and the essence of the treatment. Awareness of breath brings a person into present time. Recognizing that it is an honor to work with the receiver increases connection and compassion. Deepening the state of peace sets up a resonance affect that helps the receiver to relax as it also supports the energy field’s move to greater order. Centering deepens as the treatment progresses and begins to lighten as the time for ending the session approaches. 
Experienced practitioners recognize that centering alone changes the fields of both practitioner and receiver; indeed a treatment may be done simply by centering with a focus of respect and compassion for the receiver. After consciously entering the centered state, gently unruffle and ground the field. (6) Once again, gently unruffle and ground the field and this time notice any areas that feel different in the field. This kind of assessment is ongoing throughout the treatment. Return to these areas to do more unruffling, either downward or outward to decongest these areas. Always follow this with downward unruffling combined with visualizations of light flowing downward through the field and brightening it to increase energy flow. At the end of each unruffling pass, support the grounding. Pause to do heart support. Then unruffle, access and ground again. Always use soft hand movements, and generally keep the hands moving, except for heart support and grounding. Notice the receiver’s breathing, facial relaxation and level of comfort throughout the treatment. Finish the treatment with grounding and allow the receiver to rest for twenty minutes. Brief and gentle Therapeutic Touch treatments done even by novices effectively increase the receiver’s comfort and support healing. - My background in teaching Therapeutic Touch, until my move to California in August 2000, includes: - Teaching TT at the University of Western Ontario in the Faculty of Continuing Education 1994-2000 - Teaching in UWO’s Regional Palliative Care Level II Institute 1996-2000 - Training nurses and palliative care staff at Four Counties Regional Hospital, Newbury, ON Founding the London Volunteer TT Hospital Team, which sees patients in ICU, CCU and UWO’s Transplant Unit, as well as in regular hospital units. - Giving TT presentations to veterinary students at the Ontario Veterinary College, University of Guelph in 1999 - Training veterinarians in private practice, and animal owners in the use of the work. - Barbara Janelle, “Teaching Therapeutic Touch over Time,” Presentation and Conference Paper to the Ontario Therapeutic Touch Teachers Cooperative, Toronto, ON (October 31, 1997) - Barbara Janelle, “Heart Support,” In Touch, Vol. XII, No. 3, August, 2000, also in Embodiment of Spirit: Learning Through Therapeutic Touch and Interspecies Communication, Self-published, Kitchener, ON: 2003. - Janelle, “Unruffling Action,” In Touch, Vol. XIII, No. 3, Autumn 2001, also in Embodiment of Spirit: Learning Through Therapeutic Touch and Interspecies Communication - Janelle, “Working With the Edge: Scanning and Unruffling,” In Touch, The Therapeutic Touch Network (Ontario), Vol. VIII, No. 2, June, 1996, also in Our Healing Power: Therapeutic Touch for Humans and Animals, Self-published, Kitchener, ON: 1999 - Janelle, “Preparing the Field for Assessment,” In Touch, The Therapeutic Touch Network (Ontario), Vol. VIII, No. 3., September 1996, also in Our Healing Power: Therapeutic Touch for Humans and Animals Dolores Krieger, The Therapeutic Touch: How to Use Your Hands to Help or Heal, Prentice-Hall Press, Englewood Cliffs, NJ: 1979 Dolores Krieger, Accepting Your Power to Heal: The Personal Practice of Therapeutic Touch, Bear & Co., Santa Fe, NM: 1993 Janet Macrae, Therapeutic Touch: A Practical Guide, Alfred A. 
Knopf, New York: 1987, 1996 Barbara Janelle, Our Healing Power: Therapeutic Touch for Humans and Animals, Self-published, Kitchener, ON: 1999 Barbara Janelle, Embodiment of Spirit: Learning Through Therapeutic Touch and Interspecies Communication, Self-published, Kitchener, ON: 2003.
<urn:uuid:4995e505-84d1-4f00-b51b-608caf0d868b>
CC-MAIN-2021-43
https://www.barbarajanelle.com/a-short-course-on-therapeutic-touch/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00151.warc.gz
en
0.927342
3,108
2.609375
3
There has been great controversy surrounding the Iraq war. This article analyzes the negative aspects of the Iraq war and its detrimental consequences for the US, its allies, the people of Iraq and the rest of the world. The September 11, 2001 terrorist attacks, which destroyed the Twin Towers, damaged part of the Pentagon and caused the deaths of over 3,000 people, were the principal initiating cause of the Iraq war. The attack was seen as an assault by a medieval and sectarian ideology of terror on the principles of democracy, justice, liberty, freedom, humanity and equality that the Twin Towers, and ultimately the USA, represent. Faced with the challenge of safeguarding these ideals, as well as the necessity of protecting its own national security, the USA waged a war to destroy the axis of terrorism and hatred. In this effort, Iraq became the second frontier, after the liberation of Afghanistan, in the campaign to root out the axis of terror and evil and to restore humanitarian values and justice the world over (Teson, 2005).

The course of the war over the last four years

The United States formally declared war on Saddam Hussein's regime on 20 March 2003, and within three weeks, on 9 April 2003, the unprecedented strength and force of the coalition armies succeeded in ending a tyrannical rule that had held the soul and spirit of Iraq captive for several decades (Aday, Cluverius, Livingston, 2005). However, the end of Saddam Hussein's regime brought an end neither to the war nor to the continued presence of allied forces in Iraq. This in itself was the strongest proof that the US's concern in the war ran well beyond merely overthrowing the incumbent tyrannical rule, and that it was fully committed to democracy and peace in Iraq. This commitment to democratic ideals has cost the US much more than its first objective of ending the former Iraqi government. While it lost only 139 soldiers before the President of the United States declared an official end to combat in May 2003, the number of casualties since then has climbed past 3,000 and is still rising (Aday, Cluverius, Livingston, 2005; Iraq Coalition Casualties, 2007). Most of these deaths have been caused by suicide attacks and rebel attacks by loyalists of the former dictator. Many others have been engineered by al-Qaeda terror cells in Iraq, which have claimed military as well as large numbers of civilian lives on an almost routine basis, creating difficulties for Iraq's transition to democracy.

Consequences of the Iraq war

Whether seen from an economic, ethical or political point of view, or from the perspective of human suffering and casualties, the Iraq war has spawned a web of troubles and problems that continue to take their toll on everyone involved with the campaign. The economic costs of the Iraq war are huge and involve not just the direct expenditure on the US military campaign, but also the cost of the war to the Iraqi economy, the cost of rebuilding Iraqi infrastructure and the impact on the oil market (Nordhaus, 2002, 55). Initial estimates of the cost of the Iraq war ranged anywhere from US $100 million to US $100 billion, although even that was considered an overestimation (Bilmes and Stiglitz, 2006). These initial estimates were soon proved wrong, and plans for budgetary allocations showed that even Congress was estimating the cost of the war to be in excess of $500 billion. Yet even this was an under-projection of the final cost, which, in the final analysis of events, climbs to a staggering $1.3 trillion (Yglesias, 2006).
This includes the cost of insurance, medical care and disability payments made to soldiers injured or killed in the Iraq campaign. With the government's valuation of a prime-age male at $6 million, as determined by environmental and safety regulations, the total cost from casualties alone comes to $12 billion (Bilmes and Stiglitz, 2006). Another critical economic cost stems from diminished American reputation and prestige in Middle Eastern countries and in countries hostile to the idea of the Iraq war. In these countries American products have lost favor, and American companies are no longer the first choice to do business with (ibid). As the war has driven up oil prices, it also threatens to raise the prices of various commodities and to severely affect the transportation sector, especially aviation, where many companies face the prospect of bankruptcy (Bilmes and Stiglitz, 2006). Many analysts have also stated that the money spent on the Iraq war might have been better used to strengthen the education and health care systems of the USA, and that the country has thus been robbed of benefits worth billions of dollars because of this diverted and improvident expenditure (Wilson, 2006).

Another negative consequence of the Iraq war is the number of casualties and lives lost during the course of the war. Since the beginning of the war the US military has suffered 3,190 deaths, while 23,758 soldiers have been wounded so far (Griffs, 2007). It is important to see that these deaths and casualties are not merely figures and statistics. They represent bright, ambitious young sons, capable of achieving much in their lives and of contributing to the US future in a far better way than being killed or permanently maimed by a roadside bomb or an ambush (Grigg, 2006). There are thousands of soldiers who, despite escaping death, have been crippled and have suffered permanent loss of limbs, vision or disfigurement. These losses of life and health cannot be measured in terms of economic costs; they amount to a lifetime of agony and pain for survivors and their relatives. The war has also resulted in around 60,000 civilian deaths in Iraq (Casualties in Iraq war, 2007). Thousands of men, women and children have been killed by suicide attacks or burnt to death in their own homes, entire families have been wiped out, and thousands of families in Iraq have lost their sole breadwinner (Savoy, 2004). Today they face the grim prospect of an uncertain and hard life.

The Iraq war also has a deep moral underside. The US initiated the war with claims that Iraq possessed large consignments of weapons of mass destruction and with allegations that Iraq had links with al-Qaeda and was somehow responsible for the September 11, 2001 events (Pfiffner, 2004). However, as it turned out, these reports were completely fictitious, created simply to give credence to the US case against Iraq (Enemark and Michalesen, 2005). No amount of manipulation of facts and findings could lend any substance to the allegations against Iraq. As a matter of fact, on September 18, 2003, President Bush surprised many when he admitted that there was no evidence of Iraq's connection with the World Trade Center attacks (Pfiffner, 2004). Even the war in Iraq was no longer projected as a war against a terror network, but as a war to liberate the Iraqi people from the tyranny of Saddam Hussein, a claim that was hitherto absent from pre-war arguments and preparations.
This switching of statements greatly damaged US credibility and soured its relations with many important countries such as Germany and France.

The road ahead

Although the military objectives of the USA and the coalition countries in the Iraq war were completed with the dethroning, capture and, finally, execution of Saddam Hussein, their continued presence has served neither the interests of the Iraqi population nor those of coalition military personnel. The most that can be said is that Iraq has successfully removed its former tyrannical ruler and, with elections, has achieved at least the semblance of a democratic order, although its complete transition to democracy remains unfinished because of intense internal conflicts and complexities. The US, however, has suffered a great and completely unnecessary ordeal through this entire episode, one that may affect its strategic and economic leverage and its worldwide reputation.

Aday, S., Cluverius, J., and Livingston, S. 2005. As Goes the Statue, So Goes the War: The Emergence of the Victory Frame in Television Coverage of the Iraq War. Journal of Broadcasting & Electronic Media, Volume 49, Issue 3, 314+.
Kaufman, W. 2005. What's Wrong with Preventive War? The Moral and Legal Basis for the Preventive Use of Force. Ethics & International Affairs, Volume 19, Issue 3, 23+.
Teson, F.R. 2005. Ending Tyranny in Iraq. Ethics & International Affairs, Volume 19, Issue 2.
Nordhaus, W.D. 2002. War with Iraq: Costs, Consequences, and Alternatives. American Academy of Arts and Sciences.
Yglesias, M. 2006. $1.27 Trillion. The American Prospect, Volume 17, Issue 7, July-August 2006, 28+.
Bilmes, L. and Stiglitz, J.E. 2006. The Economic Costs of the Iraq War: An Appraisal Three Years after the Beginning of the Conflict. Accessed 11.03.2007. http://www.informationclearinghouse.info/article11495.htm
Wilson, J. January 7, 2006. Iraq war could cost US over $2 trillion. The Guardian. Accessed 11.03.2007. https://www.theguardian.com/world/2006/jan/07/usa.iraq
Griffs, M. 2007. Casualties in Iraq: The Human Cost of Occupation. AntiWar.com. Accessed 11.03.2007. http://www.antiwar.com/casualties/
Grigg, W.N. January 9, 2006. Bring 'Em Home! The New American, Volume 22, 12+.
Savoy, P. 2004. The Moral Case against the Iraq War. The Nation, Volume 278, Issue 21, 16.
Enemark, C. and Michalesen, C. 2005. Just War Doctrine and the Invasion of Iraq. The Australian Journal of Politics and History, Volume 51, Issue 4.
Pfiffner, J.P. 2004. Did President Bush Mislead the Country in His Arguments for War with Iraq? Presidential Studies Quarterly, Volume 34, Issue 1, 25+.
<urn:uuid:dec55fe6-d4f3-466e-8051-c55e299a3ee2>
CC-MAIN-2021-43
https://qualityessays.net/negative-side-of-iraq-war/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585181.6/warc/CC-MAIN-20211017175237-20211017205237-00270.warc.gz
en
0.95258
2,093
2.765625
3
There’s a lot of confusion out there about enzymes and probiotics: What are they? Where do I get them? Do they do the same things? And which one should I be taking? The answers aren’t as straightforward as you might think. It’s true that both enzymes and probiotics help to promote digestive and immune health, but they go about this in different ways, and they each have their own benefits. Understanding the differences between the two will help you to figure out which one might be beneficial for you. What are digestive enzymes? Enzymes are the super-efficient worker bees of your digestive system. Their job is to facilitate the chemical breakdown of foods, so that your body can send its nutrients off to cells to be converted into usable energy. Different enzymes work on different types of foods. Protease breaks down proteins, lipase breaks down fats, cellulase breaks down fibers and amylase breaks down starches. There are many more, but these are the four main types that are often focused upon when talking about enzymes and enzyme supplementation. Your body creates digestive enzymes in the salivary glands, stomach, pancreas and small intestine, where most of the digestion takes place. But they’re also plentiful in raw foods. In fact, many raw fruits and vegetables contain plant enzymes that assist in breaking down that particular food. This is true of raw milk as well. It contains lactase, the enzyme essential to breaking down milk sugars. Unfortunately, when food is cooked — or in the case of milk, pasteurized — these enzymes are destroyed by the heat. They are extremely sensitive to their environment and can begin to denature at temperatures as low as 104 degrees Fahrenheit (40 degrees Celsius).1 Without the necessary dietary enzymes coming in, your body must work twice as hard to produce the enzymes on its own. This consumes energy, which means you have less to go toward other bodily functions, like your immune system. Digestive enzymes are also sensitive to the pH balance in your body. If their environment becomes too alkaline or too acidic, they may not be able to carry out their work as efficiently, or they could cease functioning altogether. This has become an issue, as we continue to consume more acidic foods, upsetting our body’s natural pH balance. When the body can’t produce enough of a certain enzyme, the result is food intolerance. One of the most common enzyme deficiencies is lactose intolerance, which results from a shortage of lactase in the system. But it’s also possible to have intolerances to other things, like fatty foods. Deficiencies can cause problems, including occasional bloating or indigestion, bowel problems and abdominal discomfort. Enzyme deficiency can also lead to a depressed immune system. It prevents your body from absorbing its food efficiently, so your body systems, including your immune system, aren’t getting as many nutrients. This makes it difficult for them to function as they should. Another issue is that undigested food can accumulate in your intestines, which creates an ideal breeding ground for disease, especially if your body is low on good bacteria (see the section on Probiotics below). In order to avoid putting such a strain on your body’s digestive system, you should make sure you’re getting an adequate supply of enzymes from your diet. Some common sources are: - Raw fruits and vegetables - Raw dairy - Fermented vegetables (sauerkraut, kimchi, etc.) - Raw honey - Coconut water You can also try a supplement like Enzymedica Digest Gold with ATPro. 
It contains plant-based enzymes to assist in breaking down carbohydrates, fats, fiber and protein. It also has Adenosine Triphosphate (ATP), which facilitates the passage of nutrients through the cell membrane, enabling the cells to make use of them. Though it is possible to get enzymes from animal-based foods, you should choose plant-based forms whenever possible. One study showed that plant- and microbe-based enzymes were up to 5,000 percent more active than the same types of enzymes derived from animal sources, meaning that they do their job much more efficiently. There’s also some evidence that plant-based enzymes can tolerate a wider range of pH without suffering any ill effects.2 What are probiotics? Unlike digestive enzymes, probiotics are living things — bacteria, to be specific. They are found throughout your digestive tract, especially in your intestines. Probiotics help to keep the bad bacteria that enters your body in check, and they assist in promoting a well-functioning digestive system. Certain probiotics even produce those helpful digestive enzymes that break down your foods. Probiotics aren’t produced by the body like enzymes are, so they must be consumed through the diet. They can be found in foods like: - Fermented vegetables - Apple cider vinegar - Some cheeses Probiotic supplements, like Enzymedica Pro-Bio, have become a popular alternative for those who don’t consume enough probiotics in their foods. Make sure any supplements you buy say that they are viable until the end of the shelf life (as opposed to viable at the time of manufacture), or look for “live and active cultures” on yogurt and cheese products. This is essential, because it tells you that the probiotics you’re consuming are still alive. If they’re not, they won’t do you any good. There are many things that can cause the balance of your gut bacteria to be thrown off. The most well-known cause is antibiotics. Antibiotics kill off all bacteria, good and bad, and many doctors now recommend that you take a probiotic with your antibiotic to replenish your depleted stores of good bacteria. But that’s not the only cause of imbalance. Other causes include: - Taking birth control pills - Taking NSAID pain relievers (Advil, Aspirin, Motrin, Aleve, etc.) - Consuming too much alcohol - Eating refined sugars and grains - Not getting enough sleep - Chronic stress So how do you know if you need to introduce more probiotics into your diet? The symptoms are often similar to those of enzyme deficiency, and can include, but are not limited to: occasional bloating, gas or heartburn, and bowel movement issues, as well as some skin conditions and yeast infections. If you have any of these symptoms, you may want to think about adding a probiotic supplement or eating more probiotic-rich foods to see if it helps. There are many strains of probiotics out there, and the healthiest diets will incorporate several, because they all provide different benefits. The two most popular types are Lactobacillus and Bifidobacteria. Probiotics are a relatively new field of study, so science isn’t yet sure exactly what conditions they may assist with, but research is being conducted on the effects of both Lactobacillus3 and Bifidobacteria.4 There are several strains of each of these, and most probiotic supplements will contain one or more of them. Like digestive enzymes, probiotics can be sensitive to their environments, especially to heat. 
Many probiotic supplements require refrigeration to keep the bacteria at a stable temperature, so they are not killed off before they enter your body and begin their work. If you are purchasing a probiotic supplement, make sure you read the label carefully to see if your product requires refrigeration. Light can also degrade probiotics, which is why most of them are stored in opaque containers. Be careful not to leave these pills out in direct sunlight, as it could render them ineffective. Store them in a cool, dark place. Which should I take? Enzymes and probiotics perform similar functions in the body, but there may be instances where you will benefit more from one or the other. For example, if you’ve recently finished a round of antibiotics, probiotics will serve you better than digestive enzymes. On the other hand, if you’re lactose intolerant, you will probably see more improvement from adding a digestive enzyme supplement containing lactase, as this would help your body to break down the sugars in milk. While they each have their own benefits, you don’t necessarily have to choose between them. Both digestive enzymes and probiotics support healthy digestive and immune systems, and there are many foods that contain both, including kefir and fermented vegetables. We recommend trying to get as many enzymes and probiotics from natural food sources as possible, though, of course, this isn’t always easy to do in today’s world. If you find yourself struggling to incorporate them into your diet, we suggest a supplement. We recommend Enzymedica Digest Gold + PROBIOTICS. This combines the plant-based enzymes that make up our Digest Gold product with eight strains of probiotics, totaling 1.5 billion active cultures. If you’re not sure what your best option is, talk to your doctor or natural health professional about your options. They may be able to help you decide which is best for your specific circumstances. While both probiotics and enzymes can be helpful by themselves, they can always be taken together, like in Enzymedica Digest Gold +PROBIOTICS! 1 Copeland, Robert A. (2000). Enzymes: A Practical Introduction to Structure, Mechanism, and Data Analysis. Wiley-VCH; 248-249. 2 Ianiro G., Pecere S., Giorgio V., Gasbarrini A., Cammarota G. (2016) Digestive Enzyme Supplementation in Gastrointestinal Diseases. Current Drug Metabolism. 17(2): 187-193. 3 U.S. National Library of Medicine. (2018). Lactobacillus. National Institute of Health. Retrieved from medlineplus.gov 4 U.S. National Library of Medicine. (2018). Bifidobacteria. National Institute of Health. Retrieved from medlineplus.gov
<urn:uuid:85fce6b2-8ee6-4914-b800-504cc77cfc8f>
CC-MAIN-2021-43
https://enzymedica.com/blogs/ingredient-science/enzymes-vs-probiotics
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00230.warc.gz
en
0.947076
2,108
2.890625
3
Glyphosate (N-(phosphonomethyl) glycine) is an organophosphorus compound used to kill weeds (e.g. annual broadleaf weeds and grasses) that compete with crops. Since it was first introduced to the market around 40 years ago, glyphosate has become one of the most widely used herbicides in the world, owing to its relatively low toxicity towards mammals compared to other herbicides. After the introduction of genetically engineered "glyphosate-tolerant" crops, farmers embraced the herbicide even more widely: unlike the weeds it is intended to destroy, crops such as corn and soybeans can withstand glyphosate treatment. Like other pesticides, glyphosate is applied directly to food crops, meaning that both food workers and the environment may come into contact with the herbicide and that the wider population carries a background burden of exposure. Because a number of regulatory organizations have registered the herbicide, glyphosate has long been considered to pose minimal risk to human health under sustained exposure at trace levels. However, recent toxicity assessments by various organizations have placed glyphosate at the center of a controversy. The World Health Organization's (WHO) International Agency for Research on Cancer classified it as "probably carcinogenic to humans" in March 2015.1 In November 2015, however, the European Food Safety Authority (EFSA) released a report stating that there was no scientific evidence connecting glyphosate to cancer.2 Regardless of the dispute in the scientific community, food authorities in several countries have set regulatory limits. The maximum residue level for glyphosate typically lies somewhere between 0.05 and 500 mg/kg, depending on the food commodity.

Glyphosate is an extremely polar compound with high solubility in water and low solubility in most organic solvents. These properties mean that glyphosate and related compounds are not well retained on traditional C18 LC columns or on non-polar GC columns. As a result, derivatization with fluorenylmethyloxycarbonyl chloride (FMOC-Cl) is commonly used to improve the extraction and separation of glyphosate and related compounds in LC- and GC-based methods. These derivatization-based methods, however, are labor-intensive, time-consuming and less reproducible, so there is growing demand for methods that can measure glyphosate and other polar compounds without derivatization. The EU Reference Laboratories (EURL) recently published two methods that analyze glyphosate (GLY), its metabolite aminomethylphosphonic acid (AMPA), and glufosinate (GLU) directly, without derivatization. One method used an ion exchange column with a long run time of 23 minutes, while the second used a Hypercarb column, which requires a specific priming/reconditioning procedure and showed considerable chromatographic peak tailing.3 This study details a 12-minute LC/MS/MS method on an amino-based column for the analysis of glyphosate and related polar compounds in their underivatized states, with excellent selectivity and sensitivity.

A PerkinElmer Altus® A-30 UPLC® system was used in combination with a PerkinElmer QSight™ 210 triple quadrupole mass spectrometer. Instrument control, data acquisition and processing were carried out using the PerkinElmer Simplicity 3Q™ software. The LC method conditions are detailed in Table 1.

Table 1. LC method.
Source: PerkinElmer Food Safety and Quality. Column: Shodex NH2P-50 2D, 2.0 x 150 mm, 5 μm. Mobile phase A: 5 mM ammonium acetate (pH 11.0) in water.

The mass spectrometer was fitted with an electrospray ionization source operating in negative ion mode. The mass spectrometer source conditions are summarized in Table 2.

Table 2. Mass spectrometer source conditions. Source: PerkinElmer Food Safety and Quality.

MRM settings for each analyte were optimized by infusing neat standard solutions. Table 3 lists the parameters for each analyte's MRM transitions; the dwell time for each MRM was fixed at 30 ms.

Table 3. Optimized MRM settings (* quantifier ion). Source: PerkinElmer Food Safety and Quality.

A 1 g oatmeal sample was weighed into a centrifuge tube; 10 mL of water/acetonitrile (2/1, V/V) was added, and the mixture was shaken/vortexed for one minute, ultrasonicated for 15 minutes, and centrifuged at 6000 rpm for five minutes. The recovered supernatant was passed through a 0.22 μm nylon membrane filter for LC/MS/MS analysis. To prevent any potential interaction between the analytes and glass surfaces, plastic sample vials were used throughout, and samples were analyzed immediately after preparation.

Standards Calibration Solutions

Matrix-matched calibration standards were prepared by adding the analytes at different levels (5.0, 10.0, 100.0, 200.0 and 500.0 ng/mL, respectively) to oatmeal matrix extract.

Results and Discussion

Figure 1 shows typical MRM chromatograms for the three analytes spiked at 10 ng/mL (0.1 mg/kg) in oatmeal extract. All three analytes were well retained on the column and exhibited good peak shape and signal-to-noise. GLY and AMPA eluted at almost identical retention times as a result of their similar chemical structures, while GLU was baseline-separated from the other analytes. No matrix interferences that could affect peak integration were observed.

Figure 1. MRM chromatograms of GLY (A), AMPA (B), and GLU (C) spiked at 10 ng/mL in oatmeal extract. Image Credit: PerkinElmer Food Safety and Quality.

It is well known that LC/MS/MS, particularly when operating in ESI mode, is vulnerable to matrix effects, which influence the accuracy of quantitation. In this study, matrix effects (ME) were calculated by comparing the signal intensities of standards in neat solution with those of standards in matrix-matched solution at different concentration levels.

Table 4. Matrix effect (%) results in oatmeal matrix. Source: PerkinElmer Food Safety and Quality.

An ME value of less than 100% signifies matrix suppression, whereas an ME value of greater than 100% implies matrix enhancement. As shown in Table 4, GLY and AMPA show matrix suppression, while GLU shows matrix enhancement. Using matrix-matched standards compensates for these matrix effects and gives good quantitative accuracy without the use of internal standards. Calibration curves were produced by running the matrix-matched calibration standards described in the experimental section.

Figure 2. Calibration curves for GLY (A), AMPA (B) and GLU (C) in oatmeal extract, respectively. Image Credit: PerkinElmer Food Safety and Quality.

Figure 2 displays the calibration curves for GLY, AMPA and GLU. Good linear correlation coefficients (R2 ≥ 0.997) were obtained over the range of 5 to 500 ng/mL (0.05-5 mg/kg in the sample).
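To make the quantitation steps above more concrete, the short Python sketch below illustrates how the matrix effect percentage, the matrix-matched calibration fit and an S/N-based LOQ estimate can be computed. It is only an illustration under stated assumptions: the calibration levels (5-500 ng/mL), the reported S/N of 432 for GLY at 5 ng/mL and the 1 g per 10 mL extraction ratio are taken from this article, while the peak areas and all function names are hypothetical placeholders rather than values or code from the study.

import numpy as np

def matrix_effect(area_matrix_std, area_neat_std):
    # Matrix effect (%) = 100 * (signal of the matrix-matched standard /
    # signal of the neat-solvent standard); <100% = suppression, >100% = enhancement.
    return 100.0 * area_matrix_std / area_neat_std

def fit_calibration(conc_ng_ml, peak_area):
    # Least-squares fit of a calibration line (area = slope * conc + intercept);
    # returns the slope, intercept and coefficient of determination R^2.
    conc = np.asarray(conc_ng_ml, dtype=float)
    area = np.asarray(peak_area, dtype=float)
    slope, intercept = np.polyfit(conc, area, 1)
    residuals = area - (slope * conc + intercept)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((area - area.mean())**2)
    return slope, intercept, r2

def loq_from_sn(conc_ng_ml, sn_at_conc, sn_required=10.0):
    # Estimate the LOQ by scaling a known concentration down to the point where
    # S/N falls to the required value (assumes S/N is roughly proportional to
    # concentration near the low end of the calibration range).
    return conc_ng_ml * sn_required / sn_at_conc

# Calibration levels from the text; the peak areas are hypothetical counts.
levels = [5.0, 10.0, 100.0, 200.0, 500.0]    # ng/mL in extract
areas = [1.6e3, 3.1e3, 3.0e4, 6.1e4, 1.5e5]  # hypothetical detector response

slope, intercept, r2 = fit_calibration(levels, areas)
loq_gly_ng_ml = loq_from_sn(5.0, 432)        # about 0.12 ng/mL, as reported for GLY

# A 1 g sample extracted into 10 mL means 1 ng/mL in the extract corresponds
# to 10 ng/g in the oatmeal, i.e. 0.01 mg/kg.
loq_gly_mg_kg = loq_gly_ng_ml * 10.0 / 1000.0
print(f"R^2 = {r2:.4f}; LOQ(GLY) = {loq_gly_ng_ml:.2f} ng/mL in extract "
      f"= {loq_gly_mg_kg:.4f} mg/kg in oatmeal")

Computed this way, the LOQ of roughly 0.001 mg/kg for glyphosate sits several orders of magnitude below the 20 mg/kg EU limit discussed below, which is the practical point the authors make.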
For the 5 ng/mL calibrant, the signal-to-noise ratios (S/N) for GLY, AMPA, and GLU were 432, 165, and 325, respectively. Using these values, the limits of quantitation (LOQs; S/N ≥ 10) were determined to be 0.12, 0.30 and 0.15 ng/mL. Since the EU maximum residue limit (MRL) for glyphosate in oatmeal is 20 mg/kg, the method developed in this study clearly meets this standard.
Table 5. Linear dynamic range, regression coefficients, LOQ and S/N at the LOQ level for the analytes. Source: PerkinElmer Food Safety and Quality. LOQ in matrix (S/N ≥ 10) is reported in ng/mL.
Table 6. Recovery of the analytes from the oatmeal sample at different concentration levels (spiked at 50 μg/kg and 1 mg/kg). Source: PerkinElmer Food Safety and Quality
This study demonstrates a rapid, sensitive, and reliable 12-minute LC/MS/MS method for the direct analysis of GLY, AMPA, and GLU in oatmeal without derivatization. The sample preparation was a straightforward water/acetonitrile extraction, which gave good recoveries and minimal matrix effects for all three compounds. The calibration curves for the three analytes showed excellent linearity over two orders of magnitude of concentration, with calibration fits of R² greater than 0.997. The LOQs for glyphosate and the related polar compounds were far below the EU MRL of 20 mg/kg in oatmeal.
1. http://monographs.iarc.fr/ENG/Monographs/vol112/mono112-09.pdf. Accessed August 2, 2016.
2. Conclusion on the peer review of the pesticide risk assessment of the active substance glyphosate. EFSA Journal 2015; 13(11):4302.
3. Quick Method for the Analysis of Numerous Highly Polar Pesticides in Foods of Plant Origin via LC-MS/MS Involving Simultaneous Extraction with Methanol (QuPPe Method). http://www.crl-pesticides.eu/userfiles/file/EurlSRM/meth_QuPPe-PO_EurlSRM.pdf. Accessed August 2, 2016.
This information has been sourced, reviewed and adapted from materials provided by PerkinElmer Food Safety and Quality. For more information on this source, please visit PerkinElmer Food Safety and Quality.
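As a back-of-the-envelope check of the LOQ values quoted above, they follow from scaling the lowest calibrant concentration (5 ng/mL) by the ratio of the target S/N of 10 to the S/N measured at that level, assuming S/N varies roughly linearly with concentration near the LOQ. This simple scaling is only an illustration and is not necessarily the noise-estimation procedure used by the instrument software.

```python
def loq_from_sn(lowest_calibrant_ng_per_ml, measured_sn, target_sn=10.0):
    """Estimate the LOQ by linearly scaling the S/N measured at the lowest calibrant."""
    return lowest_calibrant_ng_per_ml * target_sn / measured_sn

for analyte, sn in (("GLY", 432.0), ("AMPA", 165.0), ("GLU", 325.0)):
    # Prints approximately 0.12, 0.30 and 0.15 ng/mL, matching the values in the text.
    print(analyte, round(loq_from_sn(5.0, sn), 2))
```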
<urn:uuid:0f85365c-d1e5-47e8-a900-8a831659757d>
CC-MAIN-2021-43
https://www.azom.com/article.aspx?ArticleID=20623
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00590.warc.gz
en
0.902703
2,105
3.671875
4
Nutrition, Part 2 discussed nutrition from a Traditional Chinese Medicine (TCM) viewpoint. The TCM recommended diet is: Whole foods with about 75-85% of the diet as vegetables, whole grains and beans/legumes; 10-15% fruit and nuts, and 5-10% animal-based foods. Animal-based foods in TCM: The TCM recommended diet includes small amounts of animal-based foods. They are not the central part of any meal; instead, they are an occasional accent in meals that are vegetable and whole-grain based. Why so little? Because animal-based foods are rich and heavy, and according to TCM, this makes them likely to promote pathogenic Dampness-formation in the body, contributing to a myriad of diseases. (See Part 2 for an explanation of Dampness.) But animal-based foods are not entirely excluded from the TCM diet, because in small amounts they help build more Qi and Blood in the body. The TCM diet is what I most often recommend to patients. However, for some people and health conditions, I prefer a 100% plant-based (vegan) diet, with no animal-based foods at all, at least for a time. This is because a vegan diet is very cleansing and detoxifying, and it quickly helps to drop high cholesterol levels, assist gallbladder problems, give a much needed break to the liver and kidneys, and help the body conserve pancreatic protein-digesting enzymes, which can greatly enhance the body's ability to fight (break down) cancer cells. Other Considerations Regarding Animal Foods: The Poor Qi Quality of Animal Foods: Up until about 60 years ago, all animal food products were inherently organic, free-range, hormone-free, antibiotic-free, and grass-fed. Because food animals ate their natural diet of grass, meat was rich in omega-3 fats (which help reduce inflammation). They were also leaner and, from a TCM view, their meat had better Qi, because they lived much healthier and happier lives than their modern-day counterparts. In stark contrast, because food animals today are fed an unnatural diet of grain, sugar, soybeans and animal byproducts, the meat, dairy and eggs that are available now are filled with omega-6 fats (which promote inflammation), have a higher percentage of saturated fat, and contain fewer beneficial elements. Many livestock, poultry and egg-laying hens do not have access to fresh air or sunlight. They are also kept in such large numbers, small cages, and close quarters that they lack the ability to stretch their limbs, turn around, or perform natural behaviors. All of these conditions create physical and psychological abnormalities leading to disturbing aberrant behaviors toward themselves and each other. These animals are also unable to move away from their own or each other's excrement, creating hygiene problems. To combat the spread of infection, ranchers use frequent doses of antibiotics on all of their animals, sick or not, which contributes to the development of antibiotic-resistant super-bacteria, and exposes people who consume meat and dairy to these antibiotics and super-bacteria. Because meat and dairy producers make more money by increasing production volume and speed, food animals are treated with various growth hormones. We ingest these with their meat or dairy, and they wreak havoc with our bodies, including our endocrine (hormonal) systems. These animals live very unnatural, unhealthy, and unhappy lives. In my opinion, the Qi coming from these foods cannot be healthy enough to benefit our own Qi, but instead places a burden on our health. What about Organic, Grass-Fed or Free-Range?
While these are certainly better, there are some factors to consider. Hundreds of labels can be found in grocery aisles for "healthier" meat, eggs and dairy. It is difficult to know what they really mean. For example: Several companies have created their own agencies to certify their meat organic, setting and breaking their own standards as they see fit. Even if the label says "USDA Certified Organic" (no antibiotics or growth hormones), it doesn't necessarily mean grass-fed, free-range, or given the environment to perform natural behaviors. Likewise, if the package says "grass-fed," it doesn't necessarily mean organic, free-range, or even that the animal was fed only grass. Many cattle start out on grass pasture for their first 6-12 months before spending the rest of their lives on a feedlot; some companies label this "grass-fed." With the exception of live poultry, the USDA has no regulations on the terms "free-range" or "cage-free," so all egg, beef, pork, and lamb producers can use these labels freely. The only requirement for "free-range" poultry is that it had access to the outdoors for some unspecified amount of time (5 minutes qualifies) each day. As you can see, no label addresses everything, and every label is subject to misinformation or misinterpretation. So, when choosing animal foods, it really is best to find a local, organic farm/ranch that you can actually visit, to learn about their specific animal-rearing practices, so you know for sure what you are getting. Quality is FAR more important than quantity. What about Seafood? Farm-raised seafood is also raised in overcrowded conditions, routinely medicated with antibiotics, and fed unnatural diets that change the balance of beneficial nutrients. In fact, farm-raised salmon are so unhealthy that their flesh is grey, so dye is added to make them appear pink. Even wild-caught seafood is risky, since nearly all fish-supporting waters are now contaminated with mercury, dioxins, and hundreds of other toxins from industrial pollution. If you do choose to eat seafood, then wild-caught, smaller fish are the best choices. Avoid the large species like tuna, swordfish, and shark, as their large size means they have had more time to collect more toxins in their tissues. Smaller fish like anchovies and sardines have lower concentrations of toxic elements. Animal-based foods promote disease: Research shows that eating animal-based foods contributes to many diseases common in Western culture, including heart disease and cancer. Here are just a few examples: In his book The China Study, which involved a 20-year-long look at 6,500 people from 65 counties across China, T. Colin Campbell, PhD, states, "Consuming animal-based protein increases blood cholesterol levels. Saturated fat and dietary cholesterol also raise blood cholesterol, although these nutrients are not as effective at doing this as is animal protein." Also, "In rural China, animal protein intake averages only 7.1 gr/day whereas Americans average a whopping 70 gr/day….Even these small amounts of animal-based food in rural China raised the risk for Western diseases." Dr. Campbell also found that casein, the most abundant protein in cow's milk, is a strong promoter of cancer cells in all stages of cancer development. Dr. Neal Barnard reports on a Japanese study finding that women who follow meat-based diets are eight times more likely to develop breast cancer than women on a plant-based diet.
Harvard studies show that regular meat consumption triples colon cancer risk, while a Cambridge University study links dairy products to an increased risk of ovarian cancer. Studies of the Seventh-day Adventists found that those who avoided meat altogether showed significant reductions in cancer risk as compared to those who ate modest amounts of meat. So, again, keeping your animal-food intake below 10% of your daily caloric intake will help reduce these risks. Animal-based foods are unnecessary in large amounts: "It is the position of the American Dietetic Association that appropriately planned vegetarian diets, including…vegan diets, are healthful, nutritionally adequate and….are appropriate for individuals during all stages of the lifecycle, including pregnancy, lactation, infancy, childhood, and adolescence, and for athletes." Dr. Benjamin Spock, in the latest edition of his world-famous book, Baby and Child Care, advocates a vegetarian diet for children, and no longer recommends dairy products after the age of 2. He says that children who grow up getting nutrition from plant foods rather than meats are less likely to develop weight problems, diabetes, high blood pressure and some forms of cancer. Good sources of amino acids (protein) are green and leafy vegetables (yes, really! Green plants provide protein to animals as muscular as bulls and horses). Protein is also abundant in beans, lentils, and nuts. If you are a bodybuilder or otherwise require more protein, great vegan protein-shake powders made from pea, rice and hemp proteins can be found online and in most health-food stores. Some recommended brands are Life Basics, Plant Fusion, Vega, and Sunwarrior. Rich sources of calcium are found in green and leafy vegetables (such as kale, collard greens, Swiss chard, turnip greens), beans, dried figs, tofu and broccoli. Rich sources of iron include dark green leafy vegetables, beans, lentils, tofu, spinach, Swiss chard and beet greens. Omega-3 fatty acids can be obtained from flax seeds, chia seeds, walnuts, and extracts of algae (the type most used in infant formulas, since it can be cultivated in clean fermentation tanks). Other beneficial fats include avocados, coconuts and nuts/seeds. Lastly, I recommend taking a high-quality multi-vitamin/mineral (whether you are vegan or not). Crop soils have been greatly depleted, so most of our food is much less nutritious than it used to be. A high-quality, plant food-based multivitamin will help ensure that you are not missing anything, including B-12. Recommended brands include New Chapter and Garden of Life. The TCM recommended diet includes 5-10% of dietary caloric intake as animal products: organic, grass-fed, and raised in their natural environments, since these were the only kind of food animals that existed until about 50 years ago, and they provide the highest quality nutrition and Qi for your body. Quality of these products is far more important than quantity. Some patients can make greater health gains, faster, if they adopt a 100% plant-based/vegan diet, at least for a period of time, based on whole foods with lots of vegetables, fruits, whole grains, beans and nuts/seeds. Either way, most people need to add more plant-based meals into their diets and to use animal foods as accents to meals, not the main course. If you would like further guidance on meal ideas, check out the Plant-based Meal Ideas pages on this blog. Also, see the blog post Vegan vs Paleo: Finding the Middle Way.
Dawn Balusik, AP, DOM (excerpts published in Tampa Bay Wellness, June 2011)
<urn:uuid:cf4bab06-e862-4394-8b0e-3038f62121c7>
CC-MAIN-2021-43
https://www.acupuncturebydawn.com/post/nutrition-part-3-animal-foods-in-the-diet
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00390.warc.gz
en
0.95127
2,311
2.609375
3
When you drink Coca-Cola Light every day, this is what happens to your body. Nearly half of Americans drink diet soda every day, according to a Gallup poll (via Fox News). These days, Americans are also more health conscious. With the rise of wellness drinks like celery juice and kombucha, plus a multitude of boutique fitness studios and more health blogs than anyone could hope to follow, it's no wonder people are paying attention to what they put into their bodies. Still, even when you think you are making a healthy choice, that may not be the case. Enter: Coca-Cola Light. You might assume that Coca-Cola Light is better than regular Coca-Cola, especially if you are trying to cut back on regular soda. However, even though it has zero calories, it can still harm your body in many different ways. Thanks to the artificial sweeteners and other ingredients contained in each can, this is what happens to your body when you drink Coca-Cola Light every day.
Drinking Coca-Cola Light every day could lead to developing metabolic syndrome
If you look at the ingredient list on a can of Coca-Cola Light, you might be a little surprised at how many components you can't even pronounce. But even if you have trouble saying it, one ingredient that is always found in Diet Coke is at least one type of artificial sweetener. Diet Coke traditionally contains aspartame, although some versions contain Splenda (sucralose), according to Coca-Cola. After all, these artificial sweeteners are what make diet drinks taste sweet without adding calories. According to Healthline, aspartame and sucralose are two of the most commonly used artificial sweeteners, and while they and other artificial sweeteners have been considered "safe," the Mayo Clinic notes that some people may experience adverse effects from these ingredients. As some studies have shown (via Healthline), diet sodas can increase the risk of developing metabolic syndrome, which in turn increases the risk of developing heart disease. You might think the absence of sugar in a soft drink is a good thing, but that just opens the door for artificial sweeteners to be used instead.
You can actually gain weight if you drink Coca-Cola Light every day
Many people give up their regular sodas and turn to Coca-Cola Light as part of their weight loss efforts. While it can make a difference if you switch from drinking numerous regular sodas a day to just diet, that's not the only piece of the puzzle. In fact, there is increasing evidence that drinking Coca-Cola Light every day can have an unwanted result. A study published in the Yale Journal of Biology and Medicine found that regular consumption of artificial sweeteners actually led to weight gain, not loss. Additionally, a report from the University of Texas at San Antonio concluded that diet sodas play a role in weight gain. Observing participants' waist measurements, it was found that people "who reported occasional use, drinking less than one diet soda a day, had their waist circumferences increased by almost 2 inches," and people "who consumed diet soda every day, or more than once a day, had their waist circumferences increased by more than 3 inches."
You'll crave more sugar when you drink Coca-Cola Light every day
Artificial sweeteners definitely have a bad reputation, but many people still turn to them to help them stop consuming so much sugar. While artificial sweeteners like those found in Coca-Cola Light may not have calories, they can make you crave more sugar.
According to the Yale Journal of Biology and Medicine, "Artificial sweeteners, precisely because they are sweet, promote sugar cravings and sugar dependency." So if you're trying to cut down on sugar, Diet Coke is probably not the way to go. In addition, Frank Lipman, a physician and expert in functional and integrative medicine, wrote for Well+Good that "the taste of sweetness, be it artificial or real sugar, seems to play an important role in increasing appetite" in general. Artificial sweeteners increase your cravings, especially for sweets. As such, drinking just one Coca-Cola Light every day can make you want even more sweets, and for anyone trying to cut back, that's almost the last thing they want to be craving.
You will have an increased risk of diabetes if you drink Coca-Cola Light every day
According to the Centers for Disease Control and Prevention, more than 30 million Americans have diabetes, and even more consider themselves prediabetic. Although common, diabetes can be a debilitating disease. And if you drink Coca-Cola Light every day, you unfortunately have a higher risk of developing the disease during your lifetime. According to Healthline, "Although diet soda has no calories, sugar, or fat, it has been linked to the development of type 2 diabetes." Specifically, a study from the Research Center in Epidemiology and Population Health in France found that drinking artificially sweetened beverages "was associated with an increased risk (of type 2 diabetes)." Coca-Cola Light may be advertised as a healthy alternative to regular soda, but given its link to metabolic disease in those who drink it regularly, it really shouldn't be presented as something healthy.
Drinking Coca-Cola Light every day can affect your gut health
The term "gut health" has become a kind of buzzword that people throw around. Everyone you know is probably concerned about their gut health, but is it really that important? Well, as it turns out, it is. According to WebMD, your gut health affects your whole body, including your mental and physical well-being. With that said, if you drink Coca-Cola Light every day, you're not doing your gut health any favors. As Frank Lipman, a physician and expert in functional and integrative medicine, explained in an article for Well+Good, artificial sweeteners actually "disrupt the microbiome and can kill the good bacteria in our gut." Specifically, a 2014 study by Israeli researchers found that "consumption of commonly used NAS (non-caloric artificial sweetener) formulations drives the development of glucose intolerance through the induction of compositional and functional disturbances in the gut microbiota." In simple terms, diet soda affects gut health and, in doing so, promotes glucose intolerance, as seen in prediabetes and diabetes.
Drinking Coca-Cola Light every day can cause hypertension
Although high blood pressure, or hypertension as it is also known, is common (more than 100 million Americans have it, according to the American Heart Association), that does not mean it is not dangerous. As the Mayo Clinic explained, "High blood pressure is a common condition in which the long-term force of the blood against the walls of the arteries is high enough to eventually cause health problems, such as heart disease." As if that weren't scary enough, drinking a Coca-Cola Light every day can actually contribute to high blood pressure.
A 2016 study from the Department of Food and Nutrition at Kyung Hee University in South Korea found that "high consumption of SSB (sugary drinks) and ASB (artificially sweetened drinks) is associated with an increased risk of hypertension." Additional studies have also found that drinking artificially sweetened beverages like Diet Coke can lead to hypertension.
You can increase your risk of having a stroke if you drink Coca-Cola Light every day
Nearly 140,000 Americans die of a stroke each year, according to the Centers for Disease Control and Prevention. Strokes are a serious health problem, and naturally most people would go to great lengths to help prevent them. However, when you drink Coca-Cola Light every day, you actually increase your risk of having a stroke. As Healthline reported, "Observational studies have linked diet soda with … an increased risk of stroke." But because "there is a lack of research into the possible causes of these results," the link could be "due to pre-existing risk factors such as obesity." However, a study from the Keele Cardiovascular Research Group at Keele University found there was "an association between consumption of sugar-sweetened beverages and ASB (artificially sweetened beverages) and cardiovascular risk." If you enjoy Coca-Cola Light on a regular basis, you might think twice the next time you open one.
When you drink Coca-Cola Light every day, you run the risk of headaches
Even though headaches are a common condition, no one wants to have a headache if they can avoid it. Unfortunately, there are many things that can trigger headaches, such as certain foods and drinks. While everyone is different when it comes to their dietary triggers, you are at risk for a headache when you drink Coca-Cola Light every day. "I have several clients who used to suffer from migraines and pinpointed the cause as their soda consumption," Minnesota-based dietitian Cassie Bjork told Health. In addition to that, the neurologist Orly Avitzur told Business Insider that diet sodas "offer little nutritional benefit, and in some cases diet soda can cause headaches or overeating." One earlier study found that "aspartame can be a major dietary trigger for headache in some people." Since the study was from the late 1980s, more modern evidence is needed to really prove it as a fact. Still, there is plenty of anecdotal evidence that Coca-Cola Light causes headaches, which may encourage many to stay away.
You damage your kidneys when you drink Coca-Cola Light every day
It is no secret that Diet Coke comes with a lot of baggage for your health. But while most of the problems people have with Diet Coke can be ignored, the fact that it can be terrible for the kidneys is too important to overlook. As Today reported, a study at Harvard Medical School found that "diet cola is associated with a two-fold increased risk of kidney failure." Additionally, Kidney.org reported on another study which revealed that "kidney function decreased for two decades in women who took several diet sodas a day, according to the researchers." But that is not all. Another study found that "consumption of diet soda was associated with an increased risk of ESRD (End-Stage Renal Disease)." There is mounting evidence that Coca-Cola Light leads to poor kidney health. When you drink diet sodas every day, you are not doing your body any favors, especially your kidneys. That said, it is probably best to avoid the bubbly drink, since kidney failure is no joke.
If you drink Coca-Cola Light every day during pregnancy, you could go into labor early
There are apparently a million restrictions when it comes to what pregnant women should, and should not, eat and drink. While a soda or coffee here or there is not likely to be harmful, drinking diet soda while pregnant could lead to premature labor. Preterm labor is associated with a number of risks. According to Stanford Children's Health, premature babies can have trouble breathing, kidney problems, and seizures. While it may seem almost unbelievable that a drink can cause premature labor, more than one study has found a link between consuming diet soda and preterm delivery. One, from the Centre for Fetal Programming at the Statens Serum Institut in Denmark, concluded that there was "an association between the intake of artificially sweetened carbonated and non-carbonated soft drinks and an increased risk of preterm delivery." Another, from a department of obstetrics and gynecology at a Swedish institute of clinical sciences, found that "a high intake of AS (artificially sweetened) and SS (sugar sweetened) beverages is associated with an increased risk of preterm delivery." So if you're pregnant, put down that Coca-Cola Light.
Pregnant women who drink Coca-Cola Light every day are at risk of having babies who develop obesity
Childhood obesity, and obesity in general, has the potential to wreak havoc on a person's overall health. According to Stanford Health Care, "Obesity puts you at increased risk for type 2 diabetes, heart disease, high blood pressure, arthritis, sleep apnea, some cancers, and stroke." Unfortunately, drinking Coca-Cola Light during pregnancy raises the chance that the baby will develop obesity during infancy and childhood. According to a study from the Department of Pediatrics and Children's Health at the University of Manitoba in Canada, "Maternal consumption of artificial sweeteners during pregnancy may influence childhood BMI (body mass index)." Another study, published in the International Journal of Epidemiology in 2017, revealed "positive associations between intrauterine exposure to ASBs (artificially sweetened beverages) and size at birth and the risk of overweight / obesity at age 7 years." Although a diet soda may sound tempting when you're pregnant, research shows it's probably not worth it.
Drinking Coca-Cola Light every day increases your cancer risk
One of the most common complaints people have about Diet Coke is that it has the potential to cause cancer. And those concerns are not unfounded. As Ermy Levy, a behavioral science research dietitian at the University of Texas MD Anderson Cancer Center, explained, there is some evidence that "artificial sweeteners can increase the risk of certain cancers," such as cancers of the bladder and urinary tract. "That doesn't mean that a regular soda is better for you," Levy added. "They could be affecting our health or cancer risks in ways we don't yet know about," Levy concluded. Additionally, more than one study has found a connection between regular consumption of diet soda and cancer. A study carried out by a department of clinical and experimental medicine at a hospital medical school in Italy found "a slight correlation between the risk of pancreatic cancer and CSD (carbonated soft drinks)."
Another study, by the Department of Medicine at Brigham and Women's Hospital and Harvard Medical School, found "a detrimental effect of a component of diet soda, such as aspartame, on certain types of cancer." Clearly, diet soda increases your risk factors, and research on that link is just beginning.
There is a link between drinking Coca-Cola Light every day and depression
Depression is a major problem, not only for Americans but for people around the world. Some 322 million people worldwide suffer from depression, according to the Anxiety and Depression Association of America. Unfortunately for those who drink Coca-Cola Light every day, the drink has been associated with the mood disorder. As Lisa Young, an internationally recognized nutritionist, wrote in an article for HuffPost, studies have "found an association" between the two: diet soda drinkers were more likely to be diagnosed with depression. That does not mean that diet soft drinks cause depression, she explained, but there is a correlation. According to a 2014 study published in PLOS ONE, "Frequent consumption of sweetened beverages, especially diet drinks, may increase the risk of depression among older adults, while drinking coffee may reduce the risk." With that said, you may want to consider swapping your daily Coca-Cola Light for a cup of coffee.
Drinking Coca-Cola Light every day can literally damage your cells
One of the most alarming and surprising possible side effects of drinking Coca-Cola Light every day is that it actually has the ability to damage your cells. According to Today, most diet drinks contain a certain ingredient that regular soft drinks don't usually have: "mold inhibitors." Per Today, "They are called sodium benzoate or potassium benzoate." According to Coca-Cola, Coca-Cola Light, Coca-Cola Zero Sugar and various other products contain such "preservatives." However, they are not great for you. "These chemicals have the ability to cause severe DNA damage to the mitochondria to the point that they completely inactivate it, completely eliminate it," Peter Piper (no, not that Peter Piper), a professor of molecular biology and biotechnology at the University of Sheffield in the UK, told a newspaper some time ago (via Today). Although some companies took the initiative to stop using sodium benzoate, they simply transitioned to other mold inhibitors, which still have the ability to damage DNA.
<urn:uuid:794e38d4-1cc0-4d07-b5ad-95bcf213be22>
CC-MAIN-2021-43
https://viralpanda.net/when-you-drink-coca-cola-light-every-day-this-is-what-happens-to-your-body/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00589.warc.gz
en
0.958663
3,308
2.5625
3
Dante Alighieri's name itself attests to the uniqueness of a spirituality that, as Vico said, "seems to rise suddenly, towering over the land of Italy" (Croce, p. 256). Dante was a divine poet, but apart from his divinity there is a flow of words that sinks into our soul, carrying the vast panorama of images of our demeaning life on this earth. Through the Inferno of the Divine Comedy he showed us the reality of our lives: by placing living characters in the different circles of Hell (the Inferno), he offered a political commentary on events he himself had lived through. The structure of Hell is based on the number scheme 3, 7, 9, 10, and the Inferno is divided into 34 cantos. In the first circle, Limbo, Dante finds unbaptized souls and virtuous pagans who committed no sin but did not accept Christ either. Those who were ruled by lust on earth are punished in the second circle of Hell; gluttons are confined in the third circle, guarded by Cerberus; the materialistic are assigned to the fourth circle; in the fifth circle Dante finds the Black Guelph Filippo Argenti; the sixth circle punishes heretics; the severity of torture and punishment appears in the seventh circle, composed of three rings, through which the harsh and violent sinners must pass; and the last two circles of Hell punish the sins of fraudulence and treachery. But before we move on to the political implications and the influence Dante exerted on later civilizations, it is important to dwell on the political life through which Dante himself passed and which led to his exile.
Dante Alighieri was born into a noble Florentine family. His mother, Bella degli Abati, died when Dante was only seven years old, and soon afterwards, in the 1280s, his father also died, so Brunetto Latini, a man of letters and a politician, looked after him. At the beginning of 1300, Florence's political situation was catastrophic. The Guelph party, which was in power, split into two factions, the Bianchi and the Neri, led by Vieri de' Cerchi and Corso Donati respectively. The Bianchi were a democratic party, whereas the Neri were an aristocratic party that favored the Pope. On 1 May there was a bloody clash between the two parties, and the situation spiraled out of control. On 7 May, Dante was sent on a mission to San Gimignano, and immediately afterwards he was elected one of the six priors; along with his colleagues he banished the leaders of both factions and opposed the papal legate, Cardinal Matteo d'Acquasparta. On 1 November, Charles of Valois entered Florence with his troops and brought the Neri to power. Corso Donati and his friends attained victory and took full revenge on their opponents; in the whole drama, Dante became the first victim. He was charged with hostility to the Church and with corrupt practices, and was exiled on 27 January 1302. During this period of exile he withdrew from politics, and what he gave to the world was an exclusive piece of literary art that became a mirror of the immense panorama of vanity and chaos marking the political circle of which he himself was a victim. Thus Dante, with his poetic deftness, created an imaginative structure of Hell in which souls' sins are revealed and punished.
This simple theme brings forth the basic concept of God's justice, yet from beneath all the punishments the souls undergo emerges a political commentary on fourteenth-century Florence. Virgil takes Dante on the journey through the nine circles of Hell, and in each circle the sinners are assigned their respective places according to the nature of their crimes. Throughout the journey through Hell, Virgil is Dante's guide, sent by Beatrice to save Dante and to give him self-realization and hope. Regarding Francesca and Paolo in Canto V, the Hollanders remark that "Sympathy for the damned, in the Inferno, is nearly always and nearly certainly the sign of a wavering moral disposition" (112). Seeing the condition of Francesca and Paolo arouses pity, but it is Virgil who all the while explains the significance of their plight. Virgil is in his element and is imbued with vast knowledge of the journey. Virgil knows how to get past the watchdog by throwing dirt in Cerberus' face; he understands the nature of the souls; and he has the courage to summon Geryon to carry them onward. Throughout the canticle he never leaves Dante and faithfully accomplishes the task assigned to him. For Dante, Virgil is a true representative of the world of politics, as he knew his duties not only toward himself but toward others, and it is through Virgil that Dante is enlightened. In Dante's eyes, Virgil was a Roman poet, truly dignified, whose legitimacy had endured for centuries. Virgil was both a mentor and a poet, and Dante feels inferior to his poetic genius, to the extent that when Dante first meets Virgil he exclaims, "O light and honor of all other poets, may my long study and the intense love that made me search your volume serve me now. You are my master and my author" (Dante and Virgil, para. 2, 3). Dante makes his assertions in a number of ways. In the first place, he condemned the politicians with whom he did not agree by placing them in the various circles of Hell. Secondly, in Canto VI, through the mouth of Ciacco, who is suffering the punishment for gluttony, he had already prophesied the political turmoil in Florence. "This prophecy relates to the dissensions and violence of the parties of the Whites and the Blacks by which Florence was rent. The 'savage party' was that of the Whites, who were known as Ghibellines. The 'one who even now is tacking' was the Pope, Boniface VIII, who was playing fast and loose with both. Who the 'two just men' were is unknown" (Dante, Canto VI). Dante consistently believed in the separation of Church and State, with each functioning differently but enjoying equal power. With his many references to Rome, Dante conveys both his spiritual and his political ambitions. His insistence on treating Church and State equally is aptly emphasized at the very end, when "Lucifer chews both on Judas, the betrayer of Christ, the ultimate spiritual leader, and on Cassius and Brutus, the betrayers of Caesar, the ultimate political leader." Treachery of any sort, whether against religion or government, must spend eternity in the final circle of the Inferno, and Dante gives equal weight to the harsh punishment of corrupt priests and corrupt politicians. Some of these ideas are similar to those of his fellow Tuscan, St. Francis, and stand in contrast to those of Pope Boniface VIII (Wessels, Boniface VIII: The Antithesis of Franciscan Values, para. 2).
As a political voice, Dante presents Pope Nicholas III, lying upside down in the third pit, who mistakes Dante for Boniface arrived several years before his time (Inf. 19.52-7). Predicting the future, Nicholas tells Dante that he clearly foresees the damnation for simony not only of Boniface VIII but also of Pope Clement V; of his own life, he says that his name was Giovanni Gaetoni before he was elected pope in 1277. Nicholas extended papal political control by adding parts of the Romagna, as far as Bologna and Ferrara, and he forged a compromise within the Franciscan movement between the moderates and the radical spiritualists. Though he was known for his high moral standards, he was also guilty of nepotism, favoring his "cubs" (relatives) for the topmost positions. He made several of his relatives cardinals and gave others high posts in the papal state. He died in 1280 and was buried in St. Peter's in Rome. Dante gave the name Evil Claws to the devils of the fifth ditch, who are responsible for bringing in corrupt political officials and employees, and Dante pointedly gave these devils names like "Bad Dog," "Sneering Dragon," and "Curly Beard," corresponding to the actual names of civic leaders in Florence and the surrounding towns; in the words of Dante, "with saints in church, with guzzlers in the tavern!" (Inf. 22.14-15). Caiaphas, the high priest of Jerusalem, is placed in the sixth pit with an added contrapasso: because it was he who urged the council of chief priests to crucify Jesus Christ, he and his fellow counselors are themselves crucified on the floor of the pit (Inf. 23.109-20). There is also Simon Magus, famous for his magical powers, who offered the apostles Peter and John a bribe to confer the Holy Spirit on him; Peter denounced this, and in the magical contest that followed, when Simon took flight with the help of demons, Peter made the sign of the cross and Simon fell to the ground. The public's biggest enemy, according to Dante, was Boniface VIII. It was because of Boniface VIII that Dante was sentenced to exile. Boniface VIII was elected pope soon after the abdication of Pope Celestine V in 1294. Throughout his papacy, Boniface's intention was to consolidate and expand the powers of the Church, in sharp contrast to the views of Dante, who wanted the pope's spiritual authority and the emperor's secular authority to hold equal power. Boniface's political designs became a nightmare for Dante when the pope, under the false pretense of a peaceful settlement, sent Charles of Valois, a French prince, to Florence; thanks to Charles's intervention, the Black Guelphs were able to overthrow the ruling White Guelphs, whose leaders, including Dante, were sentenced to exile. And here, in the journey through Hell set in 1300, Dante took his revenge by condemning the pope, who was still alive at the time. Boniface died in 1303. The Inferno conveys to us that our actions, whether good or bad, affect not only ourselves but the whole of society. The people Dante put into his verses were all of the elite class, people of position and power, and the anarchy of Hell is the result of their failure to live by the ideals of the Bible. Politically, it shows us the path to a new world in which, at the end, there is enlightenment.
<urn:uuid:eb11d4d2-fc27-44cb-8ae8-1fe9008207c4>
CC-MAIN-2021-43
https://studytiger.com/free-essay/essay-dantes-inferno-a-political-commentary/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00109.warc.gz
en
0.976866
2,242
2.828125
3
How An Artist, A Toy-Maker, A College Student Use Their Skills To Fight The Pandemic Last month, we asked our audience: What are some of the inventive ways that people are addressing COVID-19 challenges in their community? Dozens of NPR readers wrote in with nominees. Many are people who have found ways to put their special skills and talents to good use. A former toy-maker, laid off from his job, is putting on puppet shows in his living room window for passersby. An artist set up a socially distant art gallery in her backyard. Two siblings are helping local businesses provide low-cost meals to immigrant families in need. Here are six profiles of volunteers who are making a difference. Virtual classes fight free time and boredom In the weeks after March 25, when India announced a nationwide lockdown to battle the coronavirus pandemic, Perpetual Nazareth, an English teacher at Don Bosco High School in Mumbai, India, was flooded with calls from teenage students who were bored and listless. Although the school had pivoted to online classes, "I could tell that it was a tough time for them and for parents, too," she says. When Nazareth wondered how she could help, Joshua Salins, a former student, offered a suggestion: Why not teach them new hobbies? Salins had already established a small business called The Hobby Tribe in October 2019 to make it cheaper and easier for people to pursue the kinds of activities that he himself enjoys — singing, making art and playing music. "We would rent a place, hire teachers and gather people who wanted to learn something new," he says. At first, the group offered six courses: dance, guitar, the keyboard, drums, tailoring and drawing Mandala art (an Indian art form that employs circular patterns and shapes) — at budget rates — ranging from a flat fee of $10-$20 for eight sessions. After the lockdown, The Hobby Tribe ground to a halt, says the 21-year-old founder. Students couldn't attend in-person lessons. So the organization had to reinvent itself. Salins approached his alma mater to see if students would be interested in virtual classes. The response was heartening, says Nazareth, who is not affiliated with the program. Over 100 students signed up in a single day. After shifting online, The Hobby Tribe hired more teachers and expanded to offer 40 courses, including coding, photography, beauty and makeup tips, cooking and trivia games at an even lower price — $2 for eight sessions. Cheryl Moniz's 15-year-old daughter Tamara is taking photography lessons from their home in Mumbai. "Kids need an outlet like this. It gives them a chance to relax, keeps them connected and productive," she says. As word spread, it sparked interest from all over the world, Salins says. Five hundred students are now enrolled. Many are Indians based in the U.S., U.K., Nigeria, the United Arab Emirates, Australia, Cyprus, Hong Kong, Singapore, Oman and Qatar. People of all ages are welcome to join. "In these tough times, hobbies do more than build character. It's an interesting way to interact with like-minded people," Nazareth says. "They can mold you into the person you want to become." Kamala Thiagarajan is a freelance journalist based in Madurai, India, who has written for The International New York Times, BBC Travel and Forbes India. You can follow her @kamal_t. DIY dispensers provide free face masks to all For residents of 15-mile-long Lopez Island, which sits off the coast of Washington, finding a face mask is pretty easy. 
Thanks to Chom Greacen, 29, mask dispensers are scattered throughout the island. People can pull a free cloth mask from a dispenser whenever they need one. Greacen normally conducts energy research in Thailand, but due to travel restrictions, she hasn't been able to continue her work. So she's been devoting her spare time to getting masks to the more than 2,000 islanders. "Back in March and April, masks [of all types] were difficult to come by or they were expensive," Greacen says. While the islanders mobilized to get N95 masks donated to their EMTs and health care workers, she realized that everyone else would need a mask, too. In March, Greacen came up with her DIY project, Grab-and-Go Masks. "I thought of the little sanitary product vending machines that are in public bathrooms," Greacen says. She made a prototype of a hanging rectangular box using leftover campaign signs from her husband's run for the school board. The durable, plastic material would protect the masks within. The masks are also DIY, made using blue disposable shop towels made of polypropylene — they're heavy duty paper towels typically used for cleaning up grease, oils and spills in automotive repair shops. The recommendation for the material came from Dr. Peter Tsai, the inventor of the N95 respirator. To date, 40 volunteers have helped make some 6,000 masks using kits — complete with directions and supplies — provided by Greacen. Volunteers also help restock the dispensers. Greacen has not spent a cent on the project. Shop towels for the masks are donated by the Islands Marine Center (IMC), a local marina and boat dealership, which has two of Greacen's dispensers on its property. "It was great that Chom sprung into action [at the start of the pandemic] because there was no way to get masks whatsoever," Tim Slattery, general manager of the IMC, wrote in an email to NPR. "We work in an essential business that was still up and running, so having masks available was a must." Today, dispensers can be found all over the island, indoors and outdoors. They're in high-traffic areas such as grocery stores, a bakery, the fudge shop, the ferry dock and the farmer's market. Some hang on the front doors of businesses for customers to grab as they enter. Greacen plans to keep the project going as long as masks are needed. Lopez Island has only had three cases of the coronavirus to date, according to the San Juan County Health Department. Dr. Robert Wilson, the island physician who treated and diagnosed the three cases, says Greacen's promotion of mask usage and making them available to the public has been great. "[Lopez Island] is your typical small town," Wilson says. "People here are more connected and more likely to get involved. They're interested in keeping each other safe." A socially distant gallery brings art to the neighborhood In early April, Shawana Brooks, an artist and a curator in Jacksonville, Fla., had a big idea: What if she started an art gallery in her backyard? Because of the pandemic, her husband Roosevelt Watson III, also an artist, couldn't show his exhibit the Way Maker Series to the public. The eight-piece mixed-media project honors historical figures from Jacksonville who have fought for the rights and freedoms of the Black community, such as William Stetson Kennedy, who infiltrated and investigated the Ku Klux Klan in the 1940s, and James Weldon Johnson, who brought attention to racism and lynching as a member of the NAACP. 
So Brooks convinced Watson to put his works outside their home and share it with the neighborhood. She created a sign encouraging others to enjoy the art while practicing social distancing. And that's how 6 Ft. Away Gallery was born. The gallery, which sits on a half-acre of land on their property, is free and open daily to everyone — rain or shine. As in any gallery, the art is for sale. Hundreds of people have already visited, Brooks says, and they now host backyard discussions where Watson talks about his artwork. Watson says the gallery has given him an opportunity to share his art at a time when there aren't many opportunities for artists. Brooks and Watson are planning to feature more artists in their backyard in the future. But for now, they are raising money to help local Black artists struggling to find work. The project, called Color Jax Blue, uses those funds to pay artists to create large-scale murals that encourage the Black community to vote. Karen Barnes-Rivera, a former colleague of Brooks and a local business leader who nominated 6 Ft. Away Gallery for this piece, says the pop-up gallery and Color Jax Blue have offered hope to the Jacksonville community during the pandemic. "These projects remind us that creativity, art, vibrancy and a sense of community can create resilience," Barnes-Rivera wrote in an email to NPR. Former bee-suit makers sew masks for farmers Susana Gómez Luz normally sews beekeeping overalls and veils for Maya Ixil, a cooperative of over 200 small-scale farmers that harvests, sells and exports coffee and honey in the remote village of Santa Avelina, Guatemala. But during the pandemic, the 23-year-old switched gears to learn how to make face masks, an idea spearheaded by the co-op, which paid for materials and her labor. "I downloaded an image of the design and practiced it five times before getting started," she says. "It was easy." With the help of two colleagues, Gómez Luz manufactured reusable face masks, made of finely meshed cloth with extra lining inside. Since May, they have sewn more than 200 masks and donated them to most of the indigenous farmers and beekeepers in the co-op. Wearing a mask to prevent the spread of the coronavirus is mandatory in Guatemala. But a tight-fighting, multi-layered mask goes for about $2.50. With the cutback in hours due to lockdown restrictions, some workers make as little as $5 a day. Although lockdown restrictions are now easing, farmers have only been able to work half days since the national quarantine was imposed on March 17, due to a 4 p.m. curfew that was lifted at the end of July. And while their wages have halved, the price of basic food basket items like eggs has risen by up to 50%, says co-op manager Miguel Ostuma. The donated masks have been a huge help, says Domingo de la Cruz Toma, a 52-year-old beekeeper with the co-op. "It's been really beneficial." Marcela Pino, co-director of the U.S. nonprofit Food 4 Farmers — which works with coffee farming communities like the Maya Ixil — says this resourcefulness is characteristic of the vibrant co-op. "They work so hard ... and they never lose spirit," says Pino. Sophie Foggin is a journalist based in Medellin, Colombia, covering politics, human rights, history and justice in Latin America. Local restaurants offer cheap comfort food to families in need Siblings Esther Chong, 31, and Sam Chong, 34, knew they wanted to help families in need in Palisades Park, N.J., during the pandemic. 
The town, which has a large upper-middle class population, is home to the largest percentage, at 64%, of immigrants of any municipality in New Jersey. Many are of Asian and Latino descent. And many are undocumented and haven't had access to unemployment benefits and stimulus checks during the crisis. So on April 6, the pair — who sell mobile phone accessories online as their day jobs — started Our Community Dinner Table (CDTable) to provide food to families in need. Using funds donated by the community, the Chongs buy pre-packaged dinners — at an average of about $6 each — from struggling locally run restaurants, helping to provide them with some income. Then, working with CDTable volunteers, they distribute the meals to families for free in the public library's underground parking lot, Monday through Friday. Recipients queue on spray-painted lines, 6 feet apart for social distancing, to choose their meal for the day: usually Italian, Korean or Latin American. "Every ethnicity's idea of comfort food is different," says Esther. "In America, a lot of people think of chicken noodle soup. For Asian communities, maybe it's more like rice porridge. We wanted this to be a source of comfort as well." So far, CDTable has served nearly 16,000 meals — 300 a day at its peak in June — and raised more than $90,000 through grants and fundraising. Most of the funds go toward buying more meals as well as supplies like bags for the meal and personal protective equipment for volunteers. CDTable now partners with about 10 restaurants and has 10 to 15 volunteers who show up on a regular basis, including Mayor Chris Chung, who has helped distribute meals almost every day. "The impact [of CDTable] has been enormous," says Chung, for both recipients and restaurants. James Chang, assistant manager of Jin Go Gae, a longstanding Korean catering company, says their business took a "big hit" when large gatherings ceased because of the pandemic. Providing discounted meals to CDTable has been a consistent source of income as well as a way to share their food with their community. "It gives us real joy," says Chang. Joanne Lu is a freelance journalist who covers global poverty and inequity. Her work has appeared in Humanosphere, The Guardian, Global Washington and War is Boring. Follow her on Twitter: @joannelu A window puppet show to entertain passersby After being laid off from his job making toys for animals at Brookfield Zoo in Chicago in early April, Matthew Owens was looking for a way to pass the time at home in the pandemic. So he decided to revive his longtime love of making puppets. A few weeks later, he started a project called the Lockdown Puppet Theater. Every Saturday afternoon since then, he has put on free 30-minute puppet shows for passersby from his second-story living room window. The circus-themed show features characters such as the Tattooed Man, clowns and a high diver whom Owens drops from his window into a glass of water that his wife places on the sidewalk before the show. The audience favorite is a toad puppet named Yoshi who lip syncs to a 1950s Japanese yodeler. Owens now has over 50 different puppets, each of which he makes by hand. Owens does not take donations and pays for the materials with his own funds. "I just want people to be happy and to have something to smile about," says Owens. The show has become a local hit. Today, anywhere from 20 to 50 people, adults and children alike, line up to watch the shows. 
Since July, the state of Illinois has allowed public gatherings of 50 or fewer people; previously this was restricted to 10 or fewer. For his part, Owens does remind the audience to maintain social distancing and wear masks throughout the show — and, for the most part, he says, people abide. Emily Landon, an infectious disease specialist at the University of Chicago, says audience members need to be responsible and take precautions for their safety. They should disperse, for example, if the crowd becomes too large. Landon adds that Owens could end or pause the show if the audience is not taking those safety precautions. "In my mind, the sort of ethical responsibility would be to post a sign saying, 'Please stay 6 feet apart, please wear face coverings,' and if more than 50 people gather, then I have to stop," she says. Hannah Long, a local Chicago resident who has seen the show twice with her two young kids, says the show was a reminder to her family that there are still "good and lighthearted things in the world" — and also restored a sense of community missing since the pandemic began, she adds. "Some people say the show is the highlight of their week," Owens says. "Despite the fact that they are wearing masks, I am pretty confident the audience is smiling underneath." Jessica Craig is an intern on NPR's science desk. Thank you to everyone who nominated a problem-solver in your community. We enjoyed reading through them! Copyright 2021 NPR. To see more, visit https://www.npr.org.
<urn:uuid:465d8300-f1cf-4731-8686-6244412b5f18>
CC-MAIN-2021-43
https://www.kazu.org/2020-08-12/how-an-artist-a-toy-maker-a-college-student-use-their-skills-to-fight-the-pandemic
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00390.warc.gz
en
0.97586
3,417
2.53125
3
For language buffs, COVID-19 is a potential sci-fi plot. Think: Millions of families go inside for months—what will they all sound like when they come out? After all, Latin became French when Latin speakers in France spoke more with one another than with speakers elsewhere for long enough that the step-by-step morphings in France had created a different tongue from those in Spain or Italy. If people are shut up in their houses for months on end, won’t each unit start developing its own slang, its own vowel colorings, and more? As neat-o as it is to imagine, American English will not be separating into different dialects due to people interacting less. Spatial distance isn’t the only kind. Communications technology allows round-the-clock verbal interactions with legions of other people regardless. Many American adults are spending almost as much time Zooming and FaceTiming as we were interacting with people live before late March, and if and when we go out in the open, we will use language much as we always have. But the pandemic will still change language, broadly construed—just not among adults. One effect that this crisis may well have on American language is to bolster the longevity of its diversity. In this country, currently about one in four children live in homes with a language other than English as the main language. Yet, the sad truth is that these languages tend not to make it far past that home unless there are massive communities of people living in that language, which is true of just a few such as Spanish and Mandarin, or in more isolated communities, Yiddish and German. Often, children in bilingual homes learn their parents’ and grandparents’ language to a functional but abbreviated degree. They can converse fluently on a basic level, but never master the level of language required to discuss complex topics and miss many of the pickier aspects of the language’s grammar. Linguists call this “heritage language.” People who speak a language only at this level seldom pass it on to their children, even if they happen to marry someone who also speaks the language on the same level. Extended family members are often dismayed to see that their kids only reach this level in their home language, if even that. Spanish and Chinese speakers have it somewhat easier because of how much media is available in their languages. Also, they can live in neighborhoods where a critical mass of people speak the language, signs are written in it for blocks on end, etc. For speakers of Polish, Hebrew, or Tagalog, however, this is much less likely. The truth is that if kids raised in America are going to grow up speaking home languages like those beyond the “heritage” level, in most cases they will have to spend holidays and every summer in the country where that language is spoken, unless they are to lead lives of unusual isolation within the larger society, which preserves languages like Yiddish (hardly dying as it is often claimed, given that about 150,000 people are using it at home in the United States as I write) and Pennsylvania Dutch. Note, however, that conditions under the virus have created something ominously close to just that kind of isolation for the time being. Children whose Bengali or Danish was slipping have now been spending infinitely more time with parents (and especially in immigrant communities, grandparents) and have been able to use the home language all day every day for the first time since toddlerhood. 
I have heard from many parents who say that they are pleased to see their children’s skills in the home language explode or at least improve. A summer immersed in a language can do wonders, as veterans of Middlebury College’s famous language-learning program can attest. The lockdown is clearly going to amount to the equivalent of about two summers, and there are mini-Middleburys happening in millions of houses worldwide. Many will be less sanguine about another possible legacy of the virus. If this kind of lockdown becomes necessary in waves until there is a vaccine, and especially if other pandemics arrive in the near-future and mean again separating children from formal schooling for months at a time, the ever-increasing oralization of American language use will be entrenched even more deeply than before. The data are in: School via screen doesn’t work well. Children of highly educated parents with book-lined homes can weather it; most of their education takes place passively anyway. However, that is not most children in the United States, despite how disproportionately people of that class are represented among those who write about kids’ fate during this crisis. I salute the teachers who have been suddenly saddled with the responsibility to transfer their teaching onto the internet. My children’s teachers have done this better than I ever could (or have, as a teacher of college students). However, for most kids, the idea that online teaching is a less-than-ideal but workable substitute is a fiction. Exercises presented on a screen, little buttons to push, “writing” on a keyboard rather than with the hand at an early age, no one directing students from one activity to another in person, no questions addressed directly by a living person—all of this engages most young people too little to instruct them in any real way. Online teaching as a norm is a crisis. Or at least, a crisis in inculcating the formal level of language, which is considered one of school’s main functions. At home one learns to talk; at school is where one learns to speak, especially those from less educated homes. At home one writes stuff down here and there; at school one learns to compose the kind of text that presents you to the world as a serious person. Humans are genetically programmed to talk, and ordinary talk is complex and nuanced indeed. Yet modern civilization exerts a requirement that citizens master a secondary layer of communication, the formal one. And this level is only partly about big words and where to put a comma. In forcing kids to not only write for real but to read text, school is also where one acquires familiarity with the extended argument, making a case, and even seeing the coherence in views you find unfamiliar. The goopy little exercises making kids write about what they did last summer and answer questions about pollination can seem trivial—until you imagine a kid growing up not doing such things, and only talking. It’s a different way of being in the world—and an America where online learning becomes a regular fate for all of America’s kids is going to leave those kids in essentially that kind of world. If we are seeing the beginning of what will be referred to in 10 or 15 years as a “corona generation” of kids hobbled by months-long stretches of online school, then I take the risk of predicting that—along with pedants angry that people aren’t calling it a “COVID generation”—we will see the National Assessment of Educational Progress scores go down. 
It will first become evident with the round of fourth-graders tested next year. Even if online school accounts for, say, only half of the school year, it will still have palpable effects: Students from less bookish homes have long been known to exhibit a "summer slide," losing some of what they retained in June by the time they get back to school in the fall. Today's corona kids—COVID kids, sorry—will already experience a double summer slide. At best, the hiatus will be half a year. This will leave their abilities in the marvels of casual speech unaffected. However, it will put a major dent in their capacities in the artifice of formal expression. Already we have seen a transition from an era of long emails as common coin in the 1990s to brief texts as the norm starting in about 2005, such that even many people who were comfortable with long emails then today prefer the brevity of texts, while people under about 25 often find not just email but Facebook too wordy. The terseness of texting and Instagram rules now: As always, formal language is a stunt, in its way, not a natural condition, and best learned starting young. When society does not cultivate formal language as ordinary, normal humans only rarely seek it out voluntarily. We are wired to just talk, and, with texting technology, to write as much like we talk as we can. This will be a generation embracing, even more than the ones before them, the picture over the sentence, the short over the extended form, with the sensibilities of Instagram, TikTok and Quibi dominant. Within this will be a great deal of creativity, vibrancy, and even poetry, make no mistake. There will be a healthy vernacular flavor to much of this, in line with the browning of the culture since the 1990s. This will be, for example, the generation that conclusively eliminates the increasingly square tradition of the handshake, in favor not of coy formalities like bowing, but of assorted elbow and chest bumps and other strategies with a "street" flavor. However, the current virus is distracting legions of those kids from an aspect of language that is a gateway to stirring texts and suasional self-expression, and if we must face similarly disruptive pandemics in sequence, this serial distraction from true schooling will risk rewiring these children's minds permanently. Beyond the possibility that kids from homes with a second language will often find themselves conversing more fluently with their grandparents, the virus will not leave Americans talking differently. Rather, COVID-19 threatens to leave a cohort of children with a more oral and pictorial orientation towards communication than they would have had if school had not been pulled out from under them. Theirs will be a linguistic competence often dynamic and creative. But they will be missing the benefits of the more artificial, yet useful, aspects of language that, in societies with writing, most students experience mainly in the classroom.
RACE RELATIONS TODAY
Sunday, February 23, 2014
LBJ'S "GREAT SOCIETY" 50 YEARS ON: A LOOK AT THE STATE OF RACE RELATIONS TODAY
ONE STEP FORWARD, TWO STEPS BACK
LBJ meeting with MLK in the White House in 1964 (photo courtesy Wikimedia Commons)
(Sunday February 23, 2014 NYC) Race in America remains as thorny and divisive an issue today as it has ever been. In some respects it appears that there has been a sometimes subtle, sometimes overt retrograde backslide. It is difficult to argue that we live in anything approaching a "post-racial America", a garbage term in and of itself. Issues of race and inequality have paradoxically become more pronounced since the ascendance of the former freshman Senator from Illinois, Barack Obama, who has twice been elected as our country's first African American President. Many naively believed that the presence of an African American in the White House would automatically douse the insidious burning embers of our most racist past. They could not have been more wrong. If anything, in many parts of the United States we live behind a contrived façade of racial harmony; Democrats, Progressives, Liberals, as well as earnest people of good conscience, do believe that our country has turned the corner when it comes to race. Look, they say, we have elected an African American President. They are proud of this feat and, once President Obama took the Oath of Office for the first time in January 2009, they were content to walk back into the comfortable confines of their own lives, confidently self-righteous that they had done their part to mend America, to move it forward, and to relegate the racial divide and our untidy history of slavery, segregation, bias, bigotry and disparity to history. But that has clearly not been the case. Actually, the exact opposite has occurred. One need only listen to or read in the news the political rhetoric in Washington, DC, and in State Houses and Legislatures across the country to see the depth of the resistance to moving towards a more equitable, welcoming society when it comes to African Americans and Latinos. Perhaps we should have expected this to some degree; maybe we could have been more alert to what President Obama would represent to different people. Aside from the aesthetics of an African American First Family in the White House and the fact that children of color could see a President and his family who look more like them, the Obama Presidency has torn open barely healed wounds and inflicted some new ones, at least in the eyes of those who had never imagined a "Black" President in their lifetimes.
THE DREAM OF THE "GREAT SOCIETY"
This is the 50th anniversary of the body of law and legislation, policy and initiatives crafted by our 36th President, Lyndon Baines Johnson (LBJ). An old-time Democrat from Texas, Johnson found himself suddenly thrust into the Presidency from his post as Vice President after President John F. Kennedy (JFK) was assassinated in November 1963. A master of the Senate, LBJ inherited a Presidency that was on the cusp of one of the most tumultuous periods in American history.
Fairly or not, LBJ and all his accomplishments are often lost in the shadows cast by the myth of what JFK might have done had he lived and by the war in Viet Nam. The nation was soon to be torn asunder by issues of race and rights, war and conscience that strained the mortar of virtually every institution of government and law in the land. Anyone born after the late 1950's likely has no personal memories of what was transpiring in the world. Those of us of a certain age might recall watching the news broadcasts and the grainy black and white images made possible by the old cathode ray tubes. We saw our fellow Americans who happened to be "Negro" being beaten and assaulted with powerful water hoses as the police and their dogs set upon them. We saw brutal, bloody clashes in places like Alabama, Mississippi and other locales in the Deep South. Just 51 years ago systemic segregation was the law in a large swathe of America. There were public facilities designated for "Whites" and "Coloreds", and in the most racist parts of the South that distinction was unimpeachable and often enforced lethally. There were places where lynching was still a means of "punishment" and a spectator sport, often conducted in the town square in a carnival-like atmosphere of deep-seated hatred that may be hard for some to imagine today. Yes, these barbaric tactics were employed across Dixie into the 1960's. This was the toxic state of affairs 100 years after the conclusion of the "War between the States", as they say down South, known more commonly as our Civil War. As an expanding, organized Civil Rights movement took shape across the country and the horrors of Jim Crow's South were broadcast to the wider nation and the world, it became increasingly obvious that without federal protections afforded by new federal legislation, the notion of "Civil Rights" would remain an unachievable goal. African Americans did not even have the right to vote; they were excluded from participation in the most basic aspects of life in society. Into the increasingly violent and widening breach LBJ bravely waded and, convinced of the responsibility to end systemic, endemic segregation, managed after mighty efforts to pass the Civil Rights Act and the Voting Rights Act, the first of what would become a hefty portfolio of law designed to alter the social fabric and some of the other vexing realities of poverty and inequality for people of every race. While LBJ exerted a Herculean effort to pass and enact his Great Society programs, he found himself having to spend more and more time addressing our growing involvement in Viet Nam. Though JFK had initially sent American "advisors" and a limited number of troops to Viet Nam, it would become LBJ's war and come to dominate the national debate and anti-war movement for the ensuing five years of his Presidency.
THE NOT SO GREAT SOCIETY
The body of legislation and initiatives LBJ had designated as the "Great Society" came to be seen by many as a failed experiment at "social engineering". Well intended but wildly ill-conceived, certain aspects of "Urban Renewal", for example, only exacerbated the very same conditions LBJ had hoped to correct. Housing "projects", those congested complexes of concrete and brick that to those of us living in New York City became known simply as "The Projects", created conditions that only served to perpetuate some of the social pathologies they had been designed to address.
There were no commercial zones in the Projects, which meant that the residents living there had to walk long distances to purchase groceries and goods; their children had to walk further to bus and subway stops; and the stark landscape of these "vertical" neighborhoods bred an environment where the residents felt ostracized, certainly set apart and separate from the surrounding neighborhoods in very high-density housing. Significant components of LBJ's Great Society were measures directed at alleviating the underlying causes of economic disparity and all that comes with it. Johnson boldly declared his "War on Poverty", and it turned out to be as big a quagmire as the ever-expanding mess in Viet Nam. "Public Assistance" programs backfired horribly and resulted in a pattern of publicly subsidized poverty until President Bill Clinton made the effort to "end welfare as we know it". Several other components of the Great Society remain intact today and have proven to be effective and essential for many Americans over the last 50 years. Medicare and Medicaid have provided health care insurance for older Americans as well as the indigent. They rank among the greatest successes of LBJ's tenure. Some of his other signature economic and employment measures have also stood the test of time; looking back over these past 50 years, they stand out for their lasting efficacy and positive impact.
THE GREAT BACKSLIDE
Putting partisan politics aside and looking away from the perpetual gridlock and ineptitude of Congress, the topic of race in America seemingly still grows like amoral weeds in the fields and ditches and sprouts through cracks in the sidewalks and avenues of our great country. All our Congress is good for is fomenting and tapping into the basest sentiments of its constituents. The widespread practice of gerrymandering Congressional districts virtually assures that seats currently held by one Party or the other will remain on their respective sides of the aisle. The unrestrained millions of private, single-minded special interest dollars contributed on both sides of the Party divide from unknown sources cement the fact that extreme partisanship will continue to reign supreme over the political and social horizon for as far as the eye can see. Caught in this ideological vise pitting Left versus Right is the mass of our populace: White, African American, Latino and all of the other ethnicities woven into the tapestry of our society today, all just trying to keep their heads above the roiling waters of debt and un- and sub-standard employment. The last vestiges of those facets of LBJ's Great Society that proved to be more social engineering than Democratic governance have been dismantled and abandoned, crudely tossed into the landfill of antiquated solutions aimed at endemic ailments not remedied by the federal government. Today economics is almost as great a divide as race once was. While racial issues are being used more aggressively by the Right wing, ultra-Conservatives, Tea Partyers, and virulent anti-Obama factions, the growing disparity in income, the chasm between the "haves" and "have nots", is being exploited by the Right with not-so-subtle undertones to stoke the fires of residual racism.
Claiming the income gap represents an innate inability of African Americans to work and lift themselves out of poverty denies the real reasons that African Americans are more likely to live in poverty, cloistered in seedy neighborhoods with high crime rates and few convenient amenities like grocery stores, pharmacies and businesses that serve their communities. The Republican Party, which controls the Governorship and State Legislature in 37 states, has introduced ever more bills to limit, restrict or otherwise make it more difficult for African Americans to vote. Such initiatives are yet another abhorrent form of social engineering, just as egregious as the gerrymandering of Congressional districts. Racism is alive and well, actually flourishing, in many states across the land. For their part, much of the onus for their current and seemingly perpetual plight is rightfully placed on the African American community itself. In a perverse twist, the ascendance of some African Americans has created a greater sense of apartness from those still struggling to improve their lot. We have an African American President and African Americans working in professions of every type, from physicians and lawyers to engineers, academics, and upper-management positions across the entire spectrum of our economic and industrial base. That cannot be denied. But what had once been considered a corollary effect of this upward mobility, that it would help more gain entry into what was called a "Black middle class", has simply not happened or, in places where it has, is not as robust and widespread as had been hoped. In many ways the hope and promise of LBJ's Great Society have boomeranged; they allowed for some forward impetus towards racial equality, yet a rubber band will stretch only so far before returning to its original dimensions or breaking. Taking stock of our society today, there is evidence that the racial rubber band has been drawn several degrees back, closer to its original 1964 dimensions. Racism today is more insidious, more disguised; instead of the overt Jim Crow laws, Statehouses utilize the legislative process as the modern-day tool of that racism. A majority of those states have introduced Bills that are just modern-day versions of the old "Poll Taxes", crafted to make it more difficult for African Americans to vote. If there are any people more aware of the power of words than writers, they are politicians. They have big staffs of aides and advisors and speechwriters. They know the power of their words and now employ an array of euphemisms and metaphors, some of them "dog whistle" terms known to incite their audience and invoke very specific emotions and attitudes. Much of the rhetoric employed by the Right is incendiary, and many of the most out-front racist politicians use a slimy cast of surrogates to appeal to their constituents. From the "Birthers" who vociferously proclaim President Obama is not a "legitimate" President, that he was born in Kenya, is a communist/socialist, anti-American charlatan destined to march the United States down some twisted gravel road towards alliance in a "New World Order", to those convinced our President is an "apologist" for America more inclined to side with other nations than represent "true" American interests, the Obama Presidency has been the catalyst igniting long-simmering anti-government and racist sentiment. Some of it is embarrassing, and the rest of the world looks askance at what passes for our "politics" today.
Last week at a political rally for the Republican candidate for Governor of the State of Texas, the aging one-time rock and roller, gun enthusiast and outspoken opponent of everything Democratic or Obama, Ted Nugent, referred to our President as a "sub-human mongrel". When did such terminology become acceptable in political discourse? When did questioning the very nature of a twice-elected President stop raising eyebrows before more reasonable politicians would step in and repudiate hate speech? It had become vogue to use vulgar language when President Bill Clinton invited sexually scandalous behavior into the Oval Office, but such language has grown exponentially since Barack Obama first announced his candidacy for the Office of the President.
SADLY, FULL CIRCLE
And so it goes; the circle will be unbroken. Our brief tour of race relations in America shows that, 50 years on, the Great Society that LBJ envisioned and sacrificed all his political capital for remains an elusive reality. Despite the great strides forward, new realities continually jab us back into a corner. The Hydra-headed reptile of racism, ignorance, intolerance and even hatred is a patient presence in our society. It exists in dormancy for lengths of time only to come out of its cave and resurface in our political-social-cultural landscape as poisonous as ever. There are no real victims here, because on both sides of the political divide, each side in any debate regarding race in America has a proliferation of loud-mouth carnival barkers spewing unconstructive, divisive arguments that offer nothing of substance to the matters at hand. Thanks to the 24/7 "infotainment" cycle, there are enough bully pulpits for all to be heard from. For every point there is a counterpoint; for every Right-winger there is a Left-winger willing to engage in the battle no matter how useless and senseless it may be. Arguments need not have merit in today's discourse; all that seems to matter is whether you can talk over your opponent and speak with sufficient volume and feigned fury to satisfy like-minded citizens. When all is said and done, one is left to wonder whether anything worthy, any objectively valid point, was expressed, and the conclusion is usually no. We as a country and as a people have a long way to go in many aspects as a society and culture. Perhaps the first steps could be towards a more civil style of debate and discussion, an acknowledgment that "all are created equal" and, as such, ought to be treated equally.
Copyright The Brooding Cynyx 2014 © All Rights Reserved
It is unfortunate that our world is filled with so many painful, deadly, rare and strange diseases. Whether they are present at birth or acquired later in life, most will cause unwanted physical and mental suffering. According to the National Organization for Rare Disorders (NORD) database, there are around 6,800 known rare diseases in the United States alone. A disease is considered to be uncommon if it has affected fewer than 200,000 Americans. With the help of advanced medicine, many have had their disorders controlled or cured. Other conditions have had doctors striving to find cures and medications to stop them or keep them from becoming worse. Unfortunately, there are many cases where doctors cannot do a thing to prevent or stop them. However, with technology always advancing, the chances for survival and perhaps a normal life are better than ever. Here we will discuss some of the strangest diseases that afflict humans.
Darier's disease is a rare hereditary skin condition characterized by dark, crusty, wart-like blemishes on the skin. The crusty patches often contain pus. Extreme cases of this disease are rarer; common cases consist of skin rashes that flare up under conditions such as stress, high humidity and tight-fitting clothing. Some patients with mild forms of Darier's have a short stature coupled with fingernails marked by vertical striations. Other symptoms include thickened palms and soles and fragile fingernails, along with rashes from exposure to humidity, sun and heat. The rashes often have a distinctive odor. Although the disease affects other areas of the body, it most commonly affects the neck, chest, ears, back, forehead and groin. Patients say the hard blisters can be extremely painful. The condition was first described by French dermatologist Ferdinand-Jean Darier. It affects both men and women and is not contagious. The disease generally starts in the later teens or early adulthood. The symptoms are said to be caused by an abnormality in the desmosome-keratin filament complex (cytoskeletal components found in animal cells) that leads to a breakdown in cell adhesion. Fortunately, there is treatment for this rare disease. Treatment: For the more severe cases, oral retinoids, topical or oral antibiotics, and other prescriptions are given. No specific treatment is required for minor cases. However, doctors recommend staying away from excessive heat, humidity, stress and clothes that fit too tightly. Although this unusual disease is not deadly, it still causes physical and mental pain, stress and a big inconvenience.
Scheuermann's disease is a self-limiting skeletal disorder of childhood. It is most commonly found in teenagers. The vertebrae grow unevenly, with the anterior (front) angle often greater than the posterior (back). Due to the uneven growth, the spine develops an overcurvature. Unfortunately, because of the rigid apex of their thoracic vertebrae, patients with this condition cannot correct their posture. Another symptom of Scheuermann's disease is neck and back pain after long periods of standing or other physical activity. Patients may also lose height (depending on the severity) and have a noticeable 'hunchback' posture. Treatment: Oddly, the cause of Scheuermann's disease is still unknown. The condition appears to depend on a number of different factors.
For less extreme cases, treatment using manual medicine and physical therapy can help reverse the condition or prevent it from becoming severe. For the more extreme cases, patients may be given a surgical procedure to prevent it from becoming even worse. In the most severe cases (very rare), the condition may cause internal problems along with spinal cord damage.
Subconjunctival hemorrhage is bleeding underneath the conjunctiva (the lining of the eyelids and covering of the sclera). When the tiny blood vessels of the conjunctiva are ruptured, blood will leak into the space between the conjunctiva and sclera. There is no discharge from the eye. One with this condition may not even be aware they have it. They may find out they have it by taking a close look in the mirror, or by having a friend or family member inform them. It is strange that the hemorrhage can be caused simply by a severe cough or sneeze. It can also be caused by high blood pressure, blood thinners, heavy lifting, vomiting, straining due to constipation or rubbing your eyes too hard. At a glance, the condition looks as though it must have been caused by a severe trauma. One with subconjunctival hemorrhage may have a bruise that appears black underneath the skin or bright red underneath the conjunctiva. The hemorrhage may eventually spread and turn green or yellow. Although the condition can appear frightening and possibly serious, it is actually painless and harmless. However, in extremely rare cases in elderly patients, the condition may be a warning sign of a potentially serious vascular disorder. Treatment: There is usually no treatment required for subconjunctival hemorrhage. The condition will usually go away on its own in approximately 2-3 weeks. However, artificial tears may be given for a few weeks.
Boils On The Buttocks
Boils are localized infections of the skin beginning as a reddened and tender area. The infected area usually becomes firm, hard and more tender. The center of the boil will eventually become filled with infection-fighting white blood cells (pus). The blood cells come from the bloodstream to assist in eradicating the infection. The pus will then form a head that can potentially drain out of the surface of the skin. Boils can occur anywhere on the body, including the buttocks. As the image above shows, they can spread all over. If you already have hemorrhoids coupled with big boils spread throughout your buttocks, just imagine the unwanted discomfort and pain they would cause. Rare Pilonidal Cysts On Buttocks: A pilonidal cyst is a rare abscess that occurs in the crease of the buttocks. It usually contains hair and skin debris. These types of cysts usually occur when a hair punctures the skin and then becomes embedded. The risk is increased by irritation from direct pressure, such as long sitting periods and long trips. Pilonidal cysts are more common in men than women. They are said to be not only uncomfortable but also very painful. Treatment: If a boil does not drain on its own, surgery will likely be recommended.
Caudal Regression Syndrome
Caudal regression syndrome is an extremely rare disorder that is present before or at birth. It only occurs in about one in every 25,000 live births. The syndrome involves abnormal fetal development of the lower spine. It arises from a factor or various factors that are present around the 3rd to 7th week of fetal development. There are several levels of malformation.
The least severe level is a partial deformation of the sacrum, the second level is a bilateral deformation, and the most severe cases involve the complete absence of the sacrum. The condition can come in a variety of forms, such as partial absence of the tailbone region of the spine, or absence of the lower vertebrae, pelvis, and parts of the thoracic and lumbar areas. In some cases, the patient may only have a small part of the spine missing. In the more severe cases, there can be fused, webbed, or smaller lower extremities and even paralysis, and bowel and bladder control are usually affected. Treatment: Fortunately, cognition is not affected by this particular disability. Although surgery, prosthetics, colostomy and other procedures may be necessary to treat this condition, adults are still able to attend college and live independent lives.
Enamel hypoplasia is an acquired or hereditary defect in the formation of the enamel of the teeth. The enamel coating is much thinner than normal and can also be missing in some areas. Although the enamel is still hard, it can be thin, malformed and deficient in amount. Generally, the tooth has a pit in it. In some cases, the natural enamel crown has a hole in it, while in the more extreme cases the tooth has no enamel, leaving dentin as the remaining component of the tooth. Causes of this condition include gene mutations, nutritional deficiencies, bacterial infections, slow enamel formation, various diseases and other environmental factors. Treatment: The treatment of enamel hypoplasia depends on its location and severity. For the milder cases, artificial enamel is applied. For the severe cases where the teeth are lacking the majority of their enamel, metal crowns and artificial teeth are given as treatment.
Bacterial meningitis is an infection of the meninges, the three layers of protective tissue that surround the brain and spinal cord. Meningitis in general can be caused by a fungal, bacterial or viral infection. It can be chronic or mild, but those experiencing symptoms are advised to see a doctor immediately. Children between one month and two years of age are at the highest risk of infection. Adults are at risk for various reasons, such as alcohol abuse, chronic nose and ear infections, or pneumococcal pneumonia. What is strange about this infection is that sometimes meningitis occurs for no known reason. This fact has likely raised more paranoia for hypochondriacs. However, bacterial meningitis usually occurs following a head injury or in people with a weakened immune system. The bacteria responsible are most often found in the environment as well as in your nose and respiratory system. Treatment: Unfortunately, there is a 10 percent death rate from this unwanted infection. This is why it is so crucial to catch it early and prevent it from causing death. The infection can cause the surrounding brain tissues to swell, which can interfere with blood flow. Blood flow interference can lead to paralysis or stroke. Bacterial meningitis is treated with antibiotics and fluids to replenish losses from sweating, loss of appetite and diarrhea.
Walking Corpse Syndrome (Cotard's Syndrome)
Walking corpse syndrome is a very rare mental illness in which the patient believes he or she is either dying or does not even exist. Patients may also have the delusion that they have lost their blood and/or internal organs. It is linked to depression, suicidal ideation, sleep deprivation and derealization.
Of course, people suffering from this illness are held back from living a normal life. This unusual disorder is caused by a malfunction in an area of the brain called the fusiform gyrus and also in the amygdala. This can result in a lack of emotion when viewing a familiar face, because the fusiform gyrus plays an important role in recognizing faces. Due to this disconnection, it can also result in complete detachment. Treatment: Pharmacological treatments are often given to patients suffering from Cotard's.
The Recent Resurrection Of A Giant Virus 30,000 Years Later Raises Questions
In March of 2014, a mysterious giant virus was discovered buried in Siberian permafrost. By drilling horizontally into the ice, drillers were able to extract samples. Researchers studied an ultrathin section of a Pithovirus particle in an infected Acanthamoeba castellanii cell. The researchers took samples of the permafrost and put them into direct contact with amoebas (single-celled organisms) in Petri dishes. They then waited to see the surprising results. After further investigation, they discovered the virus had killed the organisms. The virus was said to belong to a previously unknown family of viruses (Pithovirus) that shares just a third of its genes with any known organism and only 11 percent with other viruses. This mysterious virus only infects single-celled organisms because it lacks the characteristic cellular machinery and metabolism of microorganisms. It closely resembles the largest viruses ever found. Although the origins of the virus still remain a mystery, scientists believe it most likely evolved from single-celled parasites following the loss of essential genes. Although the virus is said to be harmless to humans, the discovery raises interesting questions. Climate change may have brought the virus to the surface. Therefore, other viruses that were previously long-dormant could come back. If a deadly virus happened to be resurrected (a virus our bodies had never encountered), our immune systems may not be able to fight it off. However, scientists are optimistic that the chances of deadly viruses reaching humans are slim. A marine virologist at the University of British Columbia in Canada says the chances of humans getting a potentially harmful virus are low because such viruses are not abundant enough to circulate and affect the health of humans.
More Unusual Diseases You May Not Have Heard Of
Below are two great videos with information on some diseases that you may not be aware of. These are diseases you just don't hear about. Luckily, some can be controlled or even cured. Smallpox was considered to be one of the deadliest diseases around, but it was fortunately eradicated by vaccination. The last natural case was diagnosed on October 26, 1977. Over time, scientists will find even more ways to tackle these unwanted conditions. Want to share more strange and unusual illnesses that were not mentioned in our post? As we previously mentioned, they are out there. We always want to hear your interesting input.
There are so many small changes you can make at home that can help you on your journey to sustainable living. Small actions do add up, and these simple steps will help you go green in the bathroom. Having an eco-friendly bathroom is not just about the way it looks. Our day-to-day habits can have a pretty substantial impact on the environment. The bathroom is a water hog; think about how much water you use when you shower, brush your teeth and flush the toilet. A reminder: never use the toilet as a garbage can, and make sure you never flush things that don't belong there. On average, each of us uses well over 100 litres of water a day just flushing the toilet. Toilets account for 28% of your total indoor water use. Combined with showers and baths, the bathroom represents about 50% of your home's total indoor water use.
Let's Go Green In The Bathroom: Check the faucets
A steady drip can waste up to 55 litres of water in just 24 hours; if not looked after, that could add up to 20,075 litres per year. Install a low-flow showerhead and you'll save 15 to 30 litres of water per minute, and install low-flow aerators on your faucets. Check for leaks and fix them immediately. Here is a sobering stat: each year, bathroom leaks waste about 3 trillion litres of water. I have trouble wrapping my head around this! Some leaks are hard to find, others are small and we tend to put them off, but a leaky faucet that is constantly dripping is not only wasting millions of litres of water; you are also paying for water you are not using. If you want to know how much water your household uses each year, take a look at the water consumption calculator from Consumer Support Group.
Let's talk about the toilet
The toilet is the largest single water-guzzling appliance in the house, accounting for about one third (1/3) of an individual's total water use. Most older toilets use about 20 litres of water every time they are flushed. There are several things you can do to green your toilet. Install a more efficient toilet if you can. If your toilet is over 25 years old, it's using 13 litres of water per flush; for a household of four, replacing it could save about 76,000 litres per year. You can also DIY your current toilet to be more efficient by placing an old plastic bottle (filled with water) into the tank; this displaces some of the water in the tank, so the toilet uses less water on every flush. Check your toilet for leaks. Put a few drops of coloured food dye into the tank and leave it for 30 minutes without flushing; if the dye appears in the bowl, you have a leak. The most common leak in a toilet is in the rubber flapper valve. It can be as simple as replacing it. My advice: have a pro come in and take a look. I mentioned this at the top of the post: please don't use your toilet as a garbage can. The only thing you should be flushing is toilet paper. There is no such thing as a flushable wipe, no matter what it says on the label. Don't flush good money away. Replace your toilet with a new, efficient ultra-low-flush toilet and use between 50% and 80% less water per flush, depending on the size of your current toilet. Turn off the tap when brushing your teeth. Now onto a product that we all use; in fact, we here in Canada overuse it. Did you know the average Canadian uses about 22 kg of disposable tissue paper products each year, including about 100 rolls of toilet paper? Cutting down trees that are over 100 years old is, for me, obscene! Have you ever thought about how sustainable your toilet paper actually is?
Always use recycled tissues and toilet paper and make sure they are 100% post-consumer recycled. Look for the TCF label, which means it's "totally chlorine-free"; PCF will work as well, which stands for "processed chlorine-free". Every ton of recycled paper saves 17 trees, 380 gallons of oil, 4,000 kilowatt-hours of electricity, 3 cubic yards of landfill and 7,000 gallons of water. And make sure it comes with a third-party certification like FSC. The average Canadian household uses about 100 rolls of TP per year. Almost all of it comes from the Boreal Forest, an old-growth forest. If all of us opted to use paper from recycled materials instead of virgin pulp, we'd save 4,800,000 trees! You can also save a ton of money by buying in bulk. You can also consider using less toilet paper and opting for a bidet attachment instead.
Green in the shower
Take a shower instead of a bath; this will save you about 30 litres each time, or you can always shower with a friend, just saying! LOL! And if you shorten your showers you will save about 10 litres a minute and thousands of litres over a year. It all adds up. Install a low-flow showerhead. These have come a long way in the last 10 years. There is so much selection to choose from. They are inexpensive and very easy to install. They work by slowing the flow rate of the water and can save about 5-7 litres of water per minute.
Turn off the tap and fix leaky drips
Don't let the water run when you are brushing your teeth, shaving, washing your face or hands. A drippy tap can drive me nuts! The sound is like torture! But it's also super wasteful to let a drip go on and on. A faucet drip is costing you and the planet; in fact, you are wasting over 10,000 litres of water in one year! So get it fixed! Also, consider a faucet aerator, which works by mixing air and water, reducing the water flow; you can use these in the kitchen as well.
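If you like to see the arithmetic behind these water figures, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative and not part of the original post: the 55-litres-per-day drip figure is the one quoted above, while the shower length and per-minute savings are assumptions you can change to match your own home.

```python
# Rough yearly water figures derived from the per-day and per-minute numbers above.
# The drip figure (55 L/day) comes from the post; shower assumptions are illustrative.

DAYS_PER_YEAR = 365

def yearly_from_daily(litres_per_day: float) -> float:
    """Scale a daily figure up to a yearly one."""
    return litres_per_day * DAYS_PER_YEAR

def yearly_shower_savings(litres_saved_per_minute: float,
                          shower_minutes_per_day: float) -> float:
    """Yearly savings from a lower-flow showerhead, given assumed daily usage."""
    return litres_saved_per_minute * shower_minutes_per_day * DAYS_PER_YEAR

drip_waste = yearly_from_daily(55)            # one steadily dripping faucet
shower_savings = yearly_shower_savings(5, 8)  # assume 5 L/min saved, 8 minutes per day

print(f"Dripping faucet: ~{drip_waste:,.0f} litres wasted per year")        # ~20,075
print(f"Low-flow showerhead: ~{shower_savings:,.0f} litres saved per year")  # ~14,600
```

The exact totals depend on the assumptions (how long your showers run, how many flushes your household makes), but the point stands: small daily flows multiply into tens of thousands of litres over a year.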
Same for the toilet paper roll: it's made from cardboard too and can be recycled. Don't worry about removing the labels from bottles; just give them a good rinse. You also can't recycle plastic dental floss boxes, pumps, plastic toothbrushes and the plastic that most toilet paper comes in! Research has shown that simply by placing a blue bin in the bathroom, people tend to recycle more. It's so important to get to know your recycling facility; most cities have apps and websites to help. Try to buy towels with low-impact dyes or something known as colour-grown cotton; the colour comes from the plant's genetic properties, with shades including green, beige and brown. Most people think bamboo is better than cotton, but it's really much more chemical-intensive to manufacture. If you are buying bamboo, make sure it's made with low-impact dyes and has a reputable certification like Oeko-Tex. Organic Lifestyle carries some great ones. And also check out Shoo-Foo; they carry a range of made-in-Canada bamboo towels. You can find cheaper organic cotton at places like Winners, The Bay and Bed Bath and Beyond, but there is no guarantee that they are sweat-shop-free. Shower curtain liners are made with PVC, or polyvinyl chloride, which contains phthalates. Phthalates have been linked to asthma and can also act as hormone disruptors. Go for PEVA instead; it's vinyl but has no phthalates. Choose an eco-friendly shower curtain too. A study conducted in 2008 by the Centre For Health, Environment and Justice found plastic shower curtains can off-gas as many as 108 VOCs (volatile organic compounds), with some lingering in the air for over a month. Vinyl is a common shower curtain material, but to go green in the bathroom I recommend avoiding it. Instead, try hemp; it naturally resists mildew and mould. It will cost you more up-front, but think about how many times you replace your vinyl liner; it will pay for itself in less than a year. Glass shower doors are another great option; they are much more of an investment but will rid you of having to replace your shower curtain over and over again. Antibacterial bath mats are just a trend; they are packed full of nasty chemicals we simply don't need in the home. If you are using a teak bath mat, odds are the wood comes from old-growth tropical rain forests, or the mat is PVC (a type of plastic) embedded with pebbles. Instead, look for organic cotton mats that have no backing. Backings are normally made using synthetic glue and other chemicals like formaldehyde.
BONUS: Stop using toxic chemicals to clean
If you want to go green in the bathroom, consider cleaning products that don't harm you or the planet. Bleach, a common "cleaner" used in the home, is actually not a cleaner at all; it's a disinfectant. So yes, it's getting rid of germs, but it's not actually getting rid of grime or build-up, and it's really toxic to inhale. There has always been a lot of nonsense talk stating that green cleaners don't work as well as conventional ones, and it's a bunch of malarkey! We've been trained to think that if the home does not smell like bleach it can't possibly be clean. Nothing can be farther from the truth. Natural cleaners have come a long way; companies have spent thousands in R&D to make sure their products work!
FINAL THOUGHTS ON HOW TO GO GREEN IN THE BATHROOM
It can be difficult when you are starting a green journey to figure out what to do in the home.
The first step is really understanding how each thing in a given room impacts the world, and then putting good action steps in place to make those changes. Change takes time. Rome was not built in a day! I'd love to know what you think: have you gone green in the bathroom? What kinds of actionable practices have you put in place? Share in the comments below!
Samuel Grimshaw of Henrico Co, Virginia
Early Immigrant Who May Have Been a Slaveholder
Samuel Grimshaw apparently immigrated to Virginia in about 1795 and was therefore one of the earliest Grimshaw immigrants to the U.S. He was recorded living in Henrico County in the 1810 U.S. Census, and in September 1812 he was registered as a 30-year-old British alien who entered the U.S. in 1795 and was working as a farmer in Henrico County. He was therefore born in about 1782 somewhere in England, possibly in Yorkshire, where the name "Samuel" was very common among the Quaker Grimshaws. He married Elizabeth Perkins in about 1814, and the couple had a son, James, born between 1814 and 1818. Samuel apparently operated a tavern in the "Old Ordinary" section of Henrico County in 1816, when he took out a fire insurance policy on the building. He met with an untimely death in or just before 1818, when his will was recorded. It is not known if Samuel immigrated alone (he would have been age 13 when he arrived in the U.S. in 1795) or with his parents. Although the connection is not at all clear, Samuel Grimshaw — or his parents — may have owned slaves, thereby giving the Grimshaw surname to a slave family. A Grimshaw slave family was subsequently owned by the Tayloe family on a plantation, Mount Airy, which is in Richmond County about 50 miles east of Henrico County. Slaves William Grimshaw and Esther Jackson were married in the 1820s. Both were owned by the Tayloe family, and they lived on some portion of the Mount Airy plantation. William and Esther Grimshaw had among their children Winney and Juliet Grimshaw, born in 1824 and 1826. After an incident in which he was whipped, William successfully ran away in 1845 and later settled in New Brunswick, Canada. He was never reunited with his family in Virginia. William and Esther Grimshaw's daughter, Juliet, was sold to Dr. Tyler on a nearby plantation and gave birth to William Grimshaw. This William later lived in Washington, D.C. He wrote extensively on black freemasonry and is the subject of a companion webpage. Thanks go to Doris Hightower for providing information on Samuel Grimshaw and the individuals subjected to slavery who had his surname. Thanks are also extended to Richard S. Dunn, professor emeritus at the University of Pennsylvania, for his paper1, "Winney Grimshaw, a Virginia Slave, and Her Family", which was published in the fall of 2011. A visit to the Library of Virginia in Richmond resulted in the addition of a number of records in September 2011.
The earliest record of Samuel Grimshaw is apparently in the 1810 U.S. Census, which is shown on a companion webpage and is reproduced in part as follows: Before the automated search capabilities became available on Ancestry.com, a manual search of printed census indexes was performed, as described on a companion webpage. The results of this search for the 1810 index (and a prior census) are summarized below. Grishaw, Isaac S.; William
Before the automated search capabilities became available on Ancestry.com, a manual search of printed census indexes was performed, as described on a companion webpage. The results of this search for the 1820 index (and a prior census) are summarized below. The 1820 U.S. Census includes a record for Samuel Grimshaw's widow, Eliza Grimshaw (see companion webpage), which is partially reproduced below:
Samuel Grimshaw was included in the records2 of British aliens who were living in the U.S.
during the War of 1812 and were required to register as resident aliens. The following information is reproduced from a companion webpage (Scott, 1979, p. 324): Grimshaw, Samuel, age 30, in U.S. since Sept. 1795, Henrico Co., farmer, (5-12 Sept. 1812). Samuel apparently registered in September 1812 while living in Henrico County, Virginia, at age 30 as a farmer. No family is indicated, but it seems unlikely he was a descendant of the earlier Virginia immigrants; had he been, he would probably have been born in the U.S. and therefore would not have been an alien. On the other hand, if he was age 30 in 1812, he would have been born in about 1782 and would therefore have been only 13 years old when he arrived in the U.S. in 1795. The following background information is provided in the reference (Scott, 1979, p. v-vi): The recording of ships passenger lists was not required by law until 1819, and prior to that date only scattered lists of immigrants exist. It is, therefore, of the greatest importance that another source can supply information concerning thousands of British subjects – Canadian, English, Irish, Scottish, Welsh, and West Indian, most of them immigrants – who were residing in the United States during the War of 1812. On June 1, 1812, President Madison sent his war message to Congress, which on June 18 declared war. Subjects of Great Britain were henceforth enemy aliens and were to be dealt with in accordance with an act of July 6, 1798, and a supplementary Act of July 6, 1812. Accordingly, notice was promptly given that all British subjects in the United States were to report to the marshal of the state or territory of their residence "the persons composing their families, the places of their residence and their occupations or pursuits; and whether, and at what time, they have made the application to the courts required by law, as preparation to their naturalization." It was ordered that notice was to be published in the newspapers and that reports by the aliens were to be sent by the several marshals to the Department of State. The returns, long in the custody of that department, were many years ago deposited in the National Archives. Normally a return gave the name of the alien, aged fourteen or more, years of residence in the United States, number of persons in the family, place of residence and status. Happily many returns supply further data of no little genealogical value – country of origin, for example.
The Library of Virginia has a microfilm with a copy of the marriage bond record for Samuel Grimshaw and Elizabeth Perkins. The front and back of the record are shown below. That Samuel and Elizabeth were indeed married is shown on the will record further down on this webpage. Samuel apparently operated a tavern (evidently called the "Old Ordinary") in 1816 and obtained a fire insurance policy on the building from the Mutual Assurance Society of Virginia. A copy of the policy document from the Library of Virginia is shown below. The policy image is described as follows on the Library of Virginia website:
Full View of Record: LVA Catalogs URL (Click on link) http://image.lva.virginia.gov/cgi-bin/GetMU.pl?dir=0526/G0039&card=33 Document Image Title Grimshaw, Samuel. Publication March 28, 1816. Gen. note 1956. Note Location of property: Henrico County (The Old Ordinary). Note Owner and occupant. Other Format Available on microfilm. Mutual Assurance Society of Virginia. Declarations. Vol. 44, Reel no. 5. Biog./Hist.
Note This collection contains policies issued for Richmond and Henrico County between 1796 and 1867. The individual policies (declarations) and reevaluations include the name of the insured, the location of the property, the name of the occupant, a description and estimated value of each structure, and, in most instances, a sketch of the property. The "Mutual Assurance Society, against Fire on Buildings, of the State of Virginia" was incorporated by the General Assembly on December 22, 1794, and held its organizational meeting on December 24, 1795. William Foushee (Richmond's first mayor, elected in 1782) was named the first president, and directors were chosen for Richmond and vicinity, Petersburg, Fredericksburg, Staunton, Alexandria, Winchester, and Norfolk. Property was insured in Virginia, West Virginia (until 1868), and the District of Columbia. Insurance offered by the company was against "all losses and damages occasioned accidentally by fire." Reevaluations of insured property were required every seven (7) years or whenever additions were made to a policy. While the society suffered financially with the fall of the Confederacy, its reserve fund, required by law, enabled it to recover rapidly from the effects of the war. Related Work Part of an index to the Richmond and Henrico County fire insurance policies issued between 1796 and 1867 by the Mutual Assurance Society of Virginia that are housed in the Archives of the Library of Virginia.
Samuel lived only a few years after his marriage to Elizabeth Perkins, having passed away before March 31, 1818, according to his will. However, the will (shown below in two parts) indicates that he had a young son, James Grimshaw, by the time of his death. After his death in March, Samuel Grimshaw's estate was appraised. A copy of the appraisement from the Library of Virginia is shown below. It is noteworthy that no slaves are shown among his possessions at the time of his death. Samuel Grimshaw designated Samuel Garthright as executor of his estate, as shown in the will. On June 13, Garthright completed an account of the disposition of the estate, which is shown below (from the Library of Virginia). The account was recorded on August 3, 1818. An advertisement related to Samuel Grimshaw appeared in the Richmond Enquirer some 11 years after his death and was found in "Heritage Quest". A copy is shown below: Paper: Richmond Enquirer; Date: 03-27-1829; Volume: XXV; Issue: 106; Page: ; Location: Richmond, Virginia
Although the connection is not at all clear, Samuel Grimshaw — or his parents — may have owned slaves, thereby giving the Grimshaw surname to a slave family. A Grimshaw slave family was subsequently owned by the Tayloe family in the 1820s on a plantation, Mount Airy, which is in Richmond County about 50 miles east of Henrico County. A number of Grimshaws entered Virginia in the Jamestown area southeast of Henrico County in the late 1600s and early 1700s, about 80 miles south of Mount Airy, and one of their descendants could have been a candidate as the Grimshaw slaveholder. However, at the time of the 1810 U.S. Census, Samuel was the only Grimshaw recorded living in the region around Mount Airy and would therefore seem the best candidate. Samuel immigrated in 1795 and was therefore not one of the Jamestown Grimshaw descendants. However, the appraisement for Samuel made at the time of his death in 1818 does not include slaves. Slaves William Grimshaw and Esther Jackson were married in the 1820s.
Both were owned by the Tayloe family, and they lived on parts of the Mount Airy plantation. William was the son of a “Letty” (Letitia?) Grimshaw. William and Esther Grimshaw had among their children Winney and Juliet Grimshaw, born in 1824 and 1826, while they were owned by the Tayloe family. After an incident in which he was whipped, William successfully ran away in 1845 and later settled in New Brunswick, Canada. He was never reunited with his family in Virginia. It is also possible that William was descended from slaves owned by Thomas Grimshaw, first of nearby Alexandria, Virginia and later of Winchester, Virginia. There is no evidence in the records obtained so far on Thomas that he was a slaveholder, and he lived a good deal further away, to the north in Alexandria. Richard S. Dunn published a paper in 2011 that details the life of Winney Grimshaw and her family [1], including her sister Juliet Grimshaw. Juliet was sold to Dr. Tyler on a plantation in the next county to the west and gave birth to William Grimshaw. This William later lived in Washington, D.C. He wrote extensively on black freemasonry and is the subject of a companion webpage. Click here for the Richard Dunn article, “Winney Grimshaw, a Virginia Slave, and Her Family”. Dunn also published an earlier paper [3] on Winney in which he compared her life to that of a slave in Jamaica. [1] Dunn, Richard, 2011, “Winney Grimshaw, a Virginia Slave, and Her Family”: Early American Studies, Fall 2011, p. 493-521. [2] Scott, Kenneth, compiler, 1979, British Aliens in the United States During the War of 1812: Baltimore, MD, Genealogical Publishing Co., 420 p. [3] Dunn, Richard, 1977, “A Tale of Two Plantations: Slave Life at Mesopotamia in Jamaica and Mount Airy in Virginia, 1799 to 1828”: The William and Mary Quarterly, Third Series, vol. 34, no. 1 (January 1977), p. 32-65. Skeletal webpage posted December 2006. Updated February 2007 with addition of 1810 and 1820 U.S. Census records. Updated September 2011 with addition of information from article on Winney Grimshaw by Richard Dunn. Updated October 2011 with addition of marriage bond, will, appraisement and accounts from Virginia State Library.
<urn:uuid:ca933a1c-4e6f-482c-829f-968bba103983>
CC-MAIN-2021-43
http://grimshaworigin.org/grimshaw-immigrants-to-the-new-world/samuel-grimshaw-of-henrico-county/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00470.warc.gz
en
0.977969
2,873
2.625
3
These days, it seems like everybody’s talking about genealogy. Whether it’s detectives cracking cold cases with decades-old DNA or your next door neighbor sifting through the secrets of her ancestors, family history is big business. It’s incredible to think that, by 2017, over 12 million people had tested their own DNA with consumer genealogy kits. Today, 1 in 25 adults in the United States have access to their personal genetic data. And it isn’t difficult to see why they’d want it. When combined with ancestry research tools, our passion for genealogy is allowing us to know not just who we are but why we are. It’s not all plain sailing, though. It’s important to understand the risks before you give up personal data. It’s also useful to know what you’re looking for. Ancestry and DNA genealogy tools are closely related, but they may use different techniques. In many ways, Ancestry.com – one of the longest running genetic genealogy websites in the world – is the grandfather of today’s personal DNA craze. Ancestry.com is where it all started, where ordinary people were first promised a glimpse of extraordinary lives. Let’s take a closer look at Ancestry.com and its long and interesting history. Here are some fascinating facts about the world’s largest genealogy website. 1. Ancestry (the business) Is Older Than You Think The business that would become the dot com was founded in 1983. Rather fortuitously, it appeared at around the same time as the birth of the internet, though its creators weren’t to know how closely their fates would align. Ancestry began life as a print based service that sold genealogical research books to professionals and academics. It wasn’t until 1990 – when founder John Sittner teamed up with information provider Prodigy – that the company went digital. 2. It Was the First Digital Genealogy Service As the rest of the world was getting to grips with the internet, Ancestry was using its early experience to break new ground. By 1995, it had spent six years shaping its products and services for digital distribution. It had the knowledge and skills to become the world’s first digitally based genealogical business. In May of 1995, it registered its domain name – one of the first genealogy businesses to do so. That domain name was, of course, Ancestry.com. And the rest, as they say, is history. 3. The Social Security Death Index Was Ground Breaking When Ancestry.com went live in 1996, it gave users access to five searchable databases: the Social Security Death Index, the Geographic Reference Library, the American Bibliographical Library, American Marriage Records and the Early Immigration Library. The Social Security Death Index was the most popular of the databases. Although it wasn’t strictly new information – other companies provided records for a fee – Ancestry was the only one to give access to a regularly updated system. The others sent out copies on CDs. 4. AncestryDNA Is the World’s Largest DNA Network In 2012, Ancestry.com expanded its collection of products and services to include the AncestryDNA database. Today, it is the largest of its kind anywhere in the world, with over ten million people (and personal DNA records). AncestryDNA was one of the first to offer commercially available DNA testing kits. In the years since, it has been joined by a number of high profile companies, including 23andMe. With many providers now offering DNA testing at home, challenging questions are being raised about data privacy, application and responsibility. 5.
It Adds 2 Million Records Every Day It’s difficult to grasp the staggering volumes of information that Ancestry.com handles on a daily basis. The company claims to add around two million new records to its data every single day. Over the last twenty years, a total of 20 billion records have been uploaded to the Ancestry.com website. Figures show the company has broken growth records for the last five years. And this incredible run is expected to continue throughout 2019 and beyond. Today, it is not a lack of public interest that Ancestry.com has to worry about. It is the astronomical rise of rival genealogy companies and the saturation of its market. 6. Users Can Search More Than 11 Billion Profiles Ancestry.com claims to have a vast genealogical database containing more than 100 million family trees and over 11 billion ancestral profiles. It also gives users access to more than 300 million photographs, scanned documents and written stories. With over 2 billion searchable records from the United Kingdom and in excess of 6 billion historic records from the United States, users from western nations are almost guaranteed a result. Ancestry.com hosts records from a total of eighty countries, so there’s a good chance that, wherever you are from, you’ll make a valuable discovery. 7. It Has More Than 50 Pending Patents The unique world of DNA is constantly in flux, always changing and developing. It means genealogical providers need to be at the top of their game when it comes to research and development. Ancestry.com usually has around fifty patents pending at any one time. Today, the majority of these patents relate to the nature of access, storage and collection, rather than new DNA applications. As genealogy providers clamor to be top of the tree (pun definitely intended), they’re also fighting to have the most secure, most efficient historical networks and databases. 8. There IS a Free Version of Ancestry.com The vast majority of Ancestry.com is, unsurprisingly, kept behind a paywall. However, the company does offer a surprising number of databases and collections for free. So, people interested in tracing their family history don’t need to pay upfront to launch an ancestral journey. In fact, a free browse is highly recommended for new users. It’s a good way to get a feel for the website and its basic research tools. There are over 800 ‘free’ databases to explore. Keep a level head though, because some are only partially accessible to non-members. Don’t forget, Ancestry.com is a business. It’s very good at hooking new users with just the right amount of free information. 9. AncestryDNA Does NOT Own Users’ DNA In recent years, there has been widespread concern about the growing popularity of commercial DNA testing, particularly via home kits. It has prompted several companies – including AncestryDNA – to revise and reinforce their data privacy policies. For instance, AncestryDNA has removed some ambiguous language from its policy and written articles on DNA licensing. It says AncestryDNA does not own users’ DNA samples. Users license (in effect, lend) their samples only until such time as they decide to revoke the license or change their privacy preferences. 10. DNA Data CAN Be Given to Third Parties DNA licensing places strict rules on the application and use of personal DNA samples. However, they can be used in third party research and even passed on to third party companies. It is legal unless the user in question has opted out of data sharing.
If you are worried about data privacy, the best course of action is, simply, to avoid DNA databases. Even if you opt out, there are no guarantees breaches won’t occur. Then again, the same can be said for all forms of data sharing, particularly social networks. The truth is, there are safeguards. When it comes to genealogical services, your DNA samples are protected. 11. You CAN Ask Ancestry.com to Delete Your DNA This is a really important fact and one that a lot of people seem to get wrong. Users – of Ancestry.com and other DNA based websites – retain the right to withdraw personal DNA samples. If you or somebody you know has submitted a sample to AncestryDNA, they can contact the company and ask for it to be destroyed. Confusion around this issue seems to stem from the fact that, while DNA samples can be deleted, users cannot reverse the decision to share it with others. For example, say you discovered a new relative on the database. You have the option to share your DNA data with them to prove your connection. But you cannot un-share it. 12. Ancestry.com Is a Good Place to Work According to a number of publications, Ancestry.com is known as a fair, pleasant and satisfying place to work. It has been named ‘best to work for’ on a number of occasions and has a celebrated company culture. Currently, the business employs around 1,600 workers globally. It has 1,000 workers in Utah, 400 in San Francisco and a further 100 in Dublin. It is enhanced and developed by the finest corporate talent, with employees poached from businesses as varied as eBay, Martha Stewart Living, Johnson & Johnson and Amazon. 13. Users Can Check for New Records With six billion searchable records in the United States alone, it’s handy to know where to look for updates and new records. On Ancestry.com, there is a webpage dedicated to recently updated collections and records. It changes all the time, so it’s worth checking back regularly if there’s a specific time period or institution you wish to research. The recently updated section is especially handy if there’s a piece of information that is not on Ancestry.com but might be soon. 14. There’s an Ancestry Mobile App Since 2011, Ancestry.com members have been able to browse records, view photographs, make connections and get notifications via the branded mobile app. Since its inception, it has been downloaded more than seventeen million times. Users say the app does enhance the experience of being an Ancestry.com member. Though it is a little clunky in places – and does not offer the same depth as a desktop tool – it is a handy way to edit family trees and keep new discoveries close. 15. Ancestry Searches Prioritize Domestic Data Almost all of us have some kind of history overseas, in a country we know little or nothing about. It can make searching for our ancestors rather tricky. When creating searches on Ancestry.com, don’t forget that domestic results come first. The Ancestry search tools always bring up information from your home country first. If you’re specifically looking for overseas matches or results, try browsing the Card Catalogue. Use it to search for the name of the country. It could save you hours of sifting through historical records. 16. Sometimes You Can Be TOO Specific One common mistake when using Ancestry.com is to forget how often historic data was changed, mistaken or manipulated. For instance, members researching world wars are advised to input several birthdates.
As many young men lied about their age to be eligible for military service, it’s common for an incorrect date to be listed. Being creative with spellings and dates can yield unexpected results, particularly for people who are having trouble finding records. It may be that an immigrant’s surname was misspelled, a birth was recorded sloppily or a relative just decided to change their data. These things were much more common in days gone by. 17. Ethnicity Results Aren’t Always Permanent You may be surprised to know that your ethnicity results on Ancestry.com are subject to change. Even though we think of ‘race’ as a fixed construct, most genealogy websites are keen to stress that results are estimated. Ethnicity is determined via the use of a sample of people whose ancestors originate from the same region. As DNA science develops and the sample size expands, genealogy tools are able to better tweak your results. The story of your ethnicity is more fluid than you think. 18. You Don’t Inherit the Same DNA As Your Siblings Another common misconception is that siblings always share the same DNA makeup. After all, you’re getting half from your father and half from your mother. Well, it doesn’t need to be the same halves. It’s entirely possible for full siblings to have slightly different ethnicity estimates, for example. This occurs because each parent passes on a random half of their own DNA, and that half is itself a shuffled mix of DNA from both of their parents – your grandparents. So, the half you got from your father isn’t exactly the same as the half your brother inherited (the short simulation at the end of this article illustrates the idea). 19. Rival Companies Are Gunning for Ancestry.com Perhaps unsurprisingly, younger genealogy companies are starting to challenge Ancestry.com for market dominance. 23andMe – the second most popular DNA testing service – took Ancestry to court in early 2018. It alleges Ancestry used misleading advertising and stole some of its relative matching techniques and technologies. It also wants the courts to nullify the trademark Ancestry.com currently has on the word ‘ancestry.’ While this may sound a little petty, Ancestry.com started the row after it attempted to sue 23andMe for its use of the word in branding and advertising. 20. Informed Use Is Safe Use – Stay Protected There are so many scare stories about DNA testing and genealogical databases that it’s no surprise people are worried about data security. The reality is, companies are doing things with personal DNA data that they’ve never done before. They’re breaking new ground, so there are bound to be new challenges. The important thing is to be clear on the difference between companies who exploit data and ones who are working at the cutting edge of research, where regulations may not be as well established. Know your rights. Know your products. Before you sign up, have a full understanding of what it means to upload personal data to these websites. But don’t automatically assume they’re the villain simply because they’re doing things differently. Society is built on innovation.
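Fact 18 can be made concrete with a toy simulation. The sketch below is purely illustrative and is not how AncestryDNA actually computes matches: it treats each parent's genome as 100 independent segments, deals every child a random copy of each segment from each parent, and shows that two full siblings typically end up sharing only about half of their inherited copies, with noticeable spread from pair to pair.

```python
import random

SEGMENTS = 100  # toy genome: 100 segments per parent (real matching is far more detailed)

def make_child():
    """For each segment, the child randomly receives copy 0 or copy 1 from each parent."""
    return [(random.randint(0, 1), random.randint(0, 1)) for _ in range(SEGMENTS)]

def shared_fraction(child_a, child_b):
    """Fraction of parental copies that two children happen to have drawn in common."""
    same = sum((a[0] == b[0]) + (a[1] == b[1]) for a, b in zip(child_a, child_b))
    return same / (2 * SEGMENTS)

if __name__ == "__main__":
    random.seed(42)
    trials = [shared_fraction(make_child(), make_child()) for _ in range(1000)]
    print(f"average sharing between full siblings: {sum(trials) / len(trials):.1%}")
    print(f"spread across simulated pairs: {min(trials):.0%} to {max(trials):.0%}")
```

On average the two simulated siblings share about 50% of their drawn copies, but individual pairs land noticeably above or below that, which is why two siblings can receive different ethnicity estimates.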
<urn:uuid:bbee2571-870e-4b4f-ac11-f57c1ec80e82>
CC-MAIN-2021-43
https://moneyinc.com/ancestry-com/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00230.warc.gz
en
0.949217
2,901
2.578125
3
A Differential Pressure Gauge measures the pressure difference between two points. It can quickly measure the positive, negative or differential pressure of air or non-corrosive gas. A differential pressure gauge measures the pressure difference between two independent pressure sources and is a more economical method, used for process monitoring or control where a small amount of migration of the process media can be tolerated. It is suitable for air and non-flammable, non-corrosive applications, as well as certain air and natural gas applications. Using a simple, frictionless magnetic spiral movement, it can quickly indicate the pressure of low-pressure, non-corrosive gas, whether positive, negative (vacuum is also available) or differential. The design resists vibration, pulsation and overpressure, and the gauge needs no fill liquid.
Features of Differential Pressure Gauge
- Up to 81 range selections, so the gauge can accurately meet your requirements
- A simple, unique, patented double-helix frictionless magnetic movement lets you quickly measure positive, negative or differential pressure of air or non-corrosive gas
- Designed to resist shock and vibration, with overload protection
- No fill liquid, so none of the evaporation, freezing, toxicity and leakage problems of liquid U-tube pressure gauges
- Can measure fan and blower pressure, filter resistance, furnace draft, orifice-plate pressure drop and other system pressures; can also be used for differential pressure detection in purification (clean) rooms, biological safety cabinets, clean benches, dust removal equipment, medical breathing equipment and air intake sampling
- Up to 63 models with different ranges and engineering units, suitable for various applications
- Ultra-thin design: embedded thickness is only 38.5 mm, overall thickness only 52.2 mm
- Patented wave-stripe decorative mask, beautiful and elegant
- Standard accuracy up to 2%; with high-precision calibration, up to 1% FS
- Optional mirrored dial to reduce visual reading error, and brushed 304 SS or plated bezel
Specifications of Differential Pressure Gauge
- Accuracy: ±2% (ascending pressure)
- Pressure ranges: 0-5 through 0-150 psi
- Dial size: 2″, 2 1/2″, 3 1/2″, 4″, 4 1/2″ and 6″
- Case: stainless steel
- Body: aluminum, brass or stainless steel
- Wetted material: body material, Teflon®, ceramic and various O-ring materials
- Medium: air and non-flammable, compatible gas (optional natural gas)
- Shell: cast aluminum or ABS shell, plexiglass beveled panel; the dark gray coating passes a 168-hour salt spray test
- Withstand pressure range: -20 in Hg to 15 psi (-0.677 bar to 1.034 bar)
- Temperature range: 20 to 140 °F (-6.67 to 60 °C)
- Opening size: 114 mm
- Mounting orientation: diaphragm vertical
- Connections: identical 1/8″ NPT high- and low-pressure connectors, one pair on the side and one pair on the back
- Weight: 500 g each
- Standard accessories: two 1/8″ NPT threaded joints for the high- and low-pressure connections, which can be connected to rubber hoses; two 1/8″ NPT plugs for blocking the remaining two high- and low-pressure ports; three threaded countersunk mounting connectors (mounting ring and snap ring retainer, used to replace the three ports on the MP and HP metering accessories)

| Ranges for reference | Pa | kPa | Inches of water | mm of water | Dual scale |
| --- | --- | --- | --- | --- | --- |
| 0-30 | 0-0.5 | 0-.25 | 0-6 | in w.c. | Pa or kPa |
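Because the catalog quotes ranges in a mix of Pa, kPa, inches of water and mm of water, it helps to normalize everything to one unit before comparing models. The following is a rough, generic Python sketch, not manufacturer code; the conversion factors are standard approximations for a water column at reference conditions.

```python
# Rough helper for comparing differential-pressure ranges quoted in mixed units.
# Conversion factors are standard approximations (water column at ~4 degC).
PA_PER_UNIT = {
    "Pa": 1.0,
    "kPa": 1000.0,
    "inH2O": 249.089,   # 1 inch of water column in pascals
    "mmH2O": 9.80665,   # 1 millimetre of water column in pascals
    "psi": 6894.76,
}

def convert(value: float, unit_from: str, unit_to: str) -> float:
    """Convert a pressure value between any two of the supported units."""
    return value * PA_PER_UNIT[unit_from] / PA_PER_UNIT[unit_to]

if __name__ == "__main__":
    # Example: express a 0-60 Pa minimum range in other common units.
    for unit in ("kPa", "inH2O", "mmH2O"):
        print(f"60 Pa = {convert(60, 'Pa', unit):.3f} {unit}")
```

Running the example confirms that 60 Pa is roughly 0.06 kPa, 0.24 inches of water or 6.1 mm of water, which matches the equivalent minimum ranges quoted later for the Magnehelic series.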
What is a differential pressure gauge?
Differential pressure gauges are suitable for measuring small pressure differences between two pressure sources in an industrial process. They measure the pressure loss across a filter and indicate the working state of a valve. They are widely used to measure fan and blower pressure, filter resistance, wind speed, furnace draft, orifice differential pressure, bubbler level, and liquid amplifier or hydraulic system pressure, and they are also used for air-gas ratio control and automatic valve control during combustion. Differential pressure gauges are suitable for measuring the differential pressure, flow rate and other parameters of various liquid (or gas) media in the process flows of industrial sectors such as chemicals, metallurgy, electric power and nuclear power. The instrument is constructed entirely of stainless steel. The measurement system (a double-bellows assembly) and the pressure-guiding system (including joints, conduits, etc.) use a special structural design and advanced manufacturing technology.
Principle of the differential pressure gauge
How does a differential pressure gauge work? The pressure-sensitive element consists of two bellows with the same stiffness. Under the same measured pressure they exert equal, opposing forces on the movable bracket; because the two sides of the spring sheet see equal moments, the sheet does not deflect and remains in its original position, the gear transmission does not move, and the pointer stays at zero. When different pressures are applied (normally the high-pressure end is higher than the low-pressure end), the forces of the two bellows on the movable bracket are no longer equal, a corresponding displacement is produced, the gear transmission amplifies it, and the resulting pointer deflection indicates the differential pressure between the two ports.
Differential Pressure Gauge Application
The SI-2000 differential gauge is ideal where safe and reliable pressure measurement is essential:
- Filtration monitoring
- Pressure drop across a strainer
- Flow rate
Dwyer Magnehelic series 2000 differential pressure gauge
American Dwyer differential pressure gauge / Dwyer 2000 differential pressure gauge. The Dwyer 2000 series differential pressure gauge is an ultra-low-range, inexpensive, solidly built field indicator. It uses the frictionless Magnehelic movement principle to eliminate wear, hysteresis and backlash. With no fill liquid, there is nothing to vaporize or freeze. It can quickly indicate low pressures of non-corrosive gases (positive, negative (vacuum) and differential pressure). There are 81 ranges; the minimum range is 0-60 Pa (0-6 mm water column, or about 0-0.25″ water column). The design resists vibration and shaking and has a high tolerance for overpressure.
Main technical indicators of the American Dwyer differential pressure gauge (Dwyer 2000)
- Accuracy: 2% FS (suffix 0: 3%; suffix 00: 4%)
- Rated pressure: -0.7 to 1 kg/cm²
- Connection: 1/8″ NPT internal thread, two pairs of high- and low-pressure ports, on the side and back
- Overpressure: a relief vent opens automatically at about 1.75 kg/cm²
- Ambient temperature: -5 °C to 60 °C; weight 0.5 kg
- Housing: cast aluminum; the body and components pass a 168-hour salt spray test, and the exterior has a dark gray coating
- Standard accessories: two 1/8″ NPT threaded joints, used as high- and low-pressure connections and attachable to rubber tubing; two 1/8″ NPT plugs; three threaded countersunk fittings
- According to the requirements of the National GMP Drug Production Verification Guidelines for clean workshops, we recommend selecting: - Add positive micro-pressure (5-10Pa) to the workshop or room, and select a 2000-60Pa micro differential pressure gauge. - Check the filtering effect of coarse, medium and high-efficiency air filters, use differential pressure gauges such as 2000-125, 250Pa, 500Pa or 1Kpa, and observe the pressure difference of the filter at any time to replace the filter. Scope and model: 2300-120PA, 0- ± 60Pa, 2300-250PA, 0– ± 125Pa, 2300-500PA, 0- ± 250Pa, 81 range specifications including 2000-30KPA. Sino-Inst provides DWYER differential pressure gauge, micro differential pressure gauge, mechanical differential pressure gauge including MAGNEHELIC 2000 series. All products can be ordered, domestic brands include: MACROHELIC and MAGRFHELIC, price concessions, spot supply. Differential pressure gauge price Dwyer 2000 series differential pressure gauge: USD 75.00 / pc SI-2000 Differential Pressure Gauge: USD49.50 / pc The exact price needs to be determined according to the product parameters and the quantity purchased. Differential pressure gauge installation (1) Installation method one, as shown in the following picture 1. Drill three installation vias with an angle of 120 degrees and a diameter of 4.5mm on the installation plane. Use the corresponding short installation screws in the accessory package to match the screw holes on the bottom of the instrument. Make it secure . (2) Mounting method two, as shown in the following figure 2. Take three mounting brackets first. Use the corresponding installation short screws in the accessory package. Fix the mounting bracket to the three mounting screw holes on the bottom of the instrument. Then take the longest three screws. Lock with the installation plane 2-Embedded (panel) installation Cut the installation and insertion holes with a diameter of 115.5mm on the panel. Insert the meter into the holes. See Figure 3 below. Take three mounting brackets, and use the corresponding installation short screws in the accessory package. Fix the mounting bracket to the three mounting screw holes on the bottom of the meter. Then take the longest three screws. Fasten the meter to the panel. 3-Embedded (color steel plate) installation Cut the embedded hole with a diameter of 115.5mm on one side of the color steel plate. As shown in figure 4 below, install the A-S1 / S2 panel into the color steel plate. The pressure port on the back of the meter is firmly installed. Among them, accessories A-S1 / S2 and A-S81 need to be purchased separately. 4-For other installation methods. see the following figure 5. Use special screws A-S9 to fix the installation with the pressure interface on the back of the instrument. Accessories A-S9 need to be purchased separately. After installing the instrument, adjust the zero position. Using a suitable flat-blade screwdriver, turn the zero adjustment screw at the bottom of the transparent mask counterclockwise. Align the pointer with the zero position of the dial. During zero adjustment, both the high-pressure port and the low-pressure port can communicate with the atmospheric pressure. With the standard configuration accessories, according to actual needs, correctly connect the “+” and “-” pressure ports. Note that the instrument has a pair of pressure ports on the side and back. The unused pair of plug seals in the application accessories. Two pressure ports for L (low pressure) and H (high pressure). 
Different connections allow the gauge to quickly indicate gas pressure, whether positive, negative or differential. Zero calibration can be performed directly from the outside. Pressure measurement: connect the pressure source to either of the two high-pressure ports with an air line and block the unused one; leave one or both of the low-pressure ports open to the atmosphere. Differential pressure measurement: connect the high-pressure source to either of the two high-pressure ports and the low-pressure source to either of the two low-pressure ports; block the two ports that are not used. Negative pressure measurement: connect the pressure source to either of the two low-pressure ports and block the unused one; leave one or both of the high-pressure ports open to the atmosphere. For a certified result, send the gauge to a calibration institute; self-calibration requires suitable equipment and qualifications, otherwise the result will not be recognized. If no formal report is required, you can calibrate the gauge yourself. First, at least one certified high-precision micro-pressure gauge or pressure gauge is required as a reference standard. Then select 5-7 points across the full range and apply a fixed pressure difference across the two ports of the differential pressure gauge. For a micro differential pressure gauge, a micro pressure pump can be used to apply pressure to the high-pressure port; for a larger-range differential pressure gauge, a liquid pump can be used. Then compare the readings of the reference standard and the differential pressure gauge, record them, and calculate the relative error (a worked example of this calculation appears below). The role of a pressure gauge is to measure and indicate the pressure inside a pressure vessel or product. If the pressure product has no pressure gauge, or the gauge fails, its internal pressure cannot be known, which directly threatens safety. With a sensitive and accurate pressure gauge installed on the pressure product, the operator can run it correctly and ensure safe and economical operation. The pressure gauge can accurately indicate the steam pressure in the product, and the operator can adjust the degree of heating according to the indicated value, in order to meet the requirements of the gas authority and keep pressure products operating safely. Sino-Instrument offers over 50 Differential Pressure Gauge products. About 50% of these are differential pressure meters, 40% are water meters, and 40% are level meters. A wide variety of Differential Pressure Gauge options are available to you, such as free samples and paid samples. Sino-Instrument is a globally recognized supplier and manufacturer of Differential Pressure Gauges, located in China. The top supplying country is China (Mainland), which supplies 100% of them. Sino-Instrument sells through a mature distribution network that reaches all 50 states and 30 countries worldwide. Differential Pressure Gauge products are most popular in the Domestic Market, Southeast Asia, and the Mid East. You can ensure product safety by selecting from certified suppliers with ISO9001 and ISO14001 certification.
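To make the self-calibration procedure described above concrete, here is a minimal sketch, not vendor software, that takes a handful of reference/gauge reading pairs and reports each point's error both as a percentage of full scale (for checking against a ±2% FS accuracy class) and relative to the reference reading. The gauge range and test values are invented purely for illustration.

```python
# Minimal bookkeeping for the self-calibration procedure described above:
# apply a known differential pressure at several points, read the certified
# reference standard and the gauge under test, then compute the errors.

def calibration_report(points, full_scale):
    """points: iterable of (reference_reading, gauge_reading) pairs, same unit as full_scale."""
    rows = []
    for ref, gauge in points:
        error = gauge - ref
        pct_full_scale = 100.0 * error / full_scale          # error as a percentage of full scale
        pct_of_reading = 100.0 * error / ref if ref else float("nan")
        rows.append((ref, gauge, error, pct_full_scale, pct_of_reading))
    return rows

if __name__ == "__main__":
    # Hypothetical 0-500 Pa gauge checked at six rising-pressure points (illustrative values only).
    data = [(50, 51), (100, 102), (200, 203), (300, 304), (400, 404), (500, 507)]
    for ref, gauge, err, fs_pct, rd_pct in calibration_report(data, full_scale=500):
        print(f"ref {ref:4d} Pa  gauge {gauge:4d} Pa  error {err:+3d} Pa  "
              f"{fs_pct:+.2f}% FS  {rd_pct:+.2f}% of reading")
    # A gauge in the standard accuracy class should stay within roughly +/-2% FS.
```

Reporting the error against full scale rather than against each reading matches the way the ±2% accuracy figure is specified, which is why small absolute errors at the bottom of the range can look large as a percentage of reading yet still be within class.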
<urn:uuid:55759ae6-428f-4023-91cd-3979fbcdc7c7>
CC-MAIN-2021-43
https://www.drurylandetheatre.com/si-d2000-differential-pressure-gauge/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585280.84/warc/CC-MAIN-20211019171139-20211019201139-00470.warc.gz
en
0.853927
3,143
2.53125
3
If you’ve ever been to a shrine in Japan, odds are you’ve seen a pair of dog-like lions flanking the entrance. If you’ve been to Okinawa you’ve seen them just about everywhere. In fact you can see some variation on these creatures in China, Korea, Myanmar, Tibet, and other East Asian countries, or even at Chinese restaurants in the West. They are variously known in English as lions, dogs, lion dogs, Fu dogs or Foo dogs. In Japan they are called komainu 狛犬, and in Okinawa they are shīsā. All these different names beg the question, “What exactly are they?” Canine or Feline? I’ll refrain from thrashing about the shrubbery and say right away that these animals are in fact lions. How then, did they come to be called dogs by some? We’ll come to that momentarily, but first we must look to India. There are also ancient lion statues in Middle Eastern countries, but India is the surest place to begin the lion statues’ path to Japan, for it seems to have moved along with the Buddhist faith. Lions appeared in Indian temple art and, as early as the third century, showed up in Chinese Buddhist art. In those times, the lion was a symbolic protector of the dharma (the teachings of Buddha). “If it’s good enough for Buddha, it’s good enough for the emperor,” may have been the line of thought, for, over time, they also became protectors of imperial gates. Here the history seems to become a bit unclear. The Chinese word for lion (statues included) is shi 獅 or shishi 獅子, but there was another creature that appeared in China at around the same time called the xiezhi 獬豸. At some point between the third and seventh centuries, paired stone xiezhi also made their way to Korea, where the name was pronounced haetae or haechi. The haechi appears very lion-like, but often has a scaly body, a small horn on its head, and sometimes small wings. By the Nara period (710-794), lion guardians had come to Japan as well. I found nothing to indicate whether the original source of their introduction was China or Korea. Early on, they were usually made of wood and only used indoors. In the ninth century, a change occurred, and the pair came to consist of one open-mouthed lion (shishi 獅子) and one close-mouthed, horn-bearing, dog-like komainu. The name komainu itself means “Korean dog.” Given the name and its horn, it would seem that the komainu, at least, came from the Korean haechi. By the fourteenth century the horn disappeared, and both animals of the pair came to be known as komainu. At the same time, people started making them in stone and using them outdoors. Again, the history seems to be vague, and I found no sources to solidly confirm how komainu came to be ubiquitous at shrine entrances. This is only me theorizing, but I think it likely that lion guardians may have initially been associated with Buddhist temples. I say this because of the lions’ Buddhist associations in China, and the early Korean influences on Japanese lions (Buddhism having been introduced to Japan from Korea in 552 CE). If this was the case, the shift from temples to shrines could be explained by the fact that they often shared grounds and, in trying to spread the faith, Buddhists often drew parallels between characters and symbols of their religion and those found in Japan’s native beliefs. You may be wondering if anyone in pre-modern Japan had ever seen a real lion. It’s a long way from the savannah, but there are Asiatic lions as well. 
Although their range is quite small today, prior to the nineteenth century they could be found throughout Persia, Palestine, Mesopotamia, and much of India. Captive lions were also known in China. I was unable to find any sources confirming or denying the presence of captive lions in Japan. However, during the Tokugawa periods, exotic animals were sometimes featured as part of festivals, so there is a possibility. Still, I think it’s safe to say that the vast, vast majority of Japanese people had never seen a real lion prior to the modern age. Open Wide and Say あ When seen in pairs, both in Japan and Okinawa, one lion usually has its mouth open while the other’s is shut. It’s no coincidence, but rather Buddhist symbolism. The open mouth is meant to be forming the sound “a” あ, while the closed mouth is forming the sound “un” うん. Combined, they form the word a-un, the Japanese rendition of the Indian word om ॐ. Originating in Hinduism and adopted by Buddhism, om’s meaning seems somewhat vague at times, but is sometimes described as the name of God or the sound of the vibration of the universe. At least in Japan, “a” and “un” are also symbolic of beginnings and endings, in the same way that Western countries use alpha and omega. It’s also sometimes said that the open-mouthed animal is male, while the other is female. Komainu: Popular Protector In Japan lion statues are a fixture on shrine grounds, but seldom seen elsewhere. On the other hand, anyone who has been to Okinawa will know you can’t swing a cat without hitting a lion, though you probably wouldn’t want to do that. I’m sure the cat wouldn’t appreciate it, and the lion might take offense at your mistreatment of his cousin. That said, lion statues are omnipresent in Okinawa. In Okinawa lion statues are known as shīsā, meaning lion. They are made of a variety of materials, though the signature regional choice is red clay. They can be found not only at areas of special spiritual significance, but on the roofs or at the entrances of homes and businesses. It’s also easy to acquire your own shīsā, as statues of all sizes are nearly ubiquitous among souvenir shops. Is it a bird? A plane? No . . . It’s Shisa-man! They may not be faster than a speeding bullet, in fact they’re usually quite stationary, but a shīsā’s powers are nothing to be trifled with. Here are two legends of shīsā heroism: A Chinese envoy brought a gift for the king, a necklace decorated with a figurine of a shisa. Meanwhile, at Naha bay, the village of Madanbashi was being terrorized by a sea dragon that ate the villagers and destroyed their property. One day, the king was visiting the village, when suddenly the dragon attacked. All the people ran and hid. The local priestess had been told in a dream to instruct the king when he visited to stand on the beach and lift up his figurine towards the dragon; she sent a boy to tell him. The king faced the monster with the figurine held high, and immediately a giant roar sounded throughout the village, a roar so deep and powerful that it even shook the dragon. A massive boulder then fell from heaven and crushed the dragon’s tail. He couldn’t move, and eventually died. At Tomimori Village in the far southern part of Okinawa, there were often many fires. The people of the area sought out a Feng Shui master, to ask him why there were so many fires. He believed they were because of the power of the nearby Mt. Yaese, and suggested that the townspeople build a stone shisa to face the mountain. 
They did so, and thus have protected their village from fire ever since. shīsā also feature in some much more modern stories. King Shīsā キングシーサー, a giant monster based on a shīsā, first appeared in Godzilla vs. Mechagodzilla in 1974, and again in 2004’s Godzilla: Final Wars. In the English dub his name was changed to King Caesar, which seems a bit redundant. In his first appearance, King Shīsā was a benevolent protector of humanity, but had been sleeping inside a mountain in Okinawa for a long time. When Godzilla alone cannot defeat his robotic doppleganger, the human heroes of the film awaken the ancient King Shīsā with a very non-ancient sounding song. Then King Shīsā and Godzilla team up to pound Mechagodzilla. In Godzilla: Final Wars, King Shīsā fights against Godzilla, but since he was being controlled by aliens we won’t hold it against him. In these movies, King Shīsā favors close combat, although he does have the ability to redirect an opponent’s energy attacks. Komainu: King of the Beasts Though a lot of their past remains unclear, guardian lions are fascinating. Although there are tons of komainu to be seen at shrines across Japan I’m sad to say that I haven’t seen them utilized much in modern pop culture. Maybe some of you out there know of some examples of which I’m unaware. On the other hand, the Okinawan shīsā is very much a living symbol, so at least this overlooked legend has a happy home in Ryukyu. Okay, that's all from me!
<urn:uuid:e465671c-7964-4858-8ee1-5f5cd501c64e>
CC-MAIN-2021-43
https://www.tofugu.com/japan/komainu/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00349.warc.gz
en
0.975369
2,051
3.203125
3
This conflict summary was commissioned by the International Center on Nonviolent Conflict (ICNC). We are an educational organization dedicated to developing and sharing knowledge related to nonviolent civil resistance movements for human rights, freedom, and justice around the world. Click here to access ICNC’s homepage. Pakistan’s Movement for the Restoration of Democracy (1981-1984) A coalition of eleven Pakistani political parties known as the Movement for the Restoration of Democracy (MRD) formed in 1981 to pressure the dictatorial regime of Muhammad Zia-ul-Haq to hold elections and suspend the martial law. The MRD, which remained mostly nonviolent, was strongest among supporters of the Pakistan People’s Party (PPP) in Sindh Province. Though it launched one of the most massive nonviolent movements in South Asia since the time of Gandhi in 1983, failure to expand beyond its southern stronghold combined with effective repression from the military led to its demise about a year and a half later. Zulfikar Ali Bhutto became President of Pakistan in 1971 and Prime Minister in 1973 and served in both positions until a coup ousted him in 1977. Bhutto was native to Pakistan’s Sindh Province, which lies in the far southeast of the country bordering India and the Arabian Sea. He was charismatic and popular among supporters of the large Pakistan People’s Party (PPP), which he had founded. The PPP slogan was, “Islam is our faith, democracy is our policy, socialism is our economy: All power to the people.” President Bhutto nationalized major industries, increased the power of workers’ unions and redistributed over a million acres to landless peasants. He convened the National Assembly on April 14, 1972, to create a new constitution, which it completed a year later. Bhutto’s popularity, however, sharply declined in subsequent years as he also assumed the role of Prime Minister. From 1974 to 1977, Pakistan experienced a series of high-profile assassinations, disputed elections, and episodes of political infighting that created a sense of public disorder. Corruption was ubiquitous and Bhutto made a series of unpopular compromises with landholders and elites. Bhutto’s opponents would often disappear. Organized street demonstrations against him became increasingly common. The military finally responded to rising anti-government unrest by staging a coup in July 1977 and arresting Bhutto and members of his cabinet, charging them with complicity in a political assassination. Army Chief of Staff General Muhammad Zia-ul-Haq became Chief Martial Law Administrator and claimed immediate control of Pakistan by suspending the new constitution and dissolving national assemblies. Zia promised to hold elections within three months of taking power but never did. He pursued a broad policy of Islamization of a particularly reactionary orientation, reintroducing such medieval punishments as amputation, stoning, and flogging. The United States, traditionally Pakistan’s biggest foreign backer, suspended aid in 1977 due to its nuclear program. Zia’s isolation ended with the Soviet invasion of neighboring Afghanistan in late 1979, which precipitated massive U.S. investment in Pakistan’s military. Billions of dollars and weaponry flowed to the Afghan Mujahadeen by way of the Pakistani Inter-Services Intelligence Agency (ISI). Zia was transformed overnight from a reprehensible dictator to an ally in the fight against Soviet communism. Zulfikar Ali Bhutto was tried, convicted and sentenced to death.
Despite appeals by foreign leaders for clemency for the former president, he was hanged in April 1979. Meanwhile, nearly 3,000 PPP supporters were jailed, many of whom remained imprisoned for the next decade. Zia was particularly unpopular in the Sindh Province, where support for the PPP remained relatively strong. The first stirrings of a significant opposition movement against Zia’s regime arose in February of 1981. Eleven diverse political parties formed a coalition called Movement for the Restoration of Democracy (MRD) to pressure Zia’s regime to hold elections and suspend the martial law. Zukifar Ali Bhutto’s PPP was prominently included, as well as the Awami Tehrik, the Jamiat Ulema-e-Islam, the National Awami Party, the National Democratic Party, the Pakistan Mazdoor Kisan Party, the Pakistan Muslim League, the Pakistan National Party, the Quami Mahaz-e-Azadi and the Tehrik-e-Istiqlal. Many of the parties in the MRD were formerly antagonistic to each other, but became united in opposition to Zia. The primary base of support for the MRD lay in the Sindh Province. The MRD immediately initiated a campaign to pressure Zia to suspend the martial law and restore democracy. They issued a press release calling for free, fair and impartial elections. However, the effort soon became compromised when armed hijackers seized a Pakistan International Airlines plane and forced it to land in Kabul, Afghanistan. The hostage-takers killed several passengers, among them a member of a powerful Pakistani family. The hijackers belonged to a group known as Al-Zukifar, which was led by Bhutto’s son. The popular backlash to the terrorists’ links to the MRD, however indirect, crippled the movement. It would take two years to recover from the hijacking. By 1983, the MRD regained enough momentum to reassert itself. Zia sensed the MRD would likely choose Independence Day, August 14, to renew its offensive. To cut them off he announced a plan for the restoration of democracy on August 12, 1983. However, Zia’s speech elaborated merely an intention to move toward democracy rather than any specific proposals. Details regarding the role of the military, the 1973 constitution, and the future of political parties were left unclear. The MRD, deflated by the surprise move, nonetheless called for the launch of a popular campaign two days later. Based on lessons learned in previous civil insurrections, including the abortive 1968-69 uprising against the Ayub Khan dictatorship and the 1977 protests against Bhutto himself, MRD organizers ordered movement leaders to seek voluntary arrest and rally their supporters in the streets. To avoid alienating the public, a policy of selective aggression was advanced in which MRD supporters channeled their energy against government personnel rather than public property. Uniformed military personnel were similarly avoided in the hope of minimizing violent retaliation. Foreign news media were updated on arrests and violence, but domestic news – heavily censored by the regime – remained quiet. MRD organizers led processions out of Sindhi villages to provoke arrest. Millions of people took part in boycotts and strikes and hundreds of thousands took part in demonstrations. The conflict became particularly intense in rural areas of Sindh Province. 
Zia’s effort to portray the MRD as an Indian-backed conspiracy to destabilize Pakistan was without merit, but gained credence among some Pakistanis when Indian Prime Minister Indira Gandhi endorsed the movement in an address to the lower house of the Indian parliament. Despite charges to the contrary, the MRD in Sindh was not attempting to secede from Pakistan but instead was focused on the restoration of the constitution. However, particularly under the leadership of PPP veteran Ghulam Mustafa Jatoi, the MRD was perceived by many to be a Sindhi movement seeking redress for various grievances at the hands of the majority Punjabi-dominated administration in Islamabad. As a result, it became difficult for the movement to expand beyond its base in that southern province. Zia’s interior secretary, Roedad Khan, later wrote that the regime was able to manipulate this perception to their advantage and prevent the MRD from gaining greater appeal on a nationwide level. Within Sindh, however, the movement had broad support, forcing Zia to send 45,000 troops into the province to suppress the uprising. Between 60 and 200 people were killed and up to 15,000 were arrested. The jails overflowed and the regime was forced to set up camps to keep prisoners in tents. By November, it became apparent that the movement was not gaining momentum nationally and Zia was not prepared to concede. The Pakistani military was quite effective in its repression, avoiding where possible those seeking arrest and not creating martyrs by arresting the top leadership, but instead rounding up the second and third level organizers on the community level. This strategy cut the center out of the Movement organization. In 1984, Zia called for a referendum seeking approval for his ultra-conservative and authoritarian brand of Islamization. Most of the MRD parties boycotted the referendum and only 10 percent of eligible voters participated. Nonetheless, Zia declared a victory and hung on to office. In August 1988, Zia was killed in a suspicious plane crash that also took the lives of many of his top aides and the U.S. ambassador to Pakistan. Elections that soon followed returned the PPP to power and the MRD dissolved. After a decade of largely democratic but corrupt rule, another military government seized power, receiving well over a billion dollars in U.S. military assistance over the next eight years until General Pervez Musharraf was forced out in 2008 in large part due to a civil insurrection led by lawyers and other civil society organizations. Boycott, mass demonstrations, voluntary arrest. However, some nonviolent resistance has been mixed with rioting and small-scale armed clashes. Pakistan is today technically democratic, although the PPP government does not have total control of much of the military and intelligence agencies. The current government is also riddled with corruption and is still dominated by the same elite families that had run the PPP for decades, raising questions regarding the depth and future of Pakistani democracy. However, the emergence of civil society movements in the Sindh uprising of the 1980s and in the more recent struggle against the Musharraf regime give some promise of the emergence of a new political culture. For Further Reading - Bin Sayeed, Khalid (1984). “Pakistan in 1983: Internal stresses more serious than external problems”. Asian Survey, Vol. 24, No. 2, A Survey of Asia in 1983: Part II. Pp. 219 – 228. - Ali Shah, Mehtab. (1997). 
The Foreign Policy of Pakistan: Ethnic Impacts on Diplomacy, 1971- 1994. I.B. Tauris & Co.: London. - Duncan, Emma (1989). Breaking the Curfew: A Political Journey Through Pakistan. Penguin Group: London. About this Conflict Summary This conflict summary was commissioned by the International Center on Nonviolent Conflict (ICNC). We are an educational organization dedicated to developing and sharing knowledge related to nonviolent civil resistance movements for human rights, freedom, and justice around the world. Learn more about our work here. Hundreds of past and present cases of nonviolent civil resistance exist. To make these cases more accessible, ICNC compiled summaries of some of them between the years 2009-2011. You can find these summaries here. Each summary aims to provide a clear perspective on the role that nonviolent civil resistance has played or is playing in a particular case. They are authored by people who have expertise in a particular region of the world and/or expertise in the field of civil resistance. Each author speaks with his/her own voice, and conflict summaries do not necessarily reflect the views of ICNC. To support scholars and educators who are designing curricula and teaching this subject, we also offer an Academic Online Curriculum (AOC), which is a free, extensive, and regularly updated online resource with over 40 different modules on civil resistance topics and case studies.
<urn:uuid:79677eca-d652-488c-990e-490444d7d342>
CC-MAIN-2021-43
https://www.nonviolent-conflict.org/pakistans-movement-restoration-democracy-1981-1984/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00510.warc.gz
en
0.964182
2,393
3.171875
3
The Indian Penal Code (Act No 45 of 1860) criminalized abortions and included severe punitive measures against the woman and the abortion provider. In an effort to reduce maternal deaths caused by septic abortions, the Shantilal Shah Commission was set up by the Government of India in 1966. Based on their recommendations, The Medical Termination of Pregnancy (MTP) Act was passed by the Parliament in 1971. The MTP Act (Act No. 34 of 1971) 1 India, has been defined in its opening lines as ‘An Act to provide for the termination of certain pregnancies by registered medical practitioners and for matters connected therewith or incidental there to’. An adult woman requires no other person’s consent except her own. When Pregnancies may be Terminated: Pregnancies not exceeding 12 weeks may be terminated based on a single opinion formed in good faith. In case of pregnancies exceeding 12 weeks but less than 20 weeks, termination needs opinion of two doctors. Mifepristrone (RU 486) & Misoprostol are approved for use up to 63 days gestation. Who may terminate a pregnancy : Only a Registered Medical Practitioner as defined by the MTP Act, can provide surgical abortion and prescribe the drugs. Grounds for Termination: A pregnancy may be terminated for the following indications : - If the pregnancy would involve a risk to the life of the pregnant woman or of grave injury to her physical and mental health. - If there is a substantial risk that if the child was born, it would suffer physical or mental abnormalities as to be seriously handicapped. Explanations I and II further clarify the following indications: - Pregnancy alleged by the pregnant woman to have been caused by rape. - Pregnancy resulting from a failure of any device used by any married woman or her husband for the purpose for limiting children. The MTP Act does not permit induced abortions on demand. The responsibility rests with the medical practitioner to opine in good faith regarding the presence of a valid legal indication. Such a provider-dependent policy may sometimes result in denial of abortion care to women in need, especially the more vulnerable amongst them. Secondly it states the need for two doctors to certify opinion for a second trimester MTP, which serves as a major restriction in places where there is scarcity of medical personnel. Moreover, while the MTP Act permits women seek legal termination of an unwanted pregnancy for a wide range of reasons, the clause about contraceptive failure applies only to married woman. The MTP Act of 1971 has been an empowering act for the healthcare system and its beneficiaries, setting aside the application of the Indian Penal Code in certain well-defined situations. It allows clinicians to offer legal safe abortion services within well-defined limits. Even today, voluntarily ‘causing miscarriage’ to a woman with child – other than in ‘good faith for the purpose of saving her life’ is a crime under Section 312 of the Indian Penal Code, punishable by simple or rigorous imprisonment and/or fine. The Pre-Conception Pre-Natal Diagnostic Techniques (PCPNDT) Prevention of Misuse Act was enacted and brought into operation from 1st January, 1996, in order to prevent sex selection which was resulting in termination of pregnancies with a female fetus. Some states in India still have a ‘two child family norm’ which provides disincentives for those who have more than two children, including being able to run for local political elections. 
Moreover, people nowadays prefer to have smaller families, but still desire to have sons. This combination creates a demand for sex selection. Government responses to this at the state and central level have resulted in community messages being confused to mean that all abortions are illegal, not only those sought for sex selection. Thus, unfortunately, access to safe abortion, especially in the second trimester, is getting more restricted. The belief that a restrictive abortion policy will prevent sex-selective abortion is baseless. Policies need to ensure that measures for preventing sex-selective abortion do not affect access to safe abortion care for the genuine abortion seeker. MBBS doctors with postgraduate training or qualifications in gynecology and obstetrics, or those having completed MTP training programmes, are recognized to perform MTPs. Although safe abortion access is legalized, it is not yet a right. India is a signatory to the ICPD and CEDAW, with certain reservations. Statistics: Unsafe abortions are among the major preventable causes of maternal morbidity and mortality in India. Most abortions are not reported, and hence the available statistics on abortion in India are of varying reliability. According to the Consortium on National Consensus for Medical Abortion in India, the available statistics are grossly inadequate, as hospitals keep records of only legal and reported abortions; reported figures include only legal, reported induced abortions. 1st and 2nd trimester abortion services should be available throughout the public sector. However, in reality access to safe abortion is denied for various reasons, such as a lack of medical facilities or a lack of doctors at those health centres where facilities are available. In the public sector, 1st and 2nd trimester surgical abortions are supposed to be free of cost, while medical abortion is not yet available. Abortion services, 1st and 2nd trimester, are easily available in the private sector. Almost all specialist gynaecologists in the private sector will provide surgical and medical abortions. In the private sector the cost ranges are roughly: surgical abortion, 1st trimester, 1,000-2,000 INR; 2nd trimester, 5,000 INR; medical abortion, 1,000 INR. Methods used for abortion in the 1st trimester are dilatation and curettage (D&C), electric vacuum aspiration (EVA), manual vacuum aspiration (MVA), and medical methods of abortion (MMA) with Mifepristone and Misoprostol. 2nd trimester abortion is done with Ethacridine lactate instillation. Mifepristone and Misoprostol are increasingly being used, although as of now this is off-label use. According to the MTP Act, pregnancies may only be terminated in the following settings.
- A hospital established or maintained by the Government.
- A place approved for the purpose of the Act by the Government.
Any procedure performed in a centre which does not have government approval is deemed illegal. In the case of medical methods for termination of pregnancy not exceeding 63 days, the drugs may be prescribed by a registered medical practitioner having access to a place approved by the Government for surgical and emergency backup when such is indicated. All approved centres are required to maintain an Admission Register in the prescribed format for at least 5 years from the last entry. This is a secret document and can only be revealed under order of a court.
According to the Consortium on National Consensus for Medical Abortion in India, an average of about 11 million abortions take place annually, and around 20,000 women die every year from abortion-related complications. Most abortion-related maternal deaths are attributable to unsafe abortions. All abortion equipment is available in India. There are currently 20 brands of the mifepristone-misoprostol combination (Mife-Miso) being sold, and the combipack has been on the market since September 2009. Various pharmaceutical companies manufacture mifepristone and misoprostol; the tablets are available individually or as kits at the pharmacy.
Pregnancies may only be terminated in the following settings:
- A hospital established or maintained by the Government.
- A place approved for the purpose of the Act by the Government. The Government should be satisfied with its safety and hygiene, and the following facilities should be provided:
- An OT table and instruments for abdominal and gynaecological surgery.
- Anesthetic, resuscitation and sterilisation equipment.
- Drugs and parenteral fluids for emergency use.
Any procedure performed in a centre that does not have government approval is deemed illegal.
District hospital (first referral) level: District-level facilities should offer all the primary-care-level abortion services outlined by the Government, even where these are also available at the primary-care level. Hospitals should offer abortion care on an outpatient basis, which is safe, minimizes costs and enhances convenience for women.
Secondary and tertiary referral hospitals: According to Government standards, secondary and tertiary hospitals should have the staff and facility capacity to perform abortions in all circumstances permitted by law and to manage all complications of unsafe abortion. The provision of abortion care at teaching hospitals is particularly important to ensure that relevant cadres of health professionals develop competence in abortion service delivery during clinical training rotations.
A large pool of informal providers fills the gap between the demand for abortion services and the low availability in the formal sector. Informal providers include herbalists, faith healers, traditional birth attendants, even nurses or auxiliary nurse midwives, paramedics, and unqualified persons (dais, magicians, ozha and other indigenous providers). They may also include practitioners of Indian systems of medicine (ISM), including homeopaths and ayurvedic physicians, largely located in villages and small towns. In remote rural and tribal areas, where the services of formal providers are not readily available, women depend on them. At the same time, in rural areas informal providers are preferred for inducing abortion among women who conceive out of wedlock, because of the confidence that they will maintain secrecy and protect the family honour.
According to the Abortion Assessment Project – India report, of the total abortion facilities surveyed, the public sector accounts for only one-fourth. This low level of investment by the state, in the context of large-scale poverty, limits women's access to abortion services. This is exacerbated by the fact that the PHCs mandated by policy to provide abortion services are not doing so in any significant numbers, as most public facilities are district, sub-divisional or rural hospitals.
The availability of abortion facilities in both better-developed and less-developed regions is reasonably good, at 4 facilities per 100,000 population, with public facilities accounting for one-fourth of this. A large proportion of the legal providers are gynaecologists, and a majority of them are female. National policy encourages the promotion of family planning services to prevent unwanted pregnancies and, at the same time, recognises the importance of providing safe, affordable, accessible and acceptable abortion services to women who need to terminate an unwanted pregnancy. The recent amendment decentralizing the regulation of abortion care to the district level serves to encourage registration of abortion facilities by minimising administrative delays. While defining corrective measures to deter abortion facilities that provide unsafe abortion care, the Act offers registered providers full protection from legal proceedings for any injury caused to a woman seeking abortion.
Hindu, Muslim, Christian, Sikh, Jewish, Jain and Buddhist communities all have conservative or orthodox elements that oppose abortion. All obstetrician-gynaecologists are taught abortion procedures as part of undergraduate and postgraduate studies.
The MTP Act of 1971 did not provide abortion as a right to women. It expanded the permitted reasons for abortion in India, legalising abortion subject to the fulfilment of certain conditions. Abortion on any grounds other than those specified in the law is an offence punishable under the Indian Penal Code. The Medical Termination of Pregnancy Act, 1971, discriminates against unmarried women by not recognising that unwanted pregnancies in unmarried women can result in at least as much anguish and suffering as that experienced by married women. The number of medical practitioners required to give their assent for termination is contingent upon the duration of the pregnancy: pregnancies of more than 12 weeks require the opinion of two doctors. Non-allopathic doctors are not included as abortion service providers; if these services were expanded through such providers, many women would benefit, especially in rural areas where access to safe abortion is a major problem. Though abortion has been legal in India since 1971, the community is largely unaware of this, and therefore many women fall prey to unsafe, illegal methods of abortion. More advocacy is needed to make the community aware of this issue.
Asia Safe Abortion Partnership (ASAP) is a network of activists, providers, researchers and others who bring a feminist perspective and a rights-based approach to women's sexual and reproductive health and rights. ASAP is an affiliate of the International Consortium for Medical Abortion (ICMA) and collaborates with various organizations. We work in 15 countries across Asia. ASAP serves as a forum for information and experience sharing, strategic thinking and planning for a collective vision aimed at regional and international advocacy. We support our members in undertaking research activities, capacity building and networking. We work to promote new technologies, including manual vacuum aspiration and medical abortion. We manage an e-forum with regular discussion and updates on issues related to women's health and rights.
- Government of India. The Medical Termination of Pregnancy Act, 1971 (Act No. 34 of 1971). Available from: http://mohfw.nic.in/MTP%20Act%201971.htm
- Siddhivinayak Hirve. Abortion Policy in India: Lacunae and Future Challenges.
Available from: http://www.cehat.org/aap1/policyreview.pdf
- India Development Gateway. Types of abortion services and where they are provided. Available from: http://www.indg.in/health/womenhealth/abortion
- Ravi Duggal and Sandhya Barge. Abortion Assessment Project – India: Abortion Services in India, Report of a Multicentric Enquiry. Available from: http://www.cehat.org/aap1/national.pdf
The Monetary History of Imperial Rome
Following the death of Julius Caesar and the conclusion of the final civil war of the period, a major monetary reform under Augustus changed the monetary system of Rome forever. It is with Augustus that we find a complete revision of the Roman monetary system, in which bronze coinage was at last reintroduced. Augustus retained exclusive control of the minting of gold and silver, while the bronze coinage came under the authority of the Senate in 23 BC. These Senatorial bronze issues were hallmarked with the letters S.C. ("Senatus Consulto"), confirming the Senate's authority. Prior to 4 BC, these bronze issues bear the names of the responsible moneyers as part of the legends. The monetary reform of Augustus revalued the denominations of gold, silver and bronze as follows:
- Gold (aureus): 25 denarii
- Gold (quinarius): 12.5 denarii
- Silver (denarius): 16 asses
- Silver (quinarius): 8 asses
- Orichalcum (sestertius): 4 asses
- Orichalcum (dupondius): 2 asses
- Copper (as): 4 quadrantes
- Orichalcum (semis): 2 quadrantes
- Copper (quadrans): 0.25 as
Prior to the Augustan reform, gold was never a regular part of the Roman monetary system; its appearance tends to be concentrated in periods of war. It is therefore with Augustus that gold became a regular issue. The dupondius and as were similar in size yet distinguishable by the colour of the metals (yellow orichalcum, red copper). At several Asiatic mints Augustus continued to strike the large silver pieces, equal to three denarii, which are usually termed cistophori or tetradrachms. Coins of this size and value had been the primary coinage of Asia Minor since the 2nd century BC. The expanded Roman Empire thus sought to maintain the accepted old Greek standards of money by producing these larger silver denominations, but in the general style of Roman coinage, sometimes with Latin legends and sometimes in Greek.
The emperor Nero took considerable interest in the Imperial coinage. The dupondius of Nero was further distinguished by the radiate crown worn by the emperor, which became a distinctive feature of the Roman monetary system. This radiate crown was eventually adopted by Caracalla to create double gold and silver denominations. Nero also instituted a quadrans struck in orichalcum in addition to the earlier pieces struck in copper. It appears that Nero, who was artistically inclined, may have had the ultimate goal of creating a more attractive coinage series by discarding copper in favour of the brass appearance of orichalcum. Nevertheless, while the radiate crown survived as a means of marking double values, Nero's experimentation with various metals did not survive his own reign, apart from a few small exceptions such as the orichalcum asses struck by Trajan and Hadrian. The sestertius, similar in size to the European crown or the US silver dollar, ultimately became a standard monetary unit which survived until the late 3rd century AD; many legal contracts were also expressed in terms of sestertii.
The monetary reform of Nero also involved a reduction in the weight of the gold and silver coinage. Nero further reduced the fineness of the silver denarius in an attempt to increase the money supply, reflecting the inflationary pressures of the period. The silver denarius of the late Republic up to 64 AD was struck for the most part at an average fineness of 98%.
The pre-reform period did see a slight reduction in the weight of the denarius, dropping from 4 grams to 3.5 grams by the time Nero came to the throne. However, the monetary reform of Nero in 64 AD reduced the silver denarius to 93.5% fineness with an average weight of 3.36 grams. With the outbreak of civil war following the death of Nero, the weight of the silver denarius was further reduced, falling to 2.93 grams on average while the fineness remained in the 94% range, with some pieces at times dropping below 90%. The victor to rise from this civil war was Vespasian. Here we find that the average silver content between 69-71 AD clearly stood at 90%. Vespasian attempted to restore the silver content by raising it to 92.5% on average in 72 AD. This demonstrates that there was at least some public concern over the debasement of the silver coinage that had prevailed during the Civil War of 68-69 AD. Vespasian's attempt at restoring the silver content of the denarius was short-lived. His youngest son, Domitian, lowered the silver content once again upon coming to the throne in 81 AD; between 81-82 AD the silver content appears to have declined to 91.5%. Domitian then raised the silver content to 97.9% during 82-85 AD, restoring the denarius to nearly Republican standards. However, the inflationary pressures of the period made this effort unsustainable, and the silver content dropped once again to 93.5%, where it remained between 85-96 AD. The silver content of the denarius was slightly reduced again to 93.25% when Nerva came to the throne in 96 AD. Trajan appears to have maintained the silver content of the denarius at an average of 93.5% during 98-100 AD. No doubt the Dacian wars prompted a further reduction in fineness to 92.75% between 101-102 AD, 91.5% in 103-111 AD, and a final decline back to Civil War levels of 90% between 112-117 AD. Hadrian continued the debasement trend in Roman silver coinage as the fineness fell to 88.5% between 117-118 AD. Still, Hadrian's reforms, which included taxation, also sought to raise the silver content once again in 119 AD back to 90%, where it remained until 128 AD, when it rose slightly to 90.5%. Antoninus Pius, Hadrian's successor, reduced the fineness once again to 88.5% upon rising to the throne in 138 AD. It would appear that as a new emperor came to power, monetary gifts were necessary to ensure or "buy" loyalty. As a result, during the 2nd century AD we find a pattern of almost immediate debasement during the early years of a new emperor. In 140 AD, Antoninus Pius raised the silver fineness slightly to 88.75%, and again in 148 AD to 89%. However, financial pressures once again appeared, and the fineness of the silver denarius dropped sharply to a new low of 83.25% in 150 AD. Again there was an attempt to restore the silver content, which was raised back to 86.5% in 158 AD. When Marcus Aurelius came to power in 161 AD, we once again see an almost immediate reduction in the silver content to a new historical low of 77.5% between 161 and 165 AD. Aurelius raised the silver content back to 80% in 165 AD, but by 170 AD it fell once again to 78%, rising slightly to 78.5% in 175 AD. Upon the death of Marcus Aurelius and the rise to power of his son Commodus in 180 AD, we once again see an immediate decline in silver content, to 75%. Commodus, however, made no attempt to restore the fineness of the silver coinage following his early years.
In fact, a further reduction in the fineness of the denarius took place in 184 AD, with a modest decline to 74.5%. By 188 AD, the silver fineness had declined further to 73%. In addition to a steady reduction in silver content, Commodus also reduced the weight standard: between 180-186 AD the weight of the denarius declined to 3.16 grams, while between 187-192 AD it was reduced to 2.86 grams. Following the assassination of Commodus in 192 AD and the outbreak of another civil war, both the silver content and the weight of the denarius declined. Didius Julianus, who made the highest bid for the throne at the auction held by the Praetorian Guard, was unable to come up with the cash to pay the troops because Commodus had depleted the treasury far more than he had expected. The weights of even the bronze coinage declined by at least 10%, while the gold coinage declined by some 5% or more. The conclusion of the Civil War of 193-194 AD resulted in the rise of Septimius Severus to the throne of Rome. As with Vespasian, Severus attempted to restore order by initially raising the monetary standards. The fineness of the silver denarius was raised to 79% in an attempt to reinstitute pre-Commodus coinage. This attempt to restore confidence was yet another brief shining moment in the monetary history of Rome. By mid-194 AD, the inflationary pressures of the final decade of the 2nd century had become immense. The silver content declined sharply to 60.5% in 194 AD, with a further reduction to 56.5% in 196 AD. The coinage record illustrates that this final decade produced what can fairly be termed the Financial Panic of 194 AD: within a two-year period the silver content of the denarius collapsed by roughly 28% relative to its restored level, implying that inflation must have been much greater. The Roman monetary system and economy appear to have survived the financial panics of this final decade of the 2nd century. The silver content declined only slightly to 55% in 202 AD, where it remained until 209 AD, when it rose briefly back to 55.75%. With the death of Septimius Severus and the succession of his two sons Caracalla and Geta in 211 AD, we once again see the financial pressures of a change in government. The silver content of the denarius was lowered from 55.75% to 54.75%. The following year, 212 AD, when Geta was murdered by his brother Caracalla, the silver content was reduced again to 50.5%. This near-10% reduction in silver content coincided with Caracalla's purge of Geta and all his supporters, leaving once again a financial need to firm up his support. After this crisis, Caracalla did raise the silver content slightly back to 51.25% in 215 AD, shortly before his murder in 217 AD. The monetary reform of Caracalla also included the issuance of two new denominations – the double denarius and the double aureus. The new silver denomination became known as the antoninianus, named after Caracalla's official name, Antoninus. Both the gold and silver double denominations were distinguished by the portrait of the emperor wearing a radiate crown, the same method of distinction between the dupondius and the as originally introduced by Nero. The weight, however, was not double but on average only 50% greater in both cases. Clearly, the inflationary pressures during the reign of Caracalla were significant.
Although details remain to be confirmed, the publicly known facts about President Trump's interactions with Ukraine support a case for impeachment based on abuse of presidential power. Impeachment has always been, first and foremost, a constitutional defense against executive misuse of power. Impeachable abuses have never required proof of crime. Mr. Trump's behavior is a classic example of abuse of presidential power for personal or political gain, and is therefore properly impeachable.
The Historical Pedigree of Non-Criminal Abuse of Power as an Impeachable Offense
Since the British invented impeachment in the 14th century as a parliamentary weapon against royal overreach and official misconduct, abuse of power has been on the short list of behaviors meriting impeachment. In Anglo-American practice, the essence of an impeachable "abuse of power" is the illegitimate use of power legitimately bestowed on the individual by virtue of his office. Impeachments for abuse of power were common in England during the four centuries preceding the American founding. The most common abuses charged were varying forms of self-enrichment or misuse of authority to gain personal power. Corruption of the financial sort figured in impeachments of the Earl of Suffolk in 1386, Sir Francis Bacon in 1621, the Earl of Strafford in 1641, and many others. But self-aggrandizement of non-monetary kinds also featured largely during this period. The very first impeachment, that of Lord Latimer in 1376, charged that he "notoriously accroached [gathered to himself] royal power." The Duke of Buckingham, a royal favorite, was impeached in 1626 for using the powers granted him by the king both to enrich himself and to build political power for himself and his family. The reason old British history matters today is that the American framers consciously modeled the impeachment mechanism with British practice in mind, seeking to vindicate the same principles. Among the features of impeachment they adopted from the mother country was the phrase "high crimes and misdemeanors," which they knew to be a term of art Parliament had used intermittently since 1386 to describe the kinds of conduct properly impeachable under the unwritten British constitution. In fact, when George Mason proposed adding "high crimes and misdemeanors" to treason and bribery in the definition of impeachable conduct in the U.S. Constitution, he justified the addition in part by reference to the impeachment of Warren Hastings, Governor General of Bengal, that had just begun in England. Adding this phrase, said Mason, was necessary so American impeachment would cover the kinds of offenses charged against Hastings and other British ministers. There are two key points about Hastings' charges: First, the allegations against Hastings involved abuses of his powers as Governor General: oppression of native populations under his control, autocratic or deceitful dealings with rulers of Indian principalities, misconduct of local wars, and self-dealing benefitting himself or other British officials. Second, few if any of the charges were actual indictable crimes. Edmund Burke, the principal parliamentary prosecutor of Hastings, conceded the point.
He said of the charges that they “were crimes, not against forms, but against those eternal laws of justice, which are our rule and our birthright: his offenses are not in formal, technical language, but in reality, in substance and effect, High Crimes and High Misdemeanors.” The Philadelphia delegates were also working against a background of recent American practice. Ten of the thirteen new states had written constitutions with impeachment provisions. Virtually all of them included language that extended to official abuse of power, and from 1776-1788, there were several state impeachment controversies involving abuse of power. The most notable was an (aborted) effort to impeach Thomas Jefferson on the ground that he had misused his authority as wartime governor of Virginia. We need not rely on the Framers’ knowledge of British or prior American practice to prove that they saw abuse of a variety of presidential powers as impeachable. They said so in plain language. At the constitutional convention, James Madison said an impeachment mechanism was necessary because a president “might betray his trust to foreign powers.” At the Virginia ratifying convention Madison noted that a president would be impeachable for abuse of the pardon power or for efforts to secure by trickery ratification of a treaty in the Senate. In short, the Framers adopted the phrase “high crimes and misdemeanors” both knowing and intending that the term included non-criminal abuses of official power. What’s more, although federal impeachment has been a rare event in the 230 years since the constitution was ratified, abuse of power is firmly established as impeachable in American practice. Federal judges have repeatedly been impeached (though not always convicted) for varying abuses of their lawful authority, from overtly corrupt rulings to sheer vindictive bullying. The second article of impeachment approved by the House Judiciary Committee against Richard Nixon charged abuse of power. Its many factual specifications boil down to the contention that Nixon tried to use federal agencies to help his friends, hurt his perceived enemies, and gain political advantage for himself. Conversely, the impeachment of President Bill Clinton illustrates the centrality of misuse of official power to successful presidential impeachment. Clinton surely escaped conviction in the Senate precisely because, however despicable (and illegal) his conduct may have been, it did not involve his powers or duties as president. His acquittal implies that abuse of official power is among the key distinctions between discreditable personal behavior and impeachable misconduct. When does exercise of legitimate presidential power become impeachable abuse? Defenders of this (and previous) presidents commonly ask how a president can be impeached for exercising a power he undeniably possesses. This question not only ignores history, but turns the constitutional function of impeachment on its head by ignoring the crucial issue – the manner in which the power was exercised and for what purpose. The founders included impeachment in the Constitution primarily to respond to misuse by the president of express or implied powers given him elsewhere in the document. 
To the founders, the main point of impeachment was that there must be a remedy when a president perverts the powers of his office, either for personal or political self-aggrandizement or when the president’s acts threaten the proper distribution of authority among the coordinate branches or otherwise offend the law or fundamental governing norms. That said, any abuse of power case requires that we distinguish between legitimate and illegitimate uses of presidential power. The most common illegitimate purpose is self-interest. Our constitution confers power on government officials to be employed for the public good, not to advance the private interests of the official. Although this principle is plain, its application can be difficult in the case of elected officials. When, for example, a president pursues a policy that is popular with voters, he furthers not only a policy objective but his personal interest in reelection. Most of the time, that is entirely appropriate. Indeed, it is an inescapable — and essential — feature of representative government. Presidents and other elected officers are to use their own judgment, but should also always bear in mind the wishes of their constituents. Because public and personal interests are inevitably commingled in almost any exercise of presidential authority, whenever an impeachable abuse of power is alleged we must consider three points: first, whether the challenged action serves the president’s private interests; second, whether the president’s behavior can be justified by plausibly legitimate reasons of state; and third, whether, even if a plausible public purpose is suggested, the private interest so far outweighs the public one that the president’s action should be deemed a pretext, and thus an abuse of power. The Ukrainian incident appears to be an impeachable abuse of power In his dealings with Ukraine, Mr. Trump may have misused at least three baskets of executive authority: supervision of domestic law enforcement and national security agencies, the commander-in-chief power, and the conduct of foreign policy. Whatever may be true in autocracies like Vladimir Putin’s Russia, in the United States, an elected official with authority over criminal investigative and prosecutorial agencies may not command those agencies to investigate his political rivals in order to gain an electoral advantage. The second article of impeachment approved by the House Judiciary Committee against Richard Nixon alleged exactly this kind of abuse of power. Nixon used or attempted to use the IRS, the Secret Service, the CIA, the FBI, and his own secret team of White House operatives to get political intelligence on the Democrats generally and dirt on individual “enemies.” When these misdeeds started to leak, Nixon used the powers of his office in an effort to suppress them. In sum, Nixon abused his authority over domestic law enforcement and the national security apparatus to damage political opponents. The Ukraine transcript suggests that Trump may have done the same by requesting or commanding Attorney General Barr to use the Justice Department to investigate what are, so far as is publicly known, wholly unsubstantiated allegations against Joe Biden and his son. This point requires further investigation inasmuch as we don’t yet know whether Trump contacted Barr, and if so what Barr did about it. 
What cannot be denied, however, is that Trump misused his constitutionally conferred authority to conduct foreign relations and his commander-in-chief power over military matters. To understand the magnitude of the abuse requires placing his behavior in geopolitical context. Ukraine, which shares a long land and sea border with Russia, gained its independence upon the collapse of the Soviet Union. President Putin, and many other Russians, view the disintegration of the Soviet Union as a tragedy. Putin is passionate to reverse it, at least to the extent of restoring Russian control over Ukraine and other states on the Russian frontier. To that end, Russia has purported to annex Ukrainian territory in the Crimea, in violation of the bedrock rules of the UN Charter that underpin our global order, and is currently engaged in intermittent military operations in the eastern portion of Ukraine where it supports a military separatist movement. In short, Ukraine is under a direct, urgent, and continuing threat of being swallowed by an expansionist Russia. The Russian threat is not merely to Ukraine, but to the overall peace and security of Europe, a matter of sufficient importance that the United States entered two world wars to preserve it. Ukraine maintains its precarious independence only by virtue of political, military, and economic support provided by the United States and the European Union. American support has included over $1 billion in congressionally authorized defense related aid during the past five years, various forms of intelligence cooperation, and maintaining economic and diplomatic pressure on Russia through sanctions and other means. In short, legislative and executive branches of the United States have adopted a policy – supported by legal, moral, and geopolitical considerations — of supporting Ukraine’s independence from Russia. The available evidence seems to demonstrate that Mr. Trump conditioned continuance of American support for Ukraine on making the relationship, in his word, “reciprocal.” As the public has now seen in a transcript of President Trump’s July 25 call with Ukraine’s newly-inaugurated President Zelensky, the reciprocal “favor” Trump demanded was that Ukraine investigate a debunked fringe theory that Ukraine, not Russia, meddled in the 2016 election, and that it investigate Joe Biden, the leading Democratic candidate to oppose Trump in the 2020 election. In short, Trump used two of his core presidential powers, and indeed leveraged the vast geopolitical might of the United States, to extort a country threatened with national extinction for the singular purpose of helping him win re-election. Mr. Trump and his defenders are busily offering rationalizations for his behavior, most centering on the claim that it is legitimate to employ American power to encourage other countries to root out “corruption,” or to assist American authorities in investigating crimes subject to American jurisdiction. One can admit the principle without conceding that it has any application in the present case. The contention that Mr. Trump was concerned to any degree about promoting the rule of law in Ukraine is risible. As is the suggestion that his contact with President Zelensky had anything to do with a legitimate U.S. law enforcement effort. The most obvious tell on both points is Rudy Giuliani. A president fighting American crime or foreign corruption doesn’t send his private lawyer abroad to get dirt on political opponents. 
In short, the president used the powers of the presidency illegitimately to serve his private interests. The proffered rationale for his conduct is transparently pretextual. Worst of all, Trump’s pursuit of his private interests was directly contrary to long-established, congressionally ratified American foreign policy objectives and, indeed, endangered the security architecture of western and central Europe. The norms and immemorial understandings of American constitutionalism make clear that a president may not use the power of his office to request, induce, inveigle, coerce, or extort another country into doing things primarily to benefit the president’s electoral hopes. A president who does so is impeachable on that ground.
Learn R&B vocals from the masters in this extensive online course. Written by Gabrielle Goodman and Jeff Ramsey, who together bring years of experience performing and recording with legendary artists like Roberta Flack, Al Jarreau, Patrice Rushen, Maxwell, Whitney Houston, Diana Ross, and Mary J. Blige, this course is the perfect introduction to the proper approach to singing R&B music. In this 12-week course, you'll explore the foundations of effective R&B singing, including phrasing, riffing, and shouting, in addition to the traditional vocal technique used for overall healthy singing.
The course begins with a history of R&B music from a vocal standpoint, including the use of falsetto by doo-wop artists such as the Platters. Since R&B is a style with a large improvisational component, you will also study improvisation techniques in this genre, ranging from the shouts of Little Richard to the riffs of Chaka Khan. You will discover how various scales, like the pentatonic and blues scales, are used in R&B to create a truly soulful sound. You will learn to properly utilize diaphragmatic breathing, range extension, vocal textures, and much more. This course covers the vocal techniques of the key singers in the R&B genre, including Stevie Wonder, Aretha Franklin, Ray Charles, Jackie Wilson, James Brown, Marvin Gaye, Curtis Mayfield, Michael Jackson, Beyoncé, and many more. The goal of the course is to enable you to sing in the R&B style with facility and freedom. You'll also gain a clear awareness of R&B phrasing, scale patterns, proper breathing, pitch accuracy, and clear rhythmic articulation. By the end of the course, you will be able to demonstrate solid vocal technique, as well as stylistic authenticity in the R&B genre.
By the end of the course you will be able to:
- Learn classic R&B repertoire, as well as the music of contemporary R&B artists
- Use vocal phrasing for the R&B style, employing syncopation and rhythmic articulation
- Understand improvisation techniques for R&B singing, including blues scales, blue notes, pentatonic scales, slurring, and belting
- Learn the history of R&B music as it relates to songs performed
- Employ basic vocal techniques, including breathing, placement, range extension, and vocal textures
Lesson 1: R&B History and Performance Techniques
- Defining R&B
- Introduction of Basic Warm-Ups and Basic Vocal Technique
- Early R&B Music and Its Relationship to The Blues
- A Brief Discussion of Key Suitability for Your Blues
Lesson 2: Early Forms of R&B
- Ribcage Expansion
- The Blues Form and Scale as It Relates to R&B
- Improvisational Styles of Early R&B Artists
- R&B Improvisation Showcase: Ray Charles
- R&B Improvisation Showcase: Jackie Wilson
- R&B Improvisation Showcase: Sam & Dave
- Doo Wop Songs and Groups: The Use of Legato and Falsetto
- Doo Wop Showcase: The Platters
Lesson 3: The Motown Era
- Registers, Releasing the Jaw, and Developing Range
- Vocal Styles Used in the Motown Era
- Informal Diction and the Use of Colloquial Expressions
Lesson 4: Aretha Franklin, James Brown, and Soul Music
- Registers, Voice Classifications, and the Importance of the Chest Voice in R&B
- Characteristics of R&B and the Similarities in Gospel Singing
- Vocal Improvisation Showcase: Aretha Franklin
- Vocal Style Showcase: James Brown
- The Connection Between Gospel Music of the 1960s and Its Relationship to R&B
Lesson 5: R&B Music of the Early 1970s: Soul and Funk
- Discovering Forward Placement
- Falsetto Singing Spotlight: Marvin Gaye and Curtis Mayfield
- Falsetto Singers, the Sweet Sopranos, and Discovering Forward Placement
- Performance and Expression
- Singing Over the Ending Vamp While Using Light Textures
Lesson 6: Transforming Pop and Folk Songs into R&B
- Using Crescendo
- Vocal Dynamics
- The Groove and the Importance of Accented Beats and Syllables
- Examples of Song Transformation into R&B
Lesson 7: R&B Music Mid 70s to 80s
- Soul/Disco and Soul/Funk Groove Patterns
- Nuances and Affectations
- Vowel Modification and Sung Speech
- Stevie Wonder
Lesson 8: Gospel Influences in R&B/Soul and Vice Versa
- Summary of 70s-80s Gospel Artists
- Glissando and Its Usage in Gospel and R&B
- R&B Artists Who Grew Up Singing in Church
Lesson 9: Prince, Michael Jackson, and Adult Contemporary R&B
- The Music of Prince
- Vocal Textures Used by Michael Jackson
- Covered Tone
- Adult Contemporary/Quiet Storm
Lesson 10: New Jack Swing and Pop/R&B
- New Jack Swing
- Whitney Houston
- The Commercialization of Melisma in Popular Music
Lesson 11: Hip Hop Soul
- Hip Hop Soul
- Crossover Commercial Success of R&B to Mainstream
Lesson 12: R&B Mid/Late 90s – Current
- Summary of Neo Soul Artists
- Putting It into Motion
Prerequisites and Course-Specific Requirements
Students must have basic vocal technique, including the ability to match pitches, and possess a strong sense of rhythm. In order to strengthen the skills listed above, we recommend that students take Voice Technique 101.
- Vocal Improvisation: Techniques in Jazz, R&B, and Gospel Improvisation by Gabrielle Goodman, Goodness Music
- A basic audio recording tool that will allow you to record yourself and save the recording in MP3 format. You will have a tool to use for this purpose inside the learning environment. Alternatively, you can use software like Audacity (PC) or GarageBand (Mac), or a Digital Audio Workstation (DAW).
- A printer is recommended, so that you can print out music examples used in the course.
After enrolling, please check the Getting Started section of your course for potential deals on required materials. Our Student Deals page also features several discounts you can take advantage of as a current student. Please contact [email protected] for any questions.
General Course Requirements
Below are the minimum requirements to access the course environment and participate in live chats. Please make sure to also check the Prerequisites and Course-Specific Requirements section above, and ensure your computer meets or exceeds the minimum system requirements for all software needed for your course.
- Latest version of Google Chrome
- Zoom meeting software
- Speakers or headphones
- External or internal microphone
- Broadband Internet connection
Jeff Ramsey (1967-2020) was a professor at Berklee College of Music. Jeff toured and recorded the world over for such artists as Lalah Hathaway, Al Jarreau, Patrice Rushen, Maxwell, Celine Dion, Whitney Houston, Diana Ross, and Barbra Streisand, just to name a few. He did commercials in radio and television for Burger King, Levi's 501, Mountain Dew, and Michelob. Jeff did solo work for many different projects, including famed Tower of Power bassist Rocco Prestia's CD "Everybody On The Bus," and James Day's EP "Remember When" and CD "Better Days," with the latter two garnering the UK hit "Don't Waste The Pretty." Jeff lent his voice to James Day's second CD project, "Natural Things," released May 11, 2009.
Jeff's vocals can also be heard on a project released in 2009 on Plastic City entitled DKMA Presents Andrastea - The Ascent, as well as a compilation CD released by Plastic City entitled Deep Train 6 - Dedication. Jeff was also an educator and shared his experiences on the road and in the studio with his students at Berklee College, where he was a professor in the Voice Department. He sang, did session dates, and performed around the New England area, and released his solo debut CD, My Best, in the fall of 2009. For more info, visit www.jefframseymusic.com.
Author & Instructor
Gabrielle Goodman is a professor in the Voice Department at Berklee College of Music. Well-versed in jazz, R&B, classical, and gospel, Gabrielle has performed with Chaka Khan, Al Jarreau, Nancy Wilson, and Roberta Flack, who calls Goodman "one of the finest singers around today." Her 1993 JMT/Polygram debut release Travelin' Light brought her international acclaim. Gabrielle won an ASCAP Songwriting Award for "You Can Make the Story Right," a song she co-wrote with Chaka Khan. Gabrielle has performed in both classical and jazz idioms with the Syracuse Symphony, the Baltimore Symphony, the Baltimore Opera, and the National Symphony. Her theatrical appearances include Maya Angelou's gospel/opera King and the Canadian production of Ain't Misbehavin', co-starring Dee Dee Bridgewater. In the early 2000s, Gabrielle toured the show Forever Swing with Michael Bublé. She has performed as a guest soloist with the Boston Pops on numerous occasions. Her television appearances include The Late Show with David Letterman, The Arsenio Hall Show, A&E Channel, the BBC, and the German TV show Talk and Swing. Gabrielle has self-produced and released three albums: Angel Eyes (2004), Song From The Book (2011), and Spiritual Tapestry (2014), which includes guests Patrice Rushen, Terri Lyne Carrington, Walter Beasley, and Armsted Christian. She is also the author of the book Vocal Improvisation (2009). Recent tours include performances in China, Bogotá, Colombia, and Russia.
When taken for credit, R&B Vocals can be applied towards these associated programs:
The cockpit of an aircraft contains flight instruments on an instrument panel, and the controls that enable the pilot to fly the aircraft. In most airliners, a door separates the cockpit from the aircraft cabin. After the September 11, 2001 attacks, all major airlines fortified their cockpits against access by hijackers. The word cockpit seems to have been used as a nautical term in the 17th century, without reference to cock fighting. It referred to an area in the rear of a ship where the cockswain's station was located, the cockswain being the pilot of a smaller "boat" that could be dispatched from the ship to board another ship or to bring people ashore. The word "cockswain" in turn derives from the Old English terms for "boat-servant" (coque is the French word for "shell", and swain was Old English for boy or servant). The midshipmen and master's mates were later berthed in the cockpit, and it served as the action station for the ship's surgeon and his mates during battle. Thus by the 18th century, "cockpit" had come to designate an area in the rear lower deck of a warship where the wounded were taken. The same term later came to designate the place from which a sailing vessel is steered, because it is also located in the rear, and is often in a well or "pit". However, a convergent etymology does involve reference to cock fighting. According to the Barnhart Concise Dictionary of Etymology, the buildings in London where the king's cabinet worked (the Treasury and the Privy Council) were called the "Cockpit" because they were built on the site of a theater called The Cockpit (torn down in 1635), which itself was built in the place where a "cockpit" for cock-fighting had once stood prior to the 1580s. Thus the word cockpit came to mean a control center. The original meaning of "cockpit", first attested in the 1580s, is "a pit for fighting cocks", referring to the place where cockfights were held. This meaning no doubt influenced both lines of evolution of the term, since a cockpit in this sense was a tight enclosure where a great deal of stress or tension would occur. From about 1935, cockpit came to be used informally to refer to the driver's cabin, especially in high-performance cars, and this is the official terminology used to describe the compartment that the driver occupies in a Formula One car. In an airliner, the cockpit is usually referred to as the flight deck, the term deriving from its use by the RAF for the separate, upper platform in large flying boats where the pilot and co-pilot sat. In the USA and many other countries, however, the term cockpit is also used for airliners. The first airplane with an enclosed cabin appeared in 1912 on the Avro Type F; however, during the early 1920s there were many passenger aircraft in which the crew remained open to the air while the passengers sat in a cabin. Military biplanes and the first single-engined fighters and attack aircraft also had open cockpits, some as late as the Second World War, when enclosed cockpits became the norm. The largest impediment to having closed cabins was the material used to make the windows. Prior to Perspex becoming available in 1933, windows were either safety glass, which was heavy, or cellulose nitrate (i.e., guncotton), which yellowed quickly and was extremely flammable. In the mid-1920s many aircraft manufacturers began using enclosed cockpits for the first time.
Early airplanes with closed cockpits include the 1924 Fokker F.VII, the 1926 German Junkers W 34 transport, the 1926 Ford Trimotor, the 1927 Lockheed Vega, the Spirit of St. Louis and the passenger aircraft manufactured by the Douglas and Boeing companies during the mid-1930s. Open-cockpit airplanes were almost extinct by the mid-1950s, with the exception of training planes, crop-dusters and homebuilt aircraft designs. Cockpit windows may be equipped with a sun shield. Most cockpits have windows that can be opened when the aircraft is on the ground. Nearly all glass windows in large aircraft have an anti-reflective coating, and an internal heating element to melt ice. Smaller aircraft may be equipped with a transparent aircraft canopy. In most cockpits the pilot's control column or joystick is located centrally (centre stick), although in some military fast jets the side-stick is located on the right hand side. In some commercial airliners (e.g. Airbus, which features the glass cockpit concept) both pilots use a side-stick located on the outboard side, so the captain's side-stick is on the left and the first officer's on the right. Except for some helicopters, the right seat in the cockpit of an aircraft is the seat used by the co-pilot. The captain or pilot in command sits in the left seat, so that they can operate the throttles and other pedestal instruments with their right hand. The tradition has been maintained to this day, with the co-pilot on the right hand side. The layout of the cockpit, especially in the military fast jet, has undergone standardisation, both within and between aircraft, manufacturers and even nations. An important development was the "Basic Six" pattern, later the "Basic T", developed from 1937 onwards by the Royal Air Force, designed to optimise pilot instrument scanning. Ergonomics and human factors concerns are important in the design of modern cockpits. The layout and function of cockpit displays and controls are designed to increase pilot situation awareness without causing information overload. In the past, many cockpits, especially in fighter aircraft, limited the size of the pilots that could fit into them. Now, cockpits are being designed to accommodate from the 1st percentile female physical size to the 99th percentile male size. In the design of the cockpit in a military fast jet, the traditional "knobs and dials" associated with the cockpit are mainly absent. Instrument panels are now almost wholly replaced by electronic displays, which are themselves often re-configurable to save space. While some hard-wired dedicated switches must still be used for reasons of integrity and safety, many traditional controls are replaced by multi-function re-configurable controls or so-called "soft keys". Controls are incorporated onto the stick and throttle to enable the pilot to maintain a head-up and eyes-out position – the Hands On Throttle And Stick (HOTAS) concept. These controls may then be further augmented by control media such as head pointing with a Helmet Mounted Sighting System or Direct Voice Input (DVI). Advances in auditory displays allow for Direct Voice Output of aircraft status information and for the spatial localisation of warning sounds for improved monitoring of aircraft systems. The layout of control panels in modern airliners has become largely unified across the industry. The majority of systems-related controls (such as electrical, fuel, hydraulics and pressurization), for example, are usually located in the ceiling on an overhead panel.
Radios are generally placed on a panel between the pilot's seats known as the pedestal. Automatic flight controls such as the autopilot are usually placed just below the windscreen and above the main instrument panel on the glareshield. A central concept in the design of the cockpit is the Design Eye Position or "DEP", from which point all displays should be visible. Most modern cockpits will also include some kind of integrated warning system. In the modern electronic cockpit, the electronic flight instruments usually regarded as essential are MFD, PFD, ND, EICAS, FMS/CDU and back-up instruments. A Mode control panel, usually a long narrow panel located centrally in front of the pilot, may be used to control heading, speed, altitude, vertical speed, vertical navigation and lateral navigation. It may also be used to engage or disengage both the autopilot and the autothrottle. The panel as an area is usually referred to as the "glareshield panel". MCP is a Boeing designation (that has been informally adopted as a generic name for the unit/panel) for a unit that allows for the selection and parameter setting of the different autoflight functions, the same unit on an Airbus aircraft is referred to as the FCU (Flight Control unit). The primary flight display is usually located in a prominent position, either centrally or on either side of the cockpit. It will in most cases include a digitized presentation of the attitude indicator, air speed and altitude indicators (usually as a tape display) and the vertical speed indicator. It will in many cases include some form of heading indicator and ILS/VOR deviation indicators. In many cases an indicator of the engaged and armed autoflight system modes will be present along with some form of indication of the selected values for altitude, speed, vertical speed and heading. It may be pilot selectable to swap with the ND. A navigation display, which may be adjacent to the PFD, shows the route and information on the next waypoint, wind speed and wind direction. It may be pilot selectable to swap with the PFD. The Engine Indication and Crew Alerting System (used for Boeing) or Electronic Centralized Aircraft Monitor (for Airbus) will allow the pilot to monitor the following information: values for N1, N2 and N3, fuel temperature, fuel flow, the electrical system, cockpit or cabin temperature and pressure, control surfaces and so on. The pilot may select display of information by means of button press. The flight management system/control and/or display unit may be used by the pilot to enter and check for the following information: flight plan, speed control, navigation control, and so on. In a less prominent part of the cockpit, in case of failure of the other instruments, there will be a battery-powered integrated standby instrument system along with a magnetic compass, showing essential flight information such as speed, altitude, attitude and heading. In the U.S. the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) have researched the ergonomic aspects of cockpit design and have conducted investigations of airline industry accidents. Cockpit design disciplines include Cognitive science, Neuroscience, Human–computer interaction, Human Factors Engineering, Anthropometry and Ergonomics. Aircraft designs have adopted the fully digital "glass cockpit". In such designs, instruments and gauges, including navigational map displays, use a user interface markup language known as ARINC 661. 
This standard defines the interface between an independent cockpit display system, generally produced by a single manufacturer, and the avionics equipment and user applications it is required to support, by means of displays and controls, often made by different manufacturers. The separation between the overall display system, and the applications driving it, allows for specialization and independence.
Did you know that the average user has around 90 online accounts to manage? There are many different ways an individual can gain access to an account, but not all authentication methods are created equal. While password-based login systems are the most popular choice, they're far from the best. Here at Swoop, we think traditional login tools are huge liabilities for websites, both because they're insecure and because they provide a terrible user experience. That's why we've developed a more secure, token-based authentication service that can reduce the reliance on weak password systems. Take control of your login process and provide a better experience for your users!
If you're a web developer or system designer, you'll need to implement some form of token-based authentication on your site. Modern users have come to expect the more streamlined experience that tokens provide. Plus, they offer a level of security that simpler systems just can't beat. That's why we've put together this helpful guide to token-based authentication. We'll walk through some of the most common questions about this process and how it can improve your site:
- What is a token?
- What is token-based authentication?
- Why is token-based authentication better than a password?
- What are the different types of token-based authentication?
- How do I get started with token-based authentication?
Token-based authentication methods can dramatically improve online usability and security by providing a more streamlined and highly secure process. As a developer yourself, it's your responsibility to provide users with the best and most secure experience possible. Use the list above to jump straight to what you need, or read along from the top. Let's get started.
1. What is a token?
A token is a highly secure format used to transmit sensitive information between two parties in a compact and self-contained manner. Tokens are often used to strengthen authentication processes, whether that be within a website or application. A typical token consists of three key elements:
- A header that defines the type of token and the algorithm used.
- A payload that contains information about the user and other metadata.
- A signature that verifies the identity of the sender and the authenticity of the message.
When sensitive data is transmitted via token, users can rest assured knowing their private information is treated as such. This is crucial for any sort of payment information, medical data, or login credentials.
2. What is token-based authentication?
Token-based authentication is a web authentication protocol that allows users to verify their identity a single time and receive a uniquely generated, encrypted token in exchange. For a designated period of time, this token is how users access protected pages or resources instead of having to re-enter their login credentials. Here's how the token-based authentication process works:
1. Login Request: The user enters their username and password.
2. Login Verification & Token Generation: The server verifies that the login information is correct and generates a secure, signed token for that user at that particular time.
3. Token Transmission: The token is sent back to the user's browser and stored there.
4. Token Verification: When the user needs to access something new on the server, the system decodes and verifies the attached token. A match allows the user to proceed.
5. Token Deletion: Once the user logs out of the server, the token is destroyed.
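To make the header/payload/signature structure and the sign-and-verify steps above concrete, here is a minimal sketch of issuing and checking a token on a Node.js backend. It assumes the widely used jsonwebtoken npm package; the payload fields, the secret, and the one-hour lifetime are illustrative choices for this guide, not a prescription (and not a description of Swoop's own service).

```typescript
import jwt from "jsonwebtoken";

// Illustrative secret; in practice this comes from secure configuration.
const SIGNING_SECRET = process.env.TOKEN_SECRET ?? "change-me";

// Step 2 above: after the credentials check out, issue a signed token.
// The payload carries user info and metadata; the header and signature
// are added by the library (HS256 here), giving the three-part structure.
function issueToken(userId: string): string {
  return jwt.sign({ sub: userId, role: "member" }, SIGNING_SECRET, {
    algorithm: "HS256",
    expiresIn: "1h", // the token is only a temporary stand-in for credentials
  });
}

// Step 4 above: on later requests, decode and verify the attached token
// instead of asking for the password again. Tampered or expired tokens throw.
function verifyToken(token: string): string | jwt.JwtPayload {
  return jwt.verify(token, SIGNING_SECRET);
}
```

In this sketch the password never travels with each request; only the short-lived, signed token does, and the server can reject it the moment the signature or expiry fails to check out.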
The token proves you've been allowed in and allows you to view other resources and make further requests. This is an improvement over traditional processes that require users to verify their identities at every step. It enables the website to add more layers of security without forcing you to prove you are who you are time and time again. Thus, the process improves user experience and security simultaneously.
3. Why is token-based authentication better than password logins?
Traditional passwords have one huge weakness: they're human-generated. Human-made passwords tend to be pretty weak and easy to crack. We've all reused old passwords again and again because they're easy to remember. Not only that, but a password-based login system requires users to continuously enter and re-enter their credentials, essentially wasting valuable time. The traditional password-based login process looks something like this:
- The user arrives at the target domain.
- They enter their login credentials.
- The server verifies the match and lets them in.
- The user is authenticated to access that domain.
To access anything hosted on a different server, domain, or subdomain, users then need to repeat the entire process all over again. Common examples include viewing or editing account details and beginning an eCommerce checkout.
So what's wrong with this process? A few things. For one, it's unintuitive and wastes the user's time. Who wants to enter their credentials over and over again in order to complete multiple tasks? Every user has better things to do than waste their time on a repetitive and unnecessary process. Plus, the login system above is likely already vulnerable because the password is weak to begin with! In this simple authentication setup, each login step is a weak link that's open to attack. So how does token-based authentication offer new solutions?
Token-based authentication is more secure. Token-based authentication, on the other hand, uses ultra-secure codes to prove that you've already been authenticated. They're specific to the user, the particular log-in session, and the security algorithm that the system uses. In other words, the server can identify when a token's been tampered with at any step and can block access. Tokens essentially act as an extra layer of security and serve as a temporary stand-in for the user's password. Most importantly, tokens are machine-generated. Encrypted, machine-generated code is significantly more secure than any password you might create yourself. For example, our tools here at Swoop create digital tokens with 2,048-bit encryption that would take even the best hackers billions of years to crack.
Token-based authentication offers a streamlined process. Rather than having to re-verify your identity every time you arrive at a new page, tokens are temporarily stored in the browser, providing you access to information on the domain for a specified time period. This allows you to jump from server to server or subdomain to subdomain without constantly being slowed by the authentication process, as the sketch below illustrates. For users, this allows you to easily navigate a website and efficiently find the resources you're looking for. For developers, it keeps individuals on your site for longer by decreasing the risk that users become aggravated and click out. When the user is finished with their browsing session, they simply have to log out and the stored token is destroyed forever. This way, users don't risk leaving their accounts open to attack.
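To show that streamlined flow in practice, here is a hedged sketch of a server that checks the stored token on every protected route instead of asking for credentials again. It assumes a Node.js/Express backend and the same jsonwebtoken package as the earlier sketch; the route names and the Bearer-header convention are illustrative assumptions, not a required setup.

```typescript
import express, { NextFunction, Request, Response } from "express";
import jwt from "jsonwebtoken";

const app = express();
const SIGNING_SECRET = process.env.TOKEN_SECRET ?? "change-me";

// Middleware: every protected route verifies the token attached to the
// request (sent here as a Bearer header) instead of re-prompting for a login.
function requireToken(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  try {
    (req as Request & { user?: unknown }).user = jwt.verify(token, SIGNING_SECRET);
    next(); // token checks out: let the request through
  } catch {
    res.status(401).json({ error: "invalid or expired token" }); // blocked
  }
}

// Hypothetical routes: the same token grants access to each of them, so the
// user authenticates once per session rather than once per page or subdomain.
app.get("/account", requireToken, (_req, res) => res.json({ page: "account" }));
app.get("/orders", requireToken, (_req, res) => res.json({ page: "orders" }));

app.listen(3000);
```

The same middleware can sit in front of as many routes or subdomains as you like, which is exactly the "verify once, browse freely" behavior described above.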
4. What are the different types of token-based authentication?

You probably have experience using token-based authentication methods, whether you realized it at the time or not. Here are a few common, everyday examples of token-based authentication you might see in the real world:

- Accessing an account from a one-time text or email link.
- Using fingerprint or facial scanning to unlock a smartphone.
- Logging into a new web page using your Facebook login credentials.

Although unique usernames and passwords remain one of the most widely used authentication methods for websites and applications, token-based alternatives are quickly becoming the norm. Keep an eye out for the times you might be using a password alternative to access restricted resources! There are many different ways a user can verify their identity using token-based authentication methods. How can you choose the best one for your website or web application? Let's walk through some of the top user authentication processes!

Swoop's secure token-based login system can help you eliminate passwords from your site once and for all. We have two different initial mechanisms for authentication: Magic Link™ and Magic Message™. Here's a brief overview of each.

Magic Link™ uses an automatically generated link to provide highly secure, password-free authentication to users. It's the easiest way to improve security on your site and lose passwords in one simple step. The process looks like this:

- Swoop's Magic Link™ is sent to a user's email to verify their identity.
- The user clicks the link from their email.
- The user's credentials are verified and a token is created.
- The user is authenticated.

Our Magic Link™ authentication system is becoming increasingly common, but the downside is that it requires a user to switch between the authentication service and a mail client to find and follow the link. We also offer an even better mechanism, Magic Message™, which performs a similar function in only two simple clicks. Here's how Magic Message™ works:

- The user is redirected to the Swoop service via the OAuth 2.0 protocol for authentication.
- From a browser window, the user pushes the "Send Magic Message™" button. The button activates a mailto link, which generates a pre-written email for the user to send.
- The user sends the email. This is where the magic happens: once the email is sent, the outgoing email server generates and embeds a 1,024/2,048-bit, fully encrypted digital key into the header of the email. Swoop's authentication server follows the public key cryptographic procedure to decrypt this key. Each email sent receives a unique key for that message. The level of security for these encrypted keys is far beyond that of traditional passwords.
- The user is logged into their account. When the key decrypts and passes all layers of verification, the Swoop authentication server directs the website to open the user's account and begin a session. This all takes place in a matter of seconds and makes for an extremely streamlined user experience.

Basically, this gives you the option to replace passwords entirely. Swoop's token-based authentication is an ideal solution for websites, web apps, or other online resources that need to be protected.

Biometric authentication

Biometric authentication techniques use a concrete, unchangeable biological characteristic in place of a machine-generated token. This variation of token-based authentication has become more popular in recent years, but it still has a long way to go.
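Before digging further into biometrics, it may help to make the email-based flow above concrete. The sketch below is a rough, vendor-neutral illustration of a magic-link style login: the server mails a short-lived, single-use token, and the link's endpoint exchanges it for a session. The endpoint URL, in-memory stores, and 15-minute expiry are illustrative assumptions, not Swoop's actual implementation.

```python
# Vendor-neutral sketch of an email "magic link" login flow. The endpoint
# path, in-memory stores, and 15-minute expiry are illustrative assumptions.
import secrets
import time

PENDING = {}          # one-time login tokens -> (email, expiry timestamp)
SESSIONS = {}         # session ids -> email
TOKEN_TTL_SECONDS = 15 * 60

def send_magic_link(email: str) -> str:
    """Step 1: generate a single-use token and email the login link."""
    token = secrets.token_urlsafe(32)                 # unguessable, machine-generated
    PENDING[token] = (email, time.time() + TOKEN_TTL_SECONDS)
    link = f"https://example.com/auth/verify?token={token}"   # hypothetical URL
    print(f"(pretend we emailed {email}: {link})")
    return link

def verify_magic_link(token: str) -> str | None:
    """Step 2: the link's endpoint swaps a valid, unexpired token for a session."""
    record = PENDING.pop(token, None)                 # pop => the token is single-use
    if record is None:
        return None
    email, expires_at = record
    if time.time() > expires_at:
        return None
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = email
    return session_id

link = send_magic_link("reader@example.com")
session = verify_magic_link(link.split("token=")[1])
print("logged in as:", SESSIONS.get(session))
```

The single-use and short-expiry properties are the design choices doing the security work here: even if the email is read later, the link it contains is no longer redeemable.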
Biometric authentication processes can be fooled, so they're not currently as secure as an encrypted digital key. However, they're still a great security option for hardware, like phones and computers, since the device doesn't necessarily need internet access in order to verify a match. Biometric tokenization can take the authentication process a step further by storing users' sensitive biological data in the form of unique and randomized tokens. This makes it significantly harder for cybercriminals to gain access to critical information: because the data is divided and stored in a variety of locations, it's less likely that a hacker will be able to fake the biometric characteristics required for authentication.

Social sign-in authentication

This is a common authentication technique that relies on tokens. You've likely seen this before when a website gives you the option to log in via an existing social media account, or when an application wants to post something to your social profile on your behalf. This process is also called open authorization. Basically, it works by creating a uniquely generated token that only the website and the social media platform can decode; the token serves as an intermediary. Since the token acts as a secure stand-in for a password, this authentication option is useful when you don't necessarily want to share your login credentials with multiple apps or sites.

5. How do I get started with token-based authentication?

Previously, implementing a token-based security system required a lot of work from a developer or team of developers. You can still use an in-house team or tech consultant to custom-develop a token system, but this route can be costly and time-consuming. Easier options now exist that let any site or web app use ultra-secure tokens to authenticate users. For instance, Swoop can be set up on just about any site in as little as 5 minutes. Here's how the basic implementation process works:

- Integrate with a plugin/app (such as a WordPress plugin) or write a custom integration.
- Specify the login link that users will click to initiate the token-generation process.
- Specify the endpoint link that the authentication tool will redirect users to once their token is verified.
- Set up a trigger for users to begin the process, like placing a "Log in with Swoop" button.

It's as simple as that! Thanks to carefully crafted authentication software like Swoop, it's never been easier to implement passwordless authentication alternatives across your website. Digital tokens are the perfect way to reduce your website's reliance on passwords. It's easy to take token-based authentication for granted, but for online users it's one of the key features of a secure and intuitive online experience. So if you're looking to make drastic user improvements to your website, be sure to explore your options and get to know some of the top token-based password alternatives available. Don't forget to keep educating yourself, too! Here are a few related resources we suggest checking out:

- A Modern Password: 6 Top Tips for A Secure Login Process. One of the many benefits of using tokens is that it keeps your users' passwords protected. Learn more about how you can improve your website's password security with our ultimate guide.
- Top 3 Password Alternatives for Better Website Security. If security is your main objective, you might want to consider using password alternatives. Read more to learn why passwords aren't the best at keeping information secure.
- Security Authentication vs. Authorization | What’s the Difference? These two terms are commonly confused, although the differences are quite important. Check out our guide that compares authentication and authorization in web practices!
Beyond the pale on Titanium Dioxide Industry fights sunscreen cancer warning label Industry lobbyists are spending millions of euros to influence an upcoming EU decision on labelling titanium dioxide – found in everyday products like sunscreen – a “suspected carcinogen”. The lobbying is led by an unregistered trade association and a public relations consultancy; nonetheless, they appear to have the ear of member states and the European Commission. *** Read our 25 September 2018 update on the titanium dioxide lobby battle *** Corporate lobbyists are waging a fierce battle to prevent the European Union labelling titanium dioxide – a chemical ubiquitous in many everyday items including sunscreen – a “suspected carcinogen”. In the face of the lobbying, EU member states' willingness to instate warning labels on products like sunscreen and paint is weakening. Among those working against the labelling are the Titanium Dioxide Manufacturers’ Association (TDMA), which is going under the radar by avoiding the lobby transparency register, and its lobby consultancy Fleishman-Hillard. Titanium dioxide is a chemical whitener which can be found in sunscreen, toothpaste, foodstuffs like candies, as well as plastics, paints, and many other products. In 2006 the World Health Organisation’s International Agency for Research on Cancer (IARC) declared titanium dioxide a “possible carcinogen for humans” after tests on animals, with even larger risks in its nano form. Nano-materials are extremely tiny versions of existing chemicals and the concern is that they can accumulate in the body and enter human cell membranes and affect their function. In late September 2018 the 28 EU member states via the Commission’s chemicals regulatory committee (set up under Europe’s main law to regulate chemicals, REACH – the Registration, Evaluation, Authorisation and Restriction of Chemicals) will take a decision about whether all forms of titanium dioxide (including the nano form) should be classified as a “suspected carcinogen” when inhaled, and be labelled as such on products where it is used. Despite demanding action on nano-chemicals a few years ago, the EU’s member states appear to be increasingly lukewarm when it comes to regulating titanium dioxide today, specifically the liquid version found in products like sunscreens, paints and cosmetics that can be sprayed and hence inhaled. What can explain this about-turn? Two words: corporate lobbying. Officials have reported serious industry lobbying as the classification process on titanium dioxide has proceeded. In a May 2018 Politico article, an EU official was reported as saying that “well-organized pressure” has come from industry, with lobbyists apparently asking for meetings with authorities in every country. The official continued: “We always have lobbying, but it’s particularly heavy for this particular substance”. Meanwhile Le Monde reported that when an official at a member state environment ministry agreed to meet with the industry to discuss titanium dioxide, no less than 24 people arrived at their office! The key lobby group is the Titanium Dioxide Manufacturers’ Association (TDMA) which has embarked on a major influencing operation. 
TDMA’s members are titanium dioxide producers including: Cinkarna Celje, a major company from Slovenia; Cristal, the world’s second biggest producer of titanium dioxide, with a presence in UK, France, and Belgium; Evonik, a German chemicals company declaring annual EU lobby spend of €1,750,000 - €1,999,999; Venator, a global company based in the US; and others from Poland and the Czech Republic. On 15 May 2018 a letter and attachments from TDMA were sent to member states’ officials; these have been seen by Corporate Europe Observatory and give insights into the lobby group's positioning. The TDMA documents repeatedly demand that the classification of titanium dioxide should be “put on hold” until further information is considered. The letter also says that TDMA has set up “a serious 14m euro science programme which will build the scientific basis to help discuss and resolve the many issues that present themselves in the current, unique situation”. It is clear that industry is spending €14 million on funding scientific studies into titanium dioxide which will then be promoted to decision-makers. But these studies have not been set up at the behest of the EU decision-makers, but are instead designed by industry, who presumably will also decide what gets published and what doesn’t (including whether the raw data behind the studies will be made public). Industry-funded science research is a classic lobbying tactic, seen on other files. It seems clear that the titanium dioxide industry is trying to lead the classification process and this research programme can be seen as part of their influencing strategy. TDMA’s strap-line might be “for a brighter future” but it does not welcome the spotlight of transparency. Remarkably, TDMA is not in the EU lobby transparency register, even though it appears to be an active lobbyist in Brussels right now. TDMA’s chairman is Robert Bird, who is apparently from member company Venator, and his name and signature were on the 15 May letter sent to member state officials. Yet Venator does not appear in the EU lobby register either; neither do other TDMA members Cinkarna Celje and Cristal. If TDMA is not in the register, then in theory at least, it cannot sit on any Commission expert groups and it cannot meet with Commissioners or their Cabinets. But that still leaves a major lobbying loophole as thousands of Commission officials fall outside these rules, as well as MEPs and of course member state officials. For example a source told Corporate Europe Observatory that TDMA was present at an April 2018 REACH sub-group meeting hosted by Commission officials to hear "exchanges" on the proposed titanium dioxide classification. Also present were 20+ officials representing member states and 15+ other industry representatives (including the chemical industry lobby group CEFIC and Brussels’ biggest lobbyist BusinessEurope). No NGOs were present. TDMA has also been very active in lobbying CARACAL, the key Commission expert group on REACH. The lobby group has submitted several position statements prior to specific meetings discussing titanium dioxide and even participated in, and spoke at, the 12 June 2018 meeting of the CARACAL. This falls outside the spirit of the Commission’s transparency rules. Vice President Frans Timmermans recently claimed that “Being on the Transparency Register must be a pre-condition for lobbyists to get access to lawmakers. 
The Commission has applied this principle for almost four years, and it works." Yet the unregistered lobby group TDMA seems to have no problem in accessing Commission officials and can even participate in Commission-organised meetings and expert groups. Additionally, it seems likely that its close relationship with CEFIC (see below) is also useful when it comes to accessing parts of the regulatory process. TDMA is not acting alone. In 2017 it paid Brussels' biggest lobby firm Fleishman-Hillard €400,000-€499,999 a year for lobbying services. According to LobbyFacts, Fleishman-Hillard has represented the sector lobby group for at least one year, so plenty of time for it to ensure that TDMA (which today is its second-most lucrative client) is part of the EU lobby register in its own right. Fleishman-Hillard is actively involved in directly lobbying officials on titanium dioxide on behalf of TDMA. When TDMA wrote to all member states on 15 May 2018 (as outlined above), the letter and attachments were emailed by a lobbyist from Fleishman-Hillard. The lobbyist in question, Peter Holdorf, previously worked as an assistant at Denmark's Permanent Representation to the EU in the Environmental Section, a handy background for lobbying member state officials working on EU matters. Fleishman-Hillard's website lists at least one other of its lobbyists, Cillian Totterdell, as working with TDMA.*

It's health and the environment, stupid

It is a well-known lobbyists' tactic to try to reframe an issue to benefit industry players, for example by referring to potential job losses and dire economic impacts. Fleishman-Hillard have done so with issues like ePrivacy. With titanium dioxide, they and fellow industry lobbyists are reframing chemicals regulation away from the core topic of health and environmental protection, using arguments about the economy and jobs to try to persuade member state officials. This is despite the fact that the EU has a clear responsibility to ensure that such concerns do not take precedence over the protection of human health and the environment when addressing potentially harmful chemicals. Take one TDMA document which argues that classifying titanium dioxide as a suspected carcinogen would "affect the jobs of millions of workers in Europe and beyond, in a wide range of industry sectors from paper, plastics, paints, cosmetics and automotive. It would threaten the billions of euros of value added to the EEA, across the industries using [titanium dioxide] in their products." The original version of this document originated from the computer of Aaron McLoughlin, who heads up CEFIC's public affairs team and who used to work at Fleishman-Hillard. A four-page briefing (the original version of which originated from a Fleishman-Hillard computer) explains more about TDMA's economic arguments. It's worth noting that the dire economic figures quoted are from a report commissioned by the Titanium Dioxide Industry Consortium (TDIC). The report's authors based their analysis "to a large degree on information collected from numerous actors along the TiO2 supply chain", so from industry with a strong interest in opposing classification. TDIC is a member of the EU lobby register and was set up by members of TDMA.
Meanwhile, an attachment sent on behalf of TDMA to member states by Fleishman-Hillard's Holdorf in May 2018 claimed that classification of titanium dioxide as a carcinogen "would be harmful to the EU's circular economy and burdensome to Member States" because it would lead to more waste being classified as hazardous, although this is disputed by others. An industry coalition of users of titanium dioxide has demanded an impact assessment of the classification proposal under the auspices of Better Regulation. Such an analysis would give plenty of opportunities for industry to raise economic arguments. Separately, BusinessEurope's Dutch member, the employers' federation VNO-NCW, has proposed that "a social economic analysis" could be an instrument for member states and the Commission to decide which legal instrument should be used following a chemical classification. That would surely undermine the EU's duty to protect human health and the environment as the overriding objective. In the light of that duty, it is an especially insidious tactic for all these industry lobby groups to float such economic arguments. A member state official speaking to Corporate Europe Observatory said that industry lobbyists were often accompanied by a representative of a national chemical company, to try to press home this point. Fleishman-Hillard is no stranger to working for clients with a dodgy reputation: for years it has represented Monsanto and ExxonMobil among many others. Specifically in the chemicals sphere, it represents CEFIC (the European Chemical Industry Council), as well as a range of other associations which promote particular types of chemicals. These include Crop Life and the European Crop Protection Association representing pesticides producers, and an organisation referred to as BPA (PlasticsEurope). BPA or bisphenol A is a chemical used in many plastics; it is banned from babies' bottles and campaigners argue it should be restricted in far more products. TDMA refers to itself as a sector group of CEFIC and they share an office building, alongside another major lobby group, PlasticsEurope. CEFIC is one of the EU's biggest lobbyists, with a declared lobby spend of €12,300,000 for 2017 and the equivalent of 47 full-time lobbyists. According to its lobby register entry its overall budget is €41 million, with 150 staff. Additionally, CEFIC holds 27 European Parliament access passes and has held 70 meetings with high-level Commission staff. It sits on, or observes at, no less than 35 expert groups, including holding observer status at the key expert group on REACH, CARACAL; observers can be and are called to speak at such meetings. A look at the lobby materials sent to EU member state officials in May 2018 by TDMA via Fleishman-Hillard shows that the three policy attachments originated from the computer of a CEFIC public affairs manager. It seems clear that TDMA is close to CEFIC, but CEFIC does not mention TDMA or titanium dioxide in its lobby register entry. Fleishman-Hillard lists both CEFIC and TDMA as clients, implying that TDMA is a separately-contracted client. In that case, TDMA should be registered in the lobby register in its own right. Corporate Europe Observatory has now made a formal complaint about the absence of TDMA and Venator in the (voluntary) EU lobby register, and this case shows the need for urgent lobby reforms.
Only a legally-binding lobby register would prevent such unregistered lobbying from taking place, a move demanded by the European Parliament but rejected by the Commission. Until such a robust register is in place, it is imperative to close the current loopholes by ensuring that unregistered lobbyists cannot lobby any level of the EU institutions, including the Commission. The EU28 member states should, as a matter of urgency, adopt the EU lobby register, or equivalent national rules. Crucially, this case also raises the issue of privileged access and the extent of industry lobbying of the European institutions. All officials and decision-makers working on EU issues must become far more vigilant about the risk of corporate lobbying; far more aware of how industry lobbyists often massively out-number and out-spend those representing the public interest; and ultimately ensure that the public interest remains centre-stage in all policy-making. We are awaiting responses from the Commission’s Industry and Environment departments for full lists of lobbying meetings held on the classification of titanium dioxide, to try to obtain a full picture of lobbying underway on this dossier. There is an urgency that these requests are answered before the decision in September. Decision time for the EU28 It is expected that the REACH committee will make a decision on the classification of titanium dioxide at its meeting in September 2018. This is nine years on from when the European Parliament first called on the Commission to ensure REACH properly addressed nano-technology. In the intervening years, some member states have also demanded action and the Commission has been found wanting. But now that it is decision time, at least on the normal and nano forms of titanium dioxide including those found in sunscreens, will member states stand up to the corporate lobbyists? France has been at the forefront of demands to regulate titanium dioxide. Earlier this year it announced a ban on using titanium dioxide in food. France is also among the member states which have implemented their own nanotechnology traceability systems. In 2017 the French Government’s own scientific assessment found that titanium dioxide (including the nano form) is a carcinogen by inhalation. Of course TDMA opposed this. The European Chemicals Agency (ECHA) then ran a consultation on the possible classification of titanium dioxide which received over 500 responses, almost all of these from industry and most opposed to the classification of the chemical. Eventually ECHA opted to broadly support France but proposed classifying all forms of titanium dioxide as a “suspected carcinogen” (rather than an outright carcinogen) when inhaled. This represents a downgrade to France’s original proposal, but nonetheless such a classification would represent a step forward. In this category, titanium dioxide would need to be labelled, but could only be restricted in cosmetics, not other products. However the draft proposal by the Commission, in line with recommendations by Slovenia and the UK, has further diluted the original proposal by France and proposes that liquid forms of titanium dioxide be excluded from the classification. Campaigners argue that this is highly problematic as liquids containing titanium dioxide such as sunscreen or paint can still be inhaled when sprayed. Slovenia (which has an economic interest in the matter via the manufacturer Cinkarna Celje) and the UK are opposed to classification. 
Traditionally the UK is an ally of the chemical industry, so it is not a surprise that it has adopted some of the industry’s arguments, including on the circular economy as part of its positioning. A UK/ Slovenia paper on the matter queries “what would be the regulatory, environmental and socio-economic consequences of classification?” but the UK’s stance on this file seems particularly hypocritical. Michael Gove, the UK’s Environment Minister and a prominent Brexiteer, has tried to brand himself as a champion on green causes and has been keen to score points against the European Commission on some environmental issues in the public eye. In 2017 Gove made a speech in which he said: “it’s important that as we look at the history of EU policy, we recognise that environmental policy must also be insulated from capture by producer interests who put their selfish agenda ahead of the common good. And here the EU has been weak recently.” Gove goes on to reference the Commission’s Dieselgate debacle without any sense of irony when it comes to his own government’s position on titanium dioxide, or other chemicals issues such as glyphosate. More recently, other member states have also raised concerns about the proposed classification including Germany, Greece, Poland, and others. And the recent article in Le Monde makes clear that even the French Government might now be wavering. The final vote of the REACH committee on ECHA’s recommendation is expected in September 2018 and it is thought to be tight. Will a qualified majority of the elected governments of the 28 member states choose to listen to unregistered lobby groups and PR companies, and prioritise economic concerns over those of health, the environment, and science? That is now a real danger. All EU lobby transparency register and LobbyFacts references and data correct as of 6 July 2018. Read our update on the titanium dioxide lobby battle published 25 September 2018 * Update 20 July 2018, since publishing this article Mr Totterdell has contacted CEO to clarify that while he used to work on TDMA, the last work he did for them was over a year ago. Read previous Corporate Europe Observatory work on titanium dioxide: Food lobby fights labelling of nano ingredients – March 2014. Chemical conflicts – September 2014. For the Commission’s Scientific Committee on Consumer Safety (SCCS), which assessed the nano form of titanium dioxide, 70 per cent (14 members) of the committee’s members were found to have a conflict of interest, with only 30 per cent free of conflicts.
Though backgrounds, cultures, and histories are a part of our classrooms every day, September 15-October 15 is officially National Hispanic American Heritage Month. Teachers can use this opportunity to shine a light on the critical contributions, rich culture, and long history of Hispanic and Latino Americans. With these resources, students can read, listen, watch, and go off-screen for activities that will give them a window into the enormous impact that Hispanic and Latino people have had on our world.

Below, we've broken our list down into grade bands, and by activity type, so you can check out the resources most relevant to your classes first. But be sure to check out all of the resources, since there's plenty of overlap between grade levels!

Resources for Grades Pre-K to 2

The offline activities here will get kids making and coloring crafts from Hispanic and Latino cultures. They can also watch videos that highlight traditional music and illustrate how Spanish is a language spoken in many countries. Make sure to give kids space to talk about their own related knowledge and experiences.

Videos:

Editor's note for all of the YouTube videos listed in this article: Pressing play on the YouTube video will set third-party cookies controlled by Google if you are logged in to Chrome. See Google's cookie information for details.

- Use this YouTube video from Sesame Street to talk about different countries where Spanish is spoken. Let your bilingual kids show off some words and phrases in a language other than English.
- This Sesame Street video from PBS LearningMedia showcases some percussion instruments used in South America.
- Show this entire 20-minute YouTube video from 123 Andrés -- or just some clips -- to highlight different types of instruments and music. Students can sing along in Spanish, dance, and listen!
- In this YouTube video from Global Read Aloud week, the Mexican American author Yuyi Morales talks about her idea for her book Just a Minute: A Trickster Tale and Counting Book. Then she reads it aloud.

Hands-on activities:

- Get some simple materials together so students can make a Mexican cuff bracelet using these directions from SpanglishBaby.
- If you want an activity that's more open-ended, have kids use the directions from Kid World Citizen to make some Ecuadorian clay -- migajón -- and create something.
- These coloring pages from Education.com can open up a great discussion about each famous person's contributions.
- Explore ancient history by talking about -- and then making -- Taíno petroglyphs using this resource from Kid World Citizen.
- Work together as a class or in groups to make your own piñatas with guidance from HITN.

Resources for Grades 3-5

Third through fifth graders can watch musicians play traditional instruments, learn about prominent Latino and Hispanic people from the present day and the past, read stories about immigration experiences, and more.

Videos:

- In this YouTube video from Inka Gold, listen to and watch musicians from the group El Dorado play the music of the Andes, including melodies from the Peruvian pan flute. To weave in cross-curricular ideas, talk about why the different pipe lengths on the flute affect the sound, read about the Andes mountains, learn about the Indigenous people of Peru, or have students use figurative language to describe the music!
- This 25-minute YouTube video from the Lincoln Center features the Villalobos brothers and their friends playing music, singing, and dancing. Along the way, they talk about some of the instruments and songs. Afterward, have students write about their favorite segments.
- This YouTube video from the Disney Channel features various Disney stars who kids may recognize explaining and celebrating their Latino and Hispanic backgrounds. The video can be a great jumping-off point for students to share their own backgrounds in some way.

Texts:

- From Education.com, read and talk about Supreme Court Justice Sonia Sotomayor, the first Latina -- and woman of color -- to ever be appointed to the highest court in the United States. Students can then explore more about the judicial branch, research other prominent Latinas, or do some math about percentages of representation in the government in contrast with the U.S. population.
- Explore these poems from Central America, provided in both English and Spanish by Teaching Central America. Read them all together, or have students choose their favorite to illustrate, read aloud, or present in some other way. Of course, students can also write their own poems!
- Teaching Central America also has a host of other downloadable texts and teaching guides to explore -- you'll need to register with an email address for access.

Hands-on activities:

- Patterns within textiles are often a hallmark of a culture, and with this activity from Education.com, kids can explore that idea. First, they color an Incan pattern, and then they can create it themselves.
- The Nazca lines in Peru are sure to fascinate students, so have them learn about what we know, then create their own designs using simple materials and instructions from Spanish Mama! Then they can research more about the ancient people who made them, and make a case for what they think their purpose was.

Resources for Grades 6-8

Explore ancient civilizations and fine art, or learn about leaders like Cesar Chavez. Tackle the appropriation of the taco, or read literature from Latino authors. And you can explore lessons like this one about Maria Moreno at PBS LearningMedia, or these at Zinn Education Project, too.

Videos:

- Paired with reflection questions you can use for discussion or written response, this video from Re-Imagining Migration features Latino people talking about their perceptions of race. After viewing, students can produce their own videos.
- These videos from NBC offer profiles of women -- specifically Latinas -- working in STEM fields. The first is an engineer at Boeing, and the second is an electrical engineer who also mentors young girls. Talk about the importance of diversity in these highly technical fields.
- Watch this video about Cesar Chavez from TeachWithMovies.org to find out why he's a critical figure in the labor movement. Pair with some history or a short story about similar issues, or do some math around how much migrant farmworkers are typically paid.
- This short YouTube video from In This Together profiles one man's experience as a farmworker, and could be a great companion to the video above.

Texts:

- Pair this text about Dolores Huerta with the video about Cesar Chavez above to highlight another important icon. Other teacher resources are available with a Facing History & Ourselves account.
- From CommonLit, this set of texts features Hispanic, Latino, and Chicano authors. Consider assigning specific pieces for a jigsaw activity, or letting kids choose a text that appeals to them.
- Starting with tacos and addressing appropriation, this lengthy article from the New York Times is a great way to get kids thinking about how pieces of Hispanic and Latino cultures -- among others -- are often appropriated for profit. Can they think of other examples of this type of appropriation?
- Together, read this New York Times article about Gwen Ifill, an Afro-Latina icon of journalism, and how her success inspired the author's students. Expand the discussion to consider the concept of representation, why it matters, and who inspires your students.

Interactives:

- From Google Arts & Culture, this interactive presentation about Tikal, Guatemala, an ancient Mayan kingdom, can give students an appreciation for the vast cities and cultures that existed before our present day. Ask students which discoveries they're most surprised about.
- This feature, also from Google Arts & Culture, provides a treasure trove of information about Mexico. It highlights various places and lets students interact with art. Find out if students are familiar with some of the featured places, and share if you are!
- With this Google Doc from the Kennedy Center, students can research leaders of the Mexican Revolution.

Resources for Grades 9-12

From the ancient Aztec empire to the fabulous Frida Kahlo, high school students can jump into the art, literature, and representation of Hispanic and Latino people.

Videos:

- Watch this short introduction to Frida Kahlo from TED-Ed (via YouTube) and then, to explore further, jump over to Google Arts & Culture to learn more and see her art. Have students determine what pieces of her life they see reflected in her art.
- Though it's hosted on YouTube, this audio-only podcast from the Fall of Civilizations is about the Aztec empire. In its entirety, it would span several class periods, so it's probably best in shorter segments. Listening is a great opportunity for students to practice their note-taking skills.

Texts:

- Read this poem by Juan Felipe Herrera and discuss the imagery he uses. Then you can let students explore more poems curated by Poets.org for Hispanic Heritage Month. Have students choose one or more to present or use as inspiration to write their own.
- Use this article from the LA Times to spark a conversation about Latino representation in the media. Are there any surprising statistics? Students can discuss the importance of representation and potentially identify an example of when they "saw" themselves in the media.

Interactives:

- Click through this Google Arts & Culture collection of Latino musicians with embedded audio of interviews and music. Have students share some of their favorite Latino and Hispanic artists and bands.
- Pair this feature on the Library of Congress website with actual texts, and students can hear Hispanic and Latino authors reading their work to make it come alive.

Image courtesy of Allison Shelley/The Verbatim Agency for American Education: Images of Teachers and Students in Action.
Power electronics is the application of solid-state electronics to the control and conversion of electric power. The first high-power electronic devices were mercury-arc valves. In contrast to electronic systems concerned with transmission and processing of signals and data, in power electronics substantial amounts of electrical energy are processed. An AC/DC converter (rectifier) is the most typical power electronics device found in consumer electronics such as television sets, personal computers, and battery chargers; there the power range is typically from tens of watts to several hundred watts. In industry a common application is the variable speed drive (VSD) that is used to control an induction motor. The power range of VSDs starts from a few hundred watts and ends at tens of megawatts. Power conversion systems can be classified according to the type of the input and output power.

Power electronics started with the development of the mercury arc rectifier. From the 1920s on, research continued on applying thyratrons and grid-controlled mercury arc valves to power transmission. Uno Lamm developed a mercury valve with grading electrodes, making mercury valves suitable for high voltage direct current power transmission. Selenium rectifiers were invented in 1933. Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that time. Shockley's invention of the bipolar junction transistor (BJT) in 1948 improved the stability and performance of transistors and reduced costs. By the 1950s, higher power semiconductor diodes became available and started replacing vacuum tubes. In the late 1950s the silicon controlled rectifier (SCR) was introduced by General Electric, greatly increasing the range of power electronics applications. R. D. Middlebrook made important contributions to power electronics; in 1970 he founded the Power Electronics Group at Caltech. Successive generations of MOSFET transistors enabled power designers to achieve performance and density levels not possible with bipolar transistors. The power MOSFET is the most common power device in the world, due to its low gate drive power, fast switching speed, easy advanced paralleling capability, wide bandwidth, ruggedness, easy drive, simple biasing, ease of application, and ease of repair. The insulated-gate bipolar transistor (IGBT), introduced in the 1980s, became widely available in the 1990s. This component has the power handling capability of the bipolar transistor and the advantages of the isolated gate drive of the power MOSFET.

The capabilities and economy of a power electronics system are determined by the active devices that are available. Their characteristics and limitations are a key element in the design of power electronics systems. Formerly, the mercury arc valve, the high-vacuum and gas-filled diode thermionic rectifiers, and triggered devices such as the thyratron and ignitron were widely used in power electronics. As the ratings of solid-state devices improved in both voltage and current-handling capacity, vacuum devices have been nearly entirely replaced by solid-state devices. Power electronic devices may be used as switches or as amplifiers. An ideal switch is either open or closed and so dissipates no power; semiconductor devices used as switches can approximate this ideal property, and so most power electronic applications rely on switching devices on and off, which makes systems very efficient as very little power is wasted in the switch.
By contrast, in the case of the amplifier, the current through the device varies continuously according to a controlled input. The voltage and current at the device terminals follow a load line , and the power dissipation inside the device is large compared with the power delivered to the load. Several attributes dictate how devices are used. Devices such as diodes conduct when a forward voltage is applied and have no external control of the start of conduction. Power devices such as silicon controlled rectifiers and thyristors as well as the mercury valve and thyratron allow control of the start of conduction, but rely on periodic reversal of current flow to turn them off. Devices such as gate turn-off thyristors, BJT and MOSFET transistors provide full switching control and can be turned on or off without regard to the current flow through them. Transistor devices also allow proportional amplification, but this is rarely used for systems rated more than a few hundred watts. The control input characteristics of a device also greatly affect design; sometimes the control input is at a very high voltage with respect to ground and must be driven by an isolated source. As efficiency is at a premium in a power electronic converter, the losses that a power electronic device generates should be as low as possible. Devices vary in switching speed. Some diodes and thyristors are suited for relatively slow speed and are useful for power frequency switching and control; certain thyristors are useful at a few kilohertz. Vacuum tube devices dominate high power hundreds of kilowatts at very high frequency hundreds or thousands of megahertz applications. Faster switching devices minimize energy lost in the transitions from on to off and back, but may create problems with radiated electromagnetic interference. Gate drive or equivalent circuits must be designed to supply sufficient drive current to achieve the full switching speed possible with a device. A device without sufficient drive to switch rapidly may be destroyed by excess heating. Practical devices have non-zero voltage drop and dissipate power when on, and take some time to pass through an active region until they reach the "on" or "off" state. These losses are a significant part of the total lost power in a converter. Power handling and dissipation of devices is also a critical factor in design. Power electronic devices may have to dissipate tens or hundreds of watts of waste heat, even switching as efficiently as possible between conducting and non-conducting states. In the switching mode, the power controlled is much larger than the power dissipated in the switch. The forward voltage drop in the conducting state translates into heat that must be dissipated. High power semiconductors require specialized heat sinks or active cooling systems to manage their junction Temperature ; exotic semiconductors such as silicon carbide have an advantage over straight silicon in this respect, and germanium, once the main-stay of solid-state electronics is now little used due to its unfavorable high temperature properties. Semiconductor devices exist with ratings up to a few kilovolts in a single device. Where very high voltage must be controlled, multiple devices must be used in series, with networks to equalize voltage across all devices. Again, switching speed is a critical factor since the slowest-switching device will have to withstand a disproportionate share of the overall voltage. 
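The conduction and switching losses described above can be estimated with a couple of back-of-the-envelope formulas. The sketch below works through a single hard-switched device; the on-state drop, bus voltage, load current, transition time, and switching frequency are illustrative assumptions, not data for any particular part.

```python
# Rough, order-of-magnitude estimate of the losses discussed above for one
# hard-switched device. All numbers are illustrative assumptions, not a datasheet.
V_on = 1.8        # on-state voltage drop across the device (V), assumed
I_load = 50.0     # load current carried while on (A), assumed
duty = 0.5        # fraction of each period spent conducting, assumed
V_bus = 400.0     # blocked DC bus voltage (V), assumed
t_sw = 200e-9     # combined turn-on + turn-off transition time (s), assumed
f_sw = 20e3       # switching frequency (Hz), assumed

# Conduction loss: the forward drop times the current, weighted by duty cycle.
P_cond = V_on * I_load * duty

# Switching loss: a triangular volt-ampere overlap is assumed during each
# transition, giving roughly 1/2 * V * I * t_sw of energy per switching event.
E_sw = 0.5 * V_bus * I_load * t_sw
P_sw = E_sw * f_sw

print(f"conduction loss  ~ {P_cond:.1f} W")
print(f"switching loss   ~ {P_sw:.1f} W")
print(f"total device loss ~ {P_cond + P_sw:.1f} W")
# With these assumed numbers the device dissipates several tens of watts, in line
# with the "tens or hundreds of watts" of waste heat mentioned above, which is why
# heat sinking and switching speed matter as much as the on-state drop.
```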
Mercury valves were once available with ratings to kV in a single unit, simplifying their application in HVDC systems. The current rating of a semiconductor device is limited by the heat generated within the dies and the heat developed in the resistance of the interconnecting leads. Semiconductor devices must be designed so that current is evenly distributed within the device across its internal junctions or channels ; once a "hot spot" develops, breakdown effects can rapidly destroy the device. Certain SCRs are available with current ratings to amperes in a single unit. Topologies for these converters can be separated into two distinct categories: voltage source inverters and current source inverters. Voltage source inverters VSIs are named so because the independently controlled output is a voltage waveform. Similarly, current source inverters CSIs are distinct in that the controlled AC output is a current waveform. DC to AC power conversion is the result of power switching devices, which are commonly fully controllable semiconductor power switches. The output waveforms are therefore made up of discrete values, producing fast transitions rather than smooth ones. For some applications, even a rough approximation of the sinusoidal waveform of AC power is adequate. Where a near sinusoidal waveform is required, the switching devices are operated much faster than the desired output frequency, and the time they spend in either state is controlled so the averaged output is nearly sinusoidal. Common modulation techniques include the carrier-based technique, or Pulse-width modulation , space-vector technique , and the selective-harmonic technique. Voltage source inverters have practical uses in both single-phase and three-phase applications. Single-phase VSIs utilize half-bridge and full-bridge configurations, and are widely used for power supplies, single-phase UPSs, and elaborate high-power topologies when used in multicell configurations. They are also used in applications where arbitrary voltages are required as in the case of active power filters and voltage compensators. Current source inverters are used to produce an AC output current from a DC current supply. This type of inverter is practical for three-phase applications in which high-quality voltage waveforms are required. A relatively new class of inverters, called multilevel inverters, has gained widespread interest. Normal operation of CSIs and VSIs can be classified as two-level inverters, due to the fact that power switches connect to either the positive or to the negative DC bus. If more than two voltage levels were available to the inverter output terminals, the AC output could better approximate a sine wave. It is for this reason that multilevel inverters, although more complex and costly, offer higher performance. Each inverter type differs in the DC links used, and in whether or not they require freewheeling diodes. Either can be made to operate in square-wave or pulse-width modulation PWM mode, depending on its intended usage. Square-wave mode offers simplicity, while PWM can be implemented several different ways and produces higher quality waveforms. Voltage Source Inverters VSI feed the output inverter section from an approximately constant-voltage source. The desired quality of the current output waveform determines which modulation technique needs to be selected for a given application. The output of a VSI is composed of discrete values. 
In order to obtain a smooth current waveform, the loads need to be inductive at the select harmonic frequencies. Without some sort of inductive filtering between the source and load, a capacitive load will cause the load to receive a choppy current waveform, with large and frequent current spikes. The single-phase voltage source half-bridge inverters, are meant for lower voltage applications and are commonly used in power supplies. Low-order current harmonics get injected back to the source voltage by the operation of the inverter. This means that two large capacitors are needed for filtering purposes in this design. If both switches in a leg were on at the same time, the DC source will be shorted out. Inverters can use several modulation techniques to control their switching schemes. If the over-modulation region, ma, exceeds one, a higher fundamental AC output voltage will be observed, but at the cost of saturation. For SPWM, the harmonics of the output waveform are at well-defined frequencies and amplitudes. This simplifies the design of the filtering components needed for the low-order current harmonic injection from the operation of the inverter. The maximum output amplitude in this mode of operation is half of the source voltage. If the maximum output amplitude, m a , exceeds 3. As was true for Pulse Width Modulation PWM , both switches in a leg for square wave modulation cannot be turned on at the same time, as this would cause a short across the voltage source. Therefore, the AC output voltage is not controlled by the inverter, but rather by the magnitude of the DC input voltage of the inverter. Using selective harmonic elimination SHE as a modulation technique allows the switching of the inverter to selectively eliminate intrinsic harmonics. The fundamental component of the AC output voltage can also be adjusted within a desirable range. Since the AC output voltage obtained from this modulation technique has odd half and odd quarter wave symmetry, even harmonics do not exist. The full-bridge inverter is similar to the half bridge-inverter, but it has an additional leg to connect the neutral point to the load. Any modulating technique used for the full-bridge configuration should have either the top or the bottom switch of each leg on at any given time. Power electronics is the application of solid-state electronics to the control and conversion of electric power. The first high power electronic devices were mercury-arc valves. In contrast to electronic systems concerned with transmission and processing of signals and data, in power electronics substantial amounts of electrical energy are processed. The power range is typically from tens of watts to several hundred watts. In industry a common application is the variable speed drive VSD that is used to control an induction motor. The power range of VSDs start from a few hundred watts and end at tens of megawatts. The power conversion systems can be classified according to the type of the input and output power. Written for undergraduate and graduate students in the electrical and engineering fields, Power Electronics: Devices, Circuits and MATLAB Simulations is intended for a one-semester course on power electronics. Each chapter begins with the essential theory and related waveforms, then describes the MATLAB programs with their corresponding outputs. Topics covered include power semiconductor devices, triggering methods, devices and circuits, commutation and protection, and phase-controlled rectifiers. 
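As a rough illustration of the carrier-based (sinusoidal PWM) technique described above, the sketch below compares a sinusoidal reference with a triangular carrier to produce the gating signal for one leg of a half-bridge VSI. The reference and carrier frequencies, modulation index, and DC bus voltage are illustrative assumptions.

```python
# Minimal sketch of carrier-based sinusoidal PWM (SPWM) for one half-bridge leg.
# The reference/carrier frequencies, modulation index, and bus voltage are
# illustrative assumptions. Requires numpy.
import numpy as np

f_ref = 50.0        # desired fundamental output frequency (Hz), assumed
f_carrier = 2000.0  # triangular carrier frequency (Hz), assumed
m_a = 0.8           # amplitude modulation index (reference peak / carrier peak)
V_dc = 400.0        # DC bus voltage (V), assumed

t = np.linspace(0.0, 1.0 / f_ref, 20000, endpoint=False)   # one fundamental cycle
reference = m_a * np.sin(2 * np.pi * f_ref * t)

# Triangular carrier between -1 and +1, built from a sawtooth phase ramp.
phase = (f_carrier * t) % 1.0
carrier = 4.0 * np.abs(phase - 0.5) - 1.0

# Comparator: top switch on when the reference exceeds the carrier,
# bottom switch on otherwise (never both, to avoid shorting the DC source).
top_switch_on = reference > carrier
v_out = np.where(top_switch_on, +V_dc / 2, -V_dc / 2)       # half-bridge pole voltage

# The averaged (filtered) output should track the sinusoidal reference:
# for a half bridge, the fundamental peak is about m_a * V_dc / 2 when m_a <= 1.
fundamental = 2.0 * np.mean(v_out * np.sin(2 * np.pi * f_ref * t))
print(f"estimated fundamental peak: {fundamental:.1f} V "
      f"(expected about {m_a * V_dc / 2:.1f} V)")
```

Raising the carrier frequency pushes the switching harmonics further above the fundamental, which is the trade-off against the switching losses estimated earlier.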
Traditionally, two approaches are used to simulate power electronic systems. One of them, the so-called variable-topology approach, assimilates the switches to open circuits or short circuits. These circuits can also be simulated in PSpice.
Initially a coal seam can be core-drilled and the core recovered for study. After a hole is drilled and the core is recovered, the hole is geophysically logged using a gamma/gamma (slim density) geophysical tool. This geophysical tool is small enough to be lowered down the center of the drill rods. GEOLOGY Vol. V Coal Exploration and Mining Geology - Colin R. Ward Encyclopedia of Life Support Systems (EOLSS) Coal exploration programs typically involve the following components Obtain legal title to explore the area Evaluate the geological Some geophysical methods, such as gamma-ray spectrometry and remote sensing, measure surface attributes others, such as thermal and some electrical methods are limited to detecting relatively shallow features but may help Geophysical methods have been used for many years in the search for metallic ore bodies and petroleum fields, and are also useful at different scales in many coal exploration programs. If the coal basin is underlain by rocks that are denser than or have different magnetic properties from those associated with the coal seams, maps showing the pattern of variation across the area in the earths Geophysical methods now play a critical role in many coalfield investigations .The techniques used at an early stage in the exploration program are normally those that give broad-scale information on a large area at relatively little cost. In fact, near-surface gamma ray logs of cased oil and gas wells are a prime source of data for identifying and measuring the thickness of shallow coal beds in the Northern Great Plains region. The gamma ray logs can detect shale partings in a coal bed, but generally the thickness of thin partings is exaggerated. ISBN0-86499-863-5 Pub.No.T/254 Disclaimer Thecontentsofthispublicationarebasedonthe informationanddataobtainedfrom,andtheresults andconclusionsof ... Coal exploration projects are focused to accurately predict the geology of a coal field in a cost-effective manner. More geologic data gene rally yields a better geological model of the coal field. Advances in geophysical methods may provide tools to supplement traditional methods of coal exploration. Geophysical method of exploration of coal Products. As a leading global manufacturer of crushing, grinding and mining equipments, we offer advanced, reasonable solutions for any size-reduction requirements including, Geophysical method of exploration of coal, quarry, aggregate, and different kinds of minerals. Jul 30, 2013 Despite this favourable situation for geophysics, exploration for coal in many countries mainly relies on drilling. The main reason for this is that when compared to petroleum exploration, coal exploration is concerned with much shallower targets (typically less than 2300 m for open pit mining and less than about 1000 m for underground mining). Advances in geophysical methods may provide tools to supplement traditional methods of coal exploration. Two seismic methods 1) multi-channel analysis of surface waves (MASW), and 2) shear wave (SH-wave) analysis were evaluated to determine their usefulness as coal exploration tools. Geophysical and remote sensing technologies have become essential to the discovery and assessment of gold and silver, base metal, coal and uranium, and strategic mineral deposits. Airborne geophysical methods are employed to map large areas and increase the efficiency and success of the exploration program. 
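The gamma/gamma (slim density) logging mentioned above is typically interpreted by exploiting coal's distinctly low density relative to the surrounding strata. Here is a toy sketch that flags probable coal intervals from a density log with a simple cutoff; the depths, readings, and the cutoff value are invented for illustration only.

```python
# Illustrative sketch of picking coal intervals from a slim density (gamma/gamma)
# log: coal is much less dense than the surrounding rock, so a simple cutoff
# flags probable coal. Depths, readings, and the 1.8 g/cm^3 cutoff are invented.
depth_m = [100.0, 100.5, 101.0, 101.5, 102.0, 102.5, 103.0, 103.5]
density = [ 2.55,  2.48,  1.45,  1.38,  1.42,  2.35,  1.70,  2.60]  # g/cm^3
COAL_CUTOFF = 1.8   # assumed density below which the interval is logged as coal

intervals, start = [], None
for z, rho in zip(depth_m, density):
    if rho < COAL_CUTOFF and start is None:
        start = z                       # entering a low-density (probable coal) zone
    elif rho >= COAL_CUTOFF and start is not None:
        intervals.append((start, z))    # leaving the zone; record the interval
        start = None
if start is not None:
    intervals.append((start, depth_m[-1]))

for top, base in intervals:
    print(f"probable coal from {top} m to {base} m (thickness ~{base - top:.1f} m)")
```

In practice the cutoff and the thickness corrections come from calibrating the tool against cored holes, not from a fixed number like the one assumed here.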
Ground and borehole geophysical methods are deployed to map the subsurface geology. A geophysics applicability matrix for pertinent coal mining and exploration problems lists, for each problem, the exploitable property contrast, the typical required range and resolution, the most applicable geophysical methods, and the workable survey geometry. For the delineation of old workings, for example, the exploitable contrast is the density of the air-filled voids, the typical required range is 0–50 m with an accuracy of about 5 m, the most applicable method is TDEM, and the workable survey geometry is grid surveys on the surface. Several surface geophysical methods are applicable to detecting such subsurface features, although these shallow surveys are more expensive per unit of coverage than the deep surveys conducted for oil and gas exploration because of the need to make closely spaced measurements. At depths approaching 1000 feet, it is also difficult and expensive to characterize a coal seam with borings alone.

In "Integration of Downhole Geophysical and Lithological Data from Coal Exploration Drill Holes" (Brett J. Larkin, GeoCheck Pty. Ltd., Forresters Beach, NSW, Australia), the primary variable of interest in a coal resource study is the volume of coal as estimated from the coal thicknesses in each drill hole. There are many ways to derive coal quality parameters from geophysical logs. A common method is to establish relationships between laboratory-derived proximate analysis and multiple geophysical logging parameters, so that future coal parameters can be estimated directly from the geophysical logging measurements through those established relationships. Service providers such as AGS offer surveys for the geothermal, coal, and gas industries, including surface TEM, magnetotelluric, and resistivity surveys. Non-invasive geophysical methods such as seismic reflection also have the potential to map carbonaceous material such as coal.

Mining is the extraction of valuable minerals or other geological materials from the earth, usually from an ore body, lode, vein, seam, reef, or placer deposit; these deposits form a mineralized package that is of economic interest to the miner. In "Geophysical Applications in Coal Exploration and Mine Planning: Electromagnetics" (L. C. Bartel and T. L. Dobecki, OSTI 5382423), geologic features such as structure (faults, folds, fractures, etc.), water-bearing zones, aquifers, permeability variations, and porosity variations are identified as important for mine planning and for in-situ energy recovery processes. The focus of these coal exploration projects is to predict the geology of a coal field accurately and cost-effectively; more geologic data generally yield a better geological model of the coal field, and advances in geophysical methods may provide tools to supplement the traditional methods of coal exploration.
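To make the log-calibration idea above concrete, here is a minimal sketch of fitting a relationship between a single density-log reading and laboratory ash content. The calibration numbers are entirely synthetic, and the single-predictor, straight-line form is an assumption for illustration; real workflows typically regress against several log parameters and validate against held-out laboratory samples.

```python
import numpy as np

# Synthetic calibration set: density-log readings (g/cc) for intervals that
# also have laboratory proximate analysis (ash %). Values are illustrative only.
density_log = np.array([1.35, 1.42, 1.50, 1.58, 1.65, 1.72, 1.80])
lab_ash_pct = np.array([4.0, 7.5, 11.0, 15.5, 19.0, 24.0, 29.5])

# Least-squares fit of ash % as a linear function of log density.
slope, intercept = np.polyfit(density_log, lab_ash_pct, deg=1)
print(f"ash% ~= {slope:.1f} * density(g/cc) {intercept:+.1f}")

# Apply the calibration to newly logged intervals that have no laboratory data.
new_density = np.array([1.46, 1.68])
predicted_ash = slope * new_density + intercept
for d, a in zip(new_density, predicted_ash):
    print(f"density {d:.2f} g/cc -> predicted ash {a:.1f} %")
```

Once such a relationship has been established and checked, ash (or other proximate-analysis parameters) can be estimated for every logged interval rather than only for the intervals that were laboratory-tested.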
Geophysical exploration can be effective in detecting and monitoring potential sources of coal mine water in-rushes and underground watercourses. In China, in-mine seismic, DC resistivity, and transient electromagnetic methods are generally used for such purposes, although these technologies can be influenced by many factors, such as roadways, fissures in the surrounding rocks, and the exploration techniques themselves. A geophysical survey will in no way replace good drilling information or detailed geologic mapping; combined with these and other exploration methods, however, it is a way to provide clients with accurate reserve numbers in a timely manner.

Surface geophysical surveys have been applied to mineral and petroleum exploration for many years: a magnetic compass was used in Sweden in the mid-1600s to find iron ore deposits, and the lateral extent of the Comstock ore body was mapped using self-potential methods in the 1880s. Coal reserves are discovered through exploration. Modern coal exploration typically involves extensive use of geophysical surveys, including 3D seismic surveys aimed at providing detailed information on structures with the potential to affect longwall operations, together with drilling. Coal is a low-cost commodity and coal miners' profit margins are slim, so mining companies must maintain tight cost control to achieve acceptable returns; the challenge for coal geophysics is to adapt to these economic constraints and provide cost-effective, reliable methods for exploration and mining requirements.

Depending on which physical character of the earth is exploited, geophysical methods can be classified into electrical, magnetic, seismic, and geothermal methods. In principle, there are no differences between common geophysical methods and coal-mining geophysical methods; however, coal-mining geophysical methods are restricted to the confines of the mine tunnel. Reports such as "Geophysical Methods in Exploration and Mineral Environmental Investigations" (Donald B. Hoover, Douglas P. Klein, and David C. Campbell) discuss the applicability of geophysical methods to geoenvironmental studies of mineral deposits. Mine geophysical prospecting is an important means of geological exploration of coal seam working faces, and mine radio wave tomography is one of the most common and effective methods. Geophysical logs can be used not only for qualitative interpretation, such as strata correlation, but also for geotechnical assessment through quantitative data analysis; in an emerging digital mining age, such use of geophysical logs helps to establish reliable geological and geotechnical models, which reduces the safety and financial risks caused by geological and geotechnical uncertainty. Finally, to avoid the non-uniqueness of any single geophysical exploration method, improve the accuracy of exploration, and gain a qualitative understanding of the geophysical characteristics of an exploration area, the resistivity logging method can be used to study the resistivity distribution of strata at different depths.
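Because water-bearing zones and flooded workings are usually picked out through their anomalously low electrical resistivity, a short illustration of the arithmetic behind a simple DC resistivity measurement may help. The sketch below evaluates apparent resistivity for a Wenner electrode array using the standard relation rho_a = 2 * pi * a * V / I; the electrode spacings, voltages, and current are invented survey values for demonstration only.

```python
import math

def wenner_apparent_resistivity(spacing_m: float, voltage_v: float, current_a: float) -> float:
    """Apparent resistivity (ohm*m) for a Wenner array: rho_a = 2 * pi * a * V / I."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Hypothetical readings at increasing electrode spacings (greater spacing samples
# greater depth). A fall in apparent resistivity with spacing would be consistent
# with a conductive, possibly water-saturated zone at depth; the numbers are made up.
readings = [
    (5.0, 0.420, 0.10),    # (spacing a in m, measured voltage in V, injected current in A)
    (10.0, 0.180, 0.10),
    (20.0, 0.055, 0.10),
]

for a, v, i in readings:
    rho_a = wenner_apparent_resistivity(a, v, i)
    print(f"a = {a:4.0f} m  ->  apparent resistivity = {rho_a:6.1f} ohm*m")
```

Interpreting such soundings quantitatively (inverting them for a layered resistivity model) is considerably more involved, which is one reason the text above stresses that geophysics supplements rather than replaces drilling and geologic mapping.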
Airborne geophysical methods for mineral exploration are a means of collecting geophysical data that can be used to prospect directly or indirectly for economic minerals characterized by anomalous magnetic, conductive, or radiometric responses. As a mine-scale example, a coal mine in Datong is an integrated mine with goaf both above and below the coal seam currently being worked. A large amount of ponded water has accumulated in the goaf, which poses a serious safety hazard for production. To determine the extent and location of the ponding in the goaf, a comprehensive geophysical exploration approach was adopted, combining the transient electromagnetic method with high-density electrical (resistivity) surveying.
<urn:uuid:21132b87-2aad-404b-9d9a-0153020038d6>
CC-MAIN-2021-43
https://www.vanduin-cv.nl/sand-washing-machine/Jan-28_31724.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00711.warc.gz
en
0.917198
2,216
3.40625
3
Today is World Alzheimer's Day! It's a day on which Alzheimer's organizations around the world concentrate their efforts on raising awareness about Alzheimer's and dementia. Events are happening in multiple cities throughout the month of September where people come together to remember a loved one or a friend who has, or has died from, Alzheimer's and to help raise money for the cause. A diagnosis of Alzheimer's is never something anyone wants to hear. We can't go back in time, but I sure wish I knew then what I know now. You see, we lost my husband's sweet mother to Alzheimer's, so we sadly know first hand more than we ever wanted to know about this insidious disease. It was terrible watching her seemingly fade away, not remember who people were, and eventually become unable even to dress herself. At the time there was no hope – nothing that could be done. That's changed! This blog post is longer than I usually write, but important information cannot always be covered in just a couple of paragraphs, and this topic is too important to gloss over. As we've been exploring the steps we can take to avoid Alzheimer's, I came across important information and a book about breakthrough research that I am compelled to share with everyone! So let's get started!

What is Alzheimer's?

Alzheimer's disease is the most common form of dementia, a group of disorders that impairs mental functioning. Surveys have shown that there is something Americans fear more than death…more than cancer. It is Alzheimer's disease. For most of us, losing our personhood – those characteristics that make us who we are – is a fate worse than death. Named after Alois Alzheimer, the German psychiatrist and neuropathologist who first described the condition in 1906, Alzheimer's is a disease that affects the function of the brain by causing the brain's neurons and synapses to degenerate, resulting in cognitive decline and eventual death. The first symptoms of the disease usually show up as forgetfulness, but as it worsens, more long-term memory loss occurs, along with other symptoms such as mood swings, irritability, and the inability to manage day-to-day self-care. The progression of the disease leads to eventual death. No one should have to die from this disease. Nothing, including the latest drugs, has been able to stop, slow, or reverse the progression. While strides have been made to cure other diseases, Alzheimer's has been incurable. Until recently, there has been no hope for a cure. Today, there is hope: more than 200 patients have been successfully treated with a new protocol, developed by Dr. Dale E. Bredesen. [i]

Is Alzheimer's Inevitable?

Alzheimer's affects 5.3 million Americans, and it is predicted that by 2050, 1 in 8 Americans will be stricken with it. The Medicare system spends three times as much on Alzheimer's treatment as it does on any other disease. Because of the prevalence of Alzheimer's disease in our country, many people view it as a normal and inevitable part of the aging process. But this is not so. In fact, in spite of it being so common in America, there are societies in which dementia and Alzheimer's are rare, even for people in their 90's and beyond. The elders in these cultures maintain clear thinking without the burden of dementia that we have come to associate with aging. "Alzheimer's disease does not arise from the brain failing to function as it evolved to." – Dale Bredesen, M.D. In fact, as Dr.
Bredesen's breakthrough research shows, Alzheimer's is actually the result of a protective response in the brain. Now there's light at the end of the tunnel! This is exciting news! In his new book, "The End of Alzheimer's," released just this year, Dr. Dale E. Bredesen, an internationally recognized "expert in the mechanisms of neurodegenerative diseases such as Alzheimer's," shares how Alzheimer's disease and cognitive decline can not only be prevented but, in many cases, be reversed. My husband and I are reading this book together. While we haven't yet finished reading it, we have grasped what Dr. Bredesen's research and protocols have demonstrated: that Alzheimer's is not just one condition, as it is currently treated. His book goes into detail about the three causes, the biochemical markers, and the risks of the gene ApoE4. The newest research on Alzheimer's has shown that this is not a single disease but is "actually three distinguishable syndromes." According to Dr. Bredesen and his research team, there are many different mechanisms that can result in Alzheimer's, affecting people in different ways and at different ages. Understanding where you are out of balance, and how to rebalance these mechanisms by adjusting lifestyle factors "including micronutrients, hormone levels, stress, exercise, and sleep quality," can make a huge difference – in the prevention and the reversal of Alzheimer's!

Following are some steps you can take right now to start protecting yourself from getting Alzheimer's. According to Dr. Bredesen's research and his successful treatment protocol, ReCODE, "so many of the conditions that increase our risk for Alzheimer's disease – from prediabetes and obesity to vitamin D deficiency and a sedentary lifestyle – are the result of what and how much we eat and exercise. The basics for addressing each neurothreat include: prevent and reduce inflammation; optimize hormones, trophic factors[ii], and nutrients; and eliminate toxins." So what do you focus on specifically to address Alzheimer's?

1. Eat a healthy diet

Diet also plays a crucial role. Eating a diet consisting of a variety of fresh, whole, and primarily plant-based foods is key to providing your body with the nutrients it needs to thrive. Limiting or eliminating processed and artificial sugars, fast food, and processed foods will be a bonus! The best diet for preventing dementia is one that is very low in animal-derived foods and high in fresh plant foods. As an overview, highly beneficial foods to consume frequently in your diet, according to Dr. Bredesen's protocol for prevention, include:
- Cruciferous vegetables, such as broccoli, cauliflower and Brussels sprouts
- Leafy green vegetables
- Resistant starches, such as sweet potatoes, rutabagas, parsnips, green bananas
- Probiotic foods like sauerkraut and kimchi
- Prebiotic foods such as jicama and leeks
- Sulfur-containing vegetables, such as onions and garlic
- Herbal teas, black tea, green tea

He suggests avoiding the following foods: sugar and simple carbohydrates (pasta, rice, cookies, soda, etc.), grains, gluten, dairy (organic cheese, whole or raw milk, or plain yogurt is okay occasionally), processed foods (packaged with a list of ingredients), high-mercury fish (tuna, shark, swordfish), and fruits with a high glycemic index, such as pineapple. Wait! What about coffee and wine? These are on the "eat less frequently" list! Wine is limited to a few times a week. Coffee is on Dr.
Bredesen's "yellow light" list, as are pasture-raised chicken, grass-fed beef, fruits with low glycemic values, legumes, and nightshades – eat these in moderation. More information is in Dr. Bredesen's book! Antioxidants neutralize free radicals. This is important because free radicals are in part responsible for the damage that causes dementia. Avoiding inflammation is critical, as inflammation greatly affects the brain's synapses. And high insulin and high glucose are two of the most important risk factors for Alzheimer's disease. A healthy diet also helps you avoid other health problems such as obesity, high cholesterol, high blood pressure, diabetes, and arteriosclerosis. In another study cited by Robbins, researchers found that persons who are obese in middle age are twice as likely to develop dementia in their later years as people who had normal weights. Further, if these people also have high cholesterol and high blood pressure, their risk for dementia in old age escalates to six times that of normal-weight people!

2. Get plenty of physical exercise

Diet is not the only thing that can reduce your risk of Alzheimer's disease. In his book, Healthy at 100: The Scientifically Proven Secrets of the World's Healthiest and Longest-Lived Peoples, John Robbins cites study after study demonstrating the stunning effect of exercise on the brain's ability to function well, even at advanced ages. In one such study, documented in the Archives of Neurology (March 2001), it was found that the people with the highest activity levels were only half as likely as inactive people to develop Alzheimer's. Further, these active people were also substantially less likely to develop any form of dementia or impairment in mental functioning. In another study[iii], some mice were bred to develop in their brains the type of plaque that is associated with Alzheimer's. Some of the mice were allowed to exercise and some were not. Two important findings emerged.
- The mice that exercised developed 50-80 percent less plaque in their brains than the non-exercising mice developed.
- The exercising mice produced more of the enzyme that prevents the buildup of plaque in the brain.

Not only does research validate the benefits of exercise, it shows that sitting is detrimental to physical and cardiovascular health, as well as cognitive health! The takeaway? People who exercise regularly are less likely to develop Alzheimer's disease or any other kind of dementia than people who are sedentary. Be sure to combine aerobic exercise, like walking or jogging, spinning or dancing, with weight training!

3. Avoid toxins!

Exposure to neurotoxins also plays a role in Alzheimer's, since the protective response can lead to losing critical cognitive abilities. When our body is exposed to high levels of toxic substances such as metals or biotoxins (from molds), everything is at risk, especially the brain! Avoiding toxins is just darn good for your health. There are plenty of non-toxic options…more to come on that in a future blog! What are you waiting for? Start now to defend yourself against health issues of all kinds: get moving, avoid toxins, and eat a truly healthy diet, consuming organics as much as possible. You will reap the benefits literally for years to come! If you or a loved one suspects they might have symptoms of Alzheimer's, I strongly encourage you to get a copy of Dr. Bredesen's "The End of Alzheimer's" to learn more.
You'll discover that, unlike what was previously thought, the amyloid response that causes plaque is a defense mechanism of the brain. It's important to know what the brain is trying to defend itself against, so that those causative factors can be removed. Dr. Bredesen's book also goes into detail about ApoE4, the strongest known genetic risk factor for Alzheimer's disease, which increases the risk for Alzheimer's by 30-50%. The earlier a proper evaluation is done to identify and correct the causes of synapse loss and cognitive decline, the better the chances someone has to avoid full-blown Alzheimer's. Dr. Bredesen's book provides resources for proper evaluation and a detailed understanding of the ReCODE protocol, including over 200 peer-reviewed publications. Seek medical attention if you are experiencing symptoms of cognitive decline. Even if you don't have memory issues now, you probably know someone who does. And I'd read it anyway – the American diet and lifestyle are the perfect recipe for many health-related problems, including Alzheimer's.

How to get started with a focus on healthy and mindful eating! A focus on healthy and mindful eating is the first place to start to improve your health and set yourself up for the long term. Not sure how to really get started? It's all baby steps – one step at a time. Find out more about my Focus on Healthy Eating Program. I am launching this 12-week course as a pilot program on October 24. You can find out more here: www.katmaeda.com/focus-on-healthy-eating – join my special VIP list for additional details and a special registration discount!

Note: If you are interested in reading "The End of Alzheimer's" by Dale E. Bredesen, M.D., it's available at local bookstores and Amazon.com. I am promoting this book for educational purposes, as it may save someone from suffering or dying from Alzheimer's; I do not make any affiliate fees or commission on the sale of this book.

[i] "The End of Alzheimer's" by Dale E. Bredesen, M.D., copyright 2017
[ii] Helper molecules that allow a neuron to develop and maintain connections with its neighbors are called trophic factors.

This article is for information and educational purposes only and is not meant to diagnose, treat, or cure illness. If you have concerns about your cognitive or physical health, seek medical attention from a licensed medical practitioner.
<urn:uuid:20f25a31-d8c0-4053-a800-a43d5dcba39d>
CC-MAIN-2021-43
https://katmaeda.com/protect-your-brain-from-alzheimers-disease/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00670.warc.gz
en
0.949137
2,954
2.796875
3
Merle dog coats have irregular patches of darker hair laid over a lighter, or diluted, shade of the same pigment. The pattern has been in dogs for many years, but wasn't called merle until the early 2000s. Sometimes it is also called dapple. Dogs need a single merle gene to get this coat pattern. Dogs with two merle genes are more prone to the health problems associated with merle coloring. Let's take a closer look at how merle dogs get their coat and what health implications it has.

Contents

Here's a quick look at everything we're going to cover in this guide to merle colored dogs.
- Merle dog breeds
- Appearance of merle coat dogs
- Merle dog coat colors
- Genetics of the merle color dog
- Double merle dog
- Merle colored dogs' health
- Reducing health risks
- Merle dog breeds' temperament
- Training and exercising merle dogs
- Grooming merle dogs
- Merle dog puppies

Merle Dog Breeds

There is a surprisingly long list of dogs that can display this interesting coat pattern. These can be blue merle dog breeds or red merle dog breeds, but we will look a bit more at these different colors later. Breeds that may show this color coat include:
- Border Collies
- Pyrenean Shepherds
- Catahoula Leopard Dogs
- Bergamasco Sheepdogs
- American Staffordshire Terriers (pit bulls)
- Old English Sheepdogs
- American Cocker Spaniels

Is This Pattern Always Desirable? In some breeds, such as the Australian Shepherd, the color is a distinguishing characteristic. But in others, such as the Dachshund, merle coloration isn't considered desirable because of the associated genetic weakness.

Merle Color Dog Appearances

The random patches of color laid on top of the lighter color make this pattern unusual and distinctive. In blue merle dogs, the color is mottled black atop diluted black-and-white hair. In red merles, the color is a mottled brown on top of lighter brown hair. You'll still see patches of undiluted pigment over the dog's body. The merle gene seems to affect mostly the black pigment. In an 'Mm' dog, a tan color is not necessarily diluted, so a blue merle dog may still have tan points.

Merle Dog Colors

Merles are generally split into the blue merle dog and the red merle dog, based upon the type of melanin produced. Some breeds also show fawn and chocolate merle patterns.

Genetics of the Merle Coat

The gene that causes merle in dogs is called PMEL17 or SILV. This color pattern is what scientists call "incompletely dominant." It shows up when a dog gets just a single copy of the merle allele, and it basically causes a dilution of color. Researchers have isolated three different alleles, or variants, for merle. These are the merle allele (M), the cryptic merle (Mc), and non-merle (m). Merle dogs have one allele for merle and one for non-merle, which is expressed as Mm. Cryptic merle refers to a pattern called phantom or ghost merle. Often, these dogs have the M genotype but don't express it. Cryptic merles are usually either liver or black, with some small areas of merle. In fact, some don't look like merles at all. The inheritance of M and Mc is unstable: sometimes M may produce Mc, and vice versa.

Double Merle Dog

Dogs with two copies of the M allele, called double merle (MM), tend to be white with patches of color. If you've heard the term "lethal white," it (somewhat misleadingly) refers to the MM genotype. Unfortunately, double merle dogs are more likely to suffer from some serious health problems, including deafness and blindness.
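To see why this matters for breeding decisions, the short sketch below simulates the expected genotypes from mating two single-merle (Mm) dogs under simple Mendelian inheritance. It is an illustration of the probabilities only: the cryptic merle allele and the modifier genes discussed elsewhere in this guide are deliberately ignored, and this is not a breeding tool.

```python
import random
from collections import Counter

def puppy_genotype(parent1=("M", "m"), parent2=("M", "m")):
    """Each parent passes on one randomly chosen allele at the merle locus."""
    return "".join(sorted(random.choice(parent1) + random.choice(parent2)))

# Simulate a large number of puppies from a merle-to-merle (Mm x Mm) mating.
litter = Counter(puppy_genotype() for _ in range(100_000))
total = sum(litter.values())
labels = {"MM": "double merle", "Mm": "merle", "mm": "non-merle"}
for genotype in ("MM", "Mm", "mm"):
    share = litter[genotype] / total
    print(f"{genotype} ({labels[genotype]}): about {share:.0%}")
```

The simulation converges on the textbook expectation: roughly one puppy in four from a merle-to-merle mating is a double merle (MM), which is exactly the outcome responsible breeders try to avoid.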
We will take a closer look at this in a moment. What makes the merle color even more complicated is that there are modifying genes that work with the merle gene to create different phenotypes (the look of the dog based on its genes). These include the harlequin merle, in which the "blue" is replaced with white to create a white dog with black patches. They also include patchwork or tweed merle, in which the "blue" or "red" becomes gray, tan, and brown. Patches in tweeds may be bigger in size, range, and dilution intensity.

Health of Merle Colored Dogs

The merle gene is unfortunately linked with impaired function of the auditory, ophthalmologic, and immune systems of dogs. That's because color and color pattern in dogs are associated with the development of the nervous system in the dog embryo; they all come from the same cells. The problems are caused in part by the suppression of pigment cells in the inner ear and the iris of the eye. Merle dogs are known to be vulnerable to a wide range of defects in the eyes and ears. The blue eyes sometimes make it more difficult to diagnose eye problems, as well.

Double Merle Dog Health

One study found that deafness affected 9.2 percent of dogs with the merle allele, with 3.5 percent in single merles and 25 percent in double merle dogs. Other studies have found similar results, showing also that double merle (MM) dogs experience ear and eye effects at a much higher rate than single merle dogs. Some double merle dogs have been known to be born without eyes at all. There may be differences based upon breed, too: collie-type breeds seem more affected by deafness than others. One of the conditions merle dogs may suffer from is microphthalmia with coloboma. This is a recessive trait that may show up in merles with a predominant amount of white hair (as with MMs), in which the eyes are abnormally small and may have anatomic malformations in the lens, iris, or retina. Other conditions include:
- distortion of the eye's appearance
- night blindness
- a cleft in the iris, and
- third eyelid abnormalities.

Reducing Health Risks

Veterinarians recommend genetic testing for merle dogs, because the genetics of the merle coloring can be complicated. The variations of merle coloring can result in a variety of appearances, so testing may be the only way to understand a merle dog's true genetic makeup. Also, please don't breed your merles, especially with other merles. Certain dogs that don't look like merles may actually still carry the M gene. For example, cryptic merles or sable-colored dogs may be indistinguishable from non-merle dogs. And, if not identified through genetic testing, someone unaware of the genetic background of their dogs might inadvertently mate two merles together, resulting in a litter that includes double merle dogs.

Merle Dog Temperament

The merle color gene does not have an effect on temperament, as far as researchers know. If you are looking for a dog with this type of coloring, we recommend you learn about the temperament of the breed in question, rather than the pattern of coloring. We can't generalize in this respect, because the breeds that show merle coloration are all so different! Some of the breeds that have merle coloring as one of their distinguishing characteristics are known to be quite intelligent! However, there doesn't seem to be any relationship between intelligence and merle coloration.
Merle Coat Dog Training and Exercise

Whatever dog you end up getting, training will be important for your pup's overall socialization and happiness. We recommend basic obedience and agility training for larger, active merle dogs, many of which were bred for herding other animals. For smaller dogs such as Chihuahuas, training is still important to minimize nervous and destructive behaviors.

Grooming Merle Dogs

Again, this is something that depends on the breed. Many merle dog breeds have long hair that requires a fair amount of maintenance. Australian Shepherds, for example, have a waterproof, double-layered coat that sheds seasonally. It requires thorough weekly brushing. On the other hand, pit bulls have a short, stiff coat that doesn't need much care and only sheds occasionally.

Merle Dog Puppies

Merle coloring can become darker with age. So, be aware that those white areas on your merle puppy may start to look grayer as your dog ages. But other than that, a merle dog puppy will have all the attributes of an adult of the breed. Color isn't necessarily going to determine your dog's longevity, temperament or the joy you take in being with her. However, the merle gene itself does have health issues associated with it. If you want a puppy with this coloring, do your homework. Get your puppy from an experienced breeder, and know its genetics. How you care for your new merle puppy will definitely affect his quality of life, so make sure you are ready!

Merle Dog Summary

If you have a merle dog, we would love to hear about your experiences in the comments! What is the personality and coat like on your puppy? And have you ever had to cope with any of the health problems we've mentioned?

Readers Also Liked
- Merle Great Dane
- Blue Merle Border Collie
- Red Merle Australian Shepherd
- Brindle Dog Breeds
- Blue Dog Breeds

References and Resources
- 'Health and the Merle Pattern', American Dog Breeders Association (2016)
- American Kennel Club
- Chappell, J. 'Merle (M series)', Dog Coat Color Genetics
- UC Davis School of Veterinary Medicine, Veterinary Genetics Laboratory, 'Merle'
- 'Cryptic Merles', Australian Shepherd Health & Genetics Institute (2017)
- Bowling, S. A. 'Elementary Merle Genetics for Newcomers', Sheltie Bloodlines (2010)
- Clark, L. A. (et al), 'Retrotransposon Insertion in SILV is Responsible for Merle Patterning of the Domestic Dog', Proceedings of the National Academy of Sciences of the United States of America (2006)
- Gelatt, K. N. (et al), 'Inheritance of Microphthalmia with Coloboma in the Australian Shepherd Dog', American Journal of Veterinary Research (1981)
- Sponenberg, P. & Lamoreux, M. L. 'Inheritance of Tweed, a Modification of Merle, in Australian Shepherd Dogs', Journal of Heredity
- Strain, G. M. (et al), 'Prevalence of Deafness in Dogs Heterozygous or Homozygous for the Merle Allele', Journal of Veterinary Internal Medicine (2009)
<urn:uuid:ffca8c80-00ba-4078-b186-1e6d320bf76a>
CC-MAIN-2021-43
https://thehappypuppysite.com/merle-dog/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00429.warc.gz
en
0.919741
2,434
2.96875
3
The term military-industrial complex (MIC) refers to the combination of the U.S. armed forces, its arms industry, and the associated political and commercial interests that grew rapidly in scale and influence in the wake of World War II and throughout the Cold War to the present. The term, often used pejoratively, refers to the institutionalized collusion among private defense industry, the military services, and the United States government (especially the Department of Defense). Such collusion includes the awarding of no-bid contracts to campaign supporters and the earmarking of disproportionate spending to the military. Many observers worry this alliance is driven by a quest for profits rather than a pursuit of the public good. In recent decades, the collusion has become even more prevalent, putting the United States' economy, some argue, permanently on a "war" footing; instead of defense spending in response to armed aggression, current government policy guarantees "readiness" by maintaining worldwide bases and spending large sums of money on the latest military technology. Furthering the problem is increased regional dependence on the defense industry for jobs and tax revenues. If the U.S. government were to drastically reduce its military spending, many Americans working in defense manufacturing plants around the country would lose their jobs; this reality makes it politically difficult for U.S. congressmen to vote against unnecessary defense spending. The increasingly global nature of the U.S. military-industrial complex has led some to charge that the United States is intent on establishing a new, worldwide empire based on military power. Nonetheless, the term MIC can also be applied to similar arrangements elsewhere in the world, both past and present. Origin of the term The term military-industrial complex was first used publicly by President of the United States (and former General of the Army) Dwight D. Eisenhower in his farewell address to the nation on January 17, 1961. Written by speechwriter Malcolm Moos, the speech addressed the growing influence of the defense industry: [The] conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence—economic, political, even spiritual—is felt in every city, every statehouse, every office of the federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources, and livelihood are all involved; so is the very structure of our society. In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist. We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals so that security and liberty may prosper together. In the penultimate draft of the address, Eisenhower initially used the term "military-industrial-congressional complex," indicating the essential role that the U.S. Congress plays in supporting the defense industry. 
But the president was said to have chosen to strike the word congressional in order to avoid offending members of the legislative branch of the federal government. Although the term was originally coined to describe U.S. circumstances, it has been applied to corresponding situations in other countries. It was not unusual to see it used to describe the arms production industries and political structures of the Soviet Union, and it has also been used for other countries with an arms-producing economy, such as Wilhelminian Germany, Britain, France, and post-Soviet Russia. The expression is also sometimes applied to the European Union. Background in the United States At its creation, the American Constitution was unique for its inherent separation of powers and system of checks and balances among those powers. The founders feared that one branch or one office would gain a disproportionate amount of power, so systems were put into place to prevent it. Changing times, however, have limited the effectiveness of these systems. For one, when the Constitution was written, the few corporations that existed had little power in American affairs, but today, corporate money has more and more influence in Washington, D.C. For another, when the founders prepared the document, the United States was an isolated state protected by two vast oceans with little need to involve itself in world affairs. In light of the relative simplicity of American foreign policy at the time, the Constitution granted the executive branch almost absolute power in that area. In today's globalized world, however, the fact that the executive branch wields enormous power and military might can lead to excessive militarization. These issues have contributed to the formation of the American military-industrial complex. World War II The pre-December 1941 Lend-Lease deal, which provided aid and equipment to the United Kingdom and preceded the entry of the United States into World War II, led to an unprecedented conversion of civilian industrial power to military production. American factories went into high gear, producing tanks, guns, ammunition, and the other instruments of war at an astonishing rate. Increased industrial production, however, was not the only change in American life brought on by the war. The military participation ratio—the proportion of people serving in the armed forces—was 12.2 percent, which was the highest that the U.S. had seen since the American Civil War. World War II did not, however, cause the shift to a permanent military-industrial complex. For all practical purposes, the military demobilized after the war, and the American economy shifted back to peacetime production. After World War II, political scientist Chalmers Johnson writes, "…the great military production machine briefly came to a halt, people were laid off, and factories were mothballed. Some aircraft manufacturers tried their hands at making aluminum canoes and mobile homes; others simply went out of business." Cold War/Korean War The U.S. military-industrial complex as it is known today really began with the onset of the Cold War between the United States and the Soviet Union. When North Korea invaded South Korea in 1950, the previously "cold" war turned hot, and the Truman administration decided to back its previously announced policy of containment with military action. That conflict provided the impetus for massive increases in the U.S. defense budget, though little was earmarked to fund the actual fighting. 
Rather, "most of the money went into nuclear weapons development and the stocking of the massive Cold War garrisons then being built in Britain, [West] Germany, Italy, Japan, and South Korea." In simple numbers (2002 purchasing power), "defense spending rose from about $150 billion in 1950…to just under $500 billion in 1953," a staggering increase of over 200 percent. The public's intense fear of the Soviet Union, and a now unleashed armaments industry, put intense pressure on politicians to "do something" to protect Americans from the Soviets. In the 1960 presidential race, for example, Democratic candidate John F. Kennedy claimed that the U.S. had fallen behind the Soviets in terms of military readiness, an issue that he had previously raised in a 1958 speech to the Senate. The charge was largely political opportunism; officials in the Eisenhower administration had images taken by U-2 spy planes that confirmed American superiority in both missile numbers and technology, but the president worried that publicizing the data would lead to the Soviets ramping up their own weapons programs. During the Cold War and immediately after, defense spending spiked sharply four times: first, during the Korean War; second, during the Vietnam War; third, during Ronald Reagan's presidency; and fourth, in response to the September 11 attacks in 2001. During those periods, defense spending per year often exceeded $400 billion. The perceived need for military readiness during the Cold War created a new, permanent and powerful defense industry. That industry quickly became so entrenched in the American consciousness that it became normal for the government to spend large sums of money on defense during peacetime.

The long duration of the Vietnam War required that the United States establish bases and semi-permanent infrastructure in Vietnam for the support of its troops. To do this, the U.S. government largely turned to private contractors, some of which maintained extensive ties to U.S. politicians. During the Vietnam era, American citizens often supported high defense spending because it was required for the struggle against communism. Also, increased military spending brought economic prosperity to regions of the United States that supported it. California, for example, led the nation in military contracts and also featured the military bases to match. Technological advances in weaponry and the required rebuilding of Iraqi infrastructure after the 2003 American invasion have heightened concern over the U.S. military-industrial complex in the eyes of some. One corporation in particular, Halliburton Energy Services, has had a high profile in the Iraqi war effort. Halliburton (NYSE: HAL) is a multinational corporation with operations in over 120 countries, and is based in Houston, Texas. In recent years, Halliburton has become the center of several controversies involving the 2003 Iraq War and the company's ties to U.S. Vice President Dick Cheney.

Preventing conflicts of interest, corruption, and collusion

In an era of increasing militarization and congressional corruption, serious reform is necessary. After the WorldCom and Enron scandals of the early 2000s, Congress passed the Sarbanes-Oxley legislation to better regulate business and accounting practices. That act, however, does not address the military-industrial complex specifically and how it can adversely affect American society.
Reform will have to come in the form of legislation specifically designed to define the legal relationship between private defense contractors and the government, as well as the role that American foreign policy plays in the world. Legislation could specifically address:
- Conflicts of interest in campaign financing and the awarding of contracts
- The award of contracts through votes in which individual representatives and senators are identified (not committees)
- Disclosure and transparency at a level which the IRS requires of non-profits
- Competitive bidding of contracts, to include bids from corporations from other countries when on foreign soil
- Disentangling foreign aid from conditions that dictate the suppliers and products for which aid is given
- Principles of foreign policy consistent with domestic policy
- Limitation of executive power in the management of foreign policy

- Dwight D. Eisenhower, Public Papers of the Presidents, 1035-40. 1960.
- Chalmers Johnson, The Sorrows of Empire: Militarism, Secrecy, and the End of the Republic (New York: Metropolitan Books, 2004), 52.
- Chalmers Johnson, The Sorrows of Empire: Militarism, Secrecy, and the End of the Republic (New York: Metropolitan Books, 2004), 55.
- Chalmers Johnson, The Sorrows of Empire: Militarism, Secrecy, and the End of the Republic (New York: Metropolitan Books, 2004), 56.
- Oakland Museum of California, Picture this: Vietnam War/Cold War era. Retrieved September 4, 2008.
- Dallek, Robert. An Unfinished Life: John F. Kennedy, 1917–1963. New York: Little, Brown, 2003. ISBN 0316172383
- Eisenhower, Dwight D. Public Papers of the Presidents. 1035–40. 1960.
- "Dwight D. Eisenhower's Farewell Address to the Nation." In The Annals of America, vol. 18. 1961–1968: The Burdens of World Power, 1–5. Chicago: Encyclopedia Britannica, 1968.
- Gottlieb, Sanford. Defense Addiction: Can America Kick the Habit? Boulder, CO: Westview Press, 1997. ISBN 978-0813331201
- Hartung, William D. "Eisenhower's Warning: The Military-Industrial Complex Forty Years Later." World Policy Journal 18(1). Retrieved October 27, 2014.
- Johnson, Chalmers. The Sorrows of Empire: Militarism, Secrecy, and the End of the Republic. New York: Metropolitan Books, 2004. ISBN 0805070044
- Kurth, James. "Military-Industrial Complex." In The Oxford Companion to American Military History, edited by John Whiteclay Chambers II, 440–42. Oxford & New York: Oxford University Press, 1999. ISBN 978-0195071986
- Nelson, Lars-Erik. "Military-Industrial Man." In New York Review of Books 47(20): 6.
- Nieburg, H. L. In the Name of Science. Quadrangle Books, 1970. ASIN B00005W63K
- Singer, P. W. Corporate Warriors: The Rise of the Privatized Military Industry. Ithaca, NY: Cornell University Press, 2007. ISBN 978-0801474361

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
<urn:uuid:bf9ff9fb-52d8-43dc-9b37-dac19bde4ec4>
CC-MAIN-2021-43
https://www.newworldencyclopedia.org/entry/Military-industrial_complex
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00349.warc.gz
en
0.934478
2,809
2.65625
3
Illegal dumping, also called fly dumping or fly tipping, is the unauthorized dumping of waste instead of using an authorized method such as kerbside collection or an authorized rubbish dump. It is the illegal deposit of any waste onto land, including waste dumped or tipped on a site with no license to accept waste. The United States Environmental Protection Agency developed a "profile" of the typical illegal dumper. Typical offenders include local residents, construction and landscaping contractors, waste removers, scrap yard operators, and automobile and tire repair shops. Illegal dumping is typically distinguished from littering by the type and amount of material and/or the manner in which it is discarded. An example of littering could be throwing a cigarette on the ground. However, emptying a rubbish bin without permission in a public or private area can be classified as illegal dumping.

Types of materials dumped

Illegal dumping involves the unauthorized disposal of numerous types of waste. Typical materials dumped include building materials from construction sites, such as drywall, roofing shingles, lumber, brick, concrete, and siding. Other frequently dumped materials include automobile parts, household appliances, household waste, furniture, yard scraps, and medical waste.

Causes of illegal dumping

The reasons people illegally dump vary; however, research indicates that a lack of legal waste disposal options is a primary factor. A shortage of legal disposal options drives demand for waste removal services, increasing prices. Studies have also found that unit pricing, which involves charging a set price per bag of garbage thrown out, may contribute to illegal dumping. Although the intent of unit pricing is to encourage people to use other forms of waste disposal such as recycling and composting, people may turn to disposing of waste in unauthorized areas to save money. Additionally, weak enforcement of laws prohibiting illegal dumping and a lack of public awareness regarding the environmental, health, and economic dangers of illegal dumping also contribute.

Effects of illegal dumping

Effects of illegal dumping include health, environmental, and economic consequences. While legal waste disposal locations, such as landfills, are designed to keep waste and its byproducts from infiltrating the surrounding environment, illegal dumping areas do not typically incorporate the same safeguards. Due to this, illegal dumping may sometimes lead to pollution of the surrounding environment. Toxins or hazardous materials infiltrating soil and drinking water threaten the health of local residents. Additionally, illegal dump sites that catch fire pollute the air with toxic particles. Environmental pollution due to illegal dumping causes short-term and long-term health issues. Short-term issues include asthma, congenital illnesses, stress and anxiety, headaches, dizziness and nausea, and eye and respiratory infections. Long-term concerns include cancer and kidney, liver, respiratory, cardiovascular, brain, nervous, and lymphohematopoietic diseases. Beyond negative health outcomes due to pollution and toxic waste, illegal dumps pose a physical threat. Unstable piles of material and exposed nails threaten harm to humans, specifically children who may be attracted to illegal dumps as play areas. Illegal dumps also attract vermin and insects.
Tires, a material frequently dumped illegally because most municipalities ban their disposal in landfills, provide an ideal breeding ground for mosquitoes due to the stagnant water that collects inside them. Mosquitoes transfer life-threatening diseases, such as encephalitis and West Nile virus, to humans. Materials disposed of in illegal dumps, specifically tires and electronic waste, are combustible. Outbreaks of fire at illegal dump sites can lead to forest fires, causing erosion and destroying habitat. Illegal dumping also negatively affects surrounding property values. Unattractive and odorous accumulations of waste discourage commercial and residential developers from improving communities. Additionally, existing residents may have difficulty "taking pride" in their neighborhoods. In addition to decreasing property values and, therefore, tax revenue for governments, illegal dumping costs governments millions of dollars in clean-up expenses. In the United Kingdom, the Environmental Protection Agency spends £100–150 million annually to investigate and clean up illegal dump sites. The United States Environmental Protection Agency estimates costs of several million dollars each year nationwide.

How to combat illegal dumping

Efforts to combat illegal dumping vary in each situation, as solutions are crafted with specific community dynamics in mind. However, common approaches include a combination of limiting access to illegal dumping sites, surveillance, enforcement, and increasing access to legal waste disposal opportunities. Listed below are common techniques employed by governing bodies. The majority of illegal dumpers engage in illicit waste disposal at night, as darkness helps them avoid detection. Installing lighting around known or potential illegal dumping sites deters the practice. In Canada Bay, New South Wales, the city installed solar-powered lights in dumping "hot spots". Following installation of the lights, the city received fewer complaints regarding illegal dumping in those areas. Other methods of limiting access include re-landscaping and beautifying illegal dump sites. Adding aesthetic amenities such as grass, flowers, and benches demonstrates that the site is well maintained, discouraging dumpers. Additionally, increasing community use of the area will adjust locals' perception of the site from dumping ground to valued open space. Adding barriers such as fencing, rocks, locked gates, and concrete blocks prevents offenders from accessing dump sites with their vehicles, completely deterring illegal dumping or reducing the volume of disposed materials. For example, Maitland, New South Wales, erected fences around rural dumping sites, preventing vehicles from gaining access. Continued monitoring 12 months later showed that 80% of dump sites protected by the fences experienced negligible illegal waste disposal activity.

Increase surveillance and enforcement

Increasing offenders' risk of getting caught is also a way to combat illegal dumping. The most common way to accomplish this is through surveillance measures, such as video cameras. Camera footage can help law enforcement officials identify dumpers while also collecting data on peak dumping periods. Installation of fake cameras has also been shown to be a deterrent. Police patrols, helicopter and plane surveillance, and community surveillance are also options for increasing risk.
Police presence generally deters illegal activity, while community surveillance depends upon residents reporting known illegal dumpers to law enforcement for a monetary reward. The cities of Los Angeles, Sacramento, and Oakland all implement similar reporting schemes. Cities can implement periodic compliance campaigns, which involve randomly conducted "crackdowns" by law enforcement. Increased police patrols, anti-dumping signage posted at known illegal disposal sites, random inspections of property, and publicity regarding convicted illegal dumpers and the use of surveillance can deter illegal dumping. Removing illegal dumpers' reasons for improperly disposing of waste is also an option for governing bodies. Offenders often dump to save money. Cities can offer free or subsidized waste services to residents to encourage legal disposal. If free and subsidized programs are not feasible due to funding limitations, cities must ensure the affordability of waste disposal services. Offering alternate disposal options like recycling and compost centers is also recommended. Issuing fines or assigning liability for clean-up costs to those caught illegally disposing of waste can also act as a deterrent.

Combating illegal dumping also involves promoting legal waste disposal avenues. Offering kerbside collection and improving waste storage in high-density residential areas provide residents with convenient trash disposal options. Communication of available services is important to the success of such programs. Offering similar accommodations for commercial and industrial waste generated by office buildings, restaurants, schools, and factories will also decrease instances of illegal dumping. Cities can also combat illegal dumping by offering disposal options for materials and substances banned from landfills, such as tires, toxic and hazardous waste, and medical waste. The Massachusetts Department of Environmental Protection recommends chipping or shredding tires so that they can be recycled into other uses such as highways, playgrounds, and running tracks. The United States Environmental Protection Agency recommends disposing of household hazardous and toxic waste at the nearest community drop-off location. For example, Boston, Massachusetts, holds drop-off days four times per year. Similar rules apply to the disposal of medical waste. In Boston, officials recommend storing syringes in sharps containers and disposing of them at a designated community site. The city also recommends utilizing mail-back services to dispose of used syringes. City governments can implement education campaigns to further mitigate illegal dumping. For example, cities can inform residents and businesses of legal waste disposal avenues through mailed flyers, newspaper and radio announcements, and posters. Posting signs near known illegal dumping sites can also help deter offenders.

Cleaning up existing dumps

According to the United States Environmental Protection Agency, waste attracts more waste. Therefore, cleaning up existing illegal dumps is a helpful deterrent to additional illegal dumping. The United States Environmental Protection Agency instituted a program to cap open dumps in tribal communities. Some 1,100 of these dumps exist in the United States and pose health and environmental risks to the surrounding communities. The open dumps are closed off with a clay liner and a soil depth that accounts for infiltration and erosion.
"Native dryland grass" is planted on top of the newly covered dump to prevent erosion, and water monitoring wells are installed nearby.

Illegal dumping in Campania, Italy

The so-called Triangle of Death in Campania, Italy, is Europe's "largest illegal waste dump". The area, which encompasses the Italian municipalities of Acerra, Marigliano, and Nola, experiences illegal waste disposal practices by the Camorra, such as the unauthorized burying of toxic waste under places frequented by humans. Frequent fires at dumping sites and illegal waste fires set by residents have resulted in contamination of the air and drinking water. Additionally, the land has deteriorated due to the illegal waste. The environmental pollution caused by the illegal dumping has resulted in elevated instances of cancer and cancer mortality in the region. In 2014 and 2015, the Italian government funded health screenings to track the rise in illnesses in Campania. Studies conducted using the data collected from these screenings found elevated instances of leukemia, lymphoma, and colorectal and liver cancer mortality in one of Campania's districts, and attributed this increase in cancer and cancer mortality to toxic exposures from the illegal waste.

Electronic waste in China

Illegal dumping of electronic waste, or e-waste, presents environmental and health concerns in China. The informal e-waste sector recycles the majority of e-waste in China, which is supplied through consumption, importation, and production. Foreign governments often send e-waste to China as the informal sector offers cheaper recycling services. China is not only the "largest e-waste dumping site", it also generates large amounts of e-waste. In 2006, China produced 1.3 kg of e-waste per capita. The informal e-waste sector lacks formal government oversight and pays its workers low wages while using recycling practices that expose both workers and the environment to toxic materials. Toxic substances are found in leachates, particulate matter, ashes, fumes, wastewater, and effluents generated during dumping, dismantling, and burning throughout the recycling process. Emitted particles are carried through the air and deposited near recycling centers and in surrounding areas. Leachates and wastewater infiltrate the soil, drinking water, livestock, and fish, exposing humans to toxic substances. In recent years, China has begun to address the informal e-waste sector. At the governmental level, improvements have been made to waste management practices through the adoption of management schemes such as those found in Japan, the United States, and the European Union. Additionally, the Chinese government has invested in improved e-waste collection and processing. Locally, various Chinese cities have constructed "recycling industrial parks" where e-waste can be processed efficiently and without harm to the environment. Regulations on e-waste have been implemented in the Chinese regions of Beijing, Shanghai, Jiangsu province, Zhejiang province, and Guangdong province.

Rubbish disposal in the UK is heavily regulated, with most households having on average one 240 litre bin for recyclable waste and one similar bin for non-recyclable waste every week; some areas have additional similar or smaller bins for garden, food, or specific recycling waste. Any large rubbish – e.g. old furniture and mattresses – may need to be taken to the local waste depot by the homeowner at their own expense, although many councils will collect certain items for free, or for a small fee.
This leads some people to simply leave their waste in open public spaces or untended public gardens, a practice known as fly-tipping. In addition, commercial or industrial users may fly-tip to avoid waste-handling charges, as do unofficial and unlicensed waste disposal firms. Taxes on landfill in the UK have also led to illegal waste dumping. Materials illegally disposed of can range from green waste and domestic items to abandoned cars and construction waste, much of which may be hazardous or toxic. As the cost of disposing of household rubbish and waste increases, so does the number of individuals and businesses that fly-tip, and the UK government has made it easier for members of the public to report fly-tipping. The fine or punishment is normally set by the local council for the area in which the rubbish was dumped. According to the BBC, fly-tipping costs councils in England and Wales more than £50m annually (2016).

Open dumps

Open dumps are locations where illegally dumped, abandoned piles of waste and debris accumulate in noticeable quantities. Fines are a common punishment for a person caught dumping at an open dump. Open dumps are commonly found in forests, backyards and abandoned buildings. They are sometimes removed shortly after they are created, but most persist for an indefinite period when the site is situated in the wilderness or in public space without adequate public services. One definition describes an open dump as "... a multi-family dumpsite of any size or content." Open dumping is illegal under the Resource Conservation and Recovery Act (RCRA). The hazards of open dumping can include the release of toxic substances and heavy metals into the air and water; the increased presence of disease vectors such as rodents and insects; and physical hazards such as hypodermic needles, poisonous gases, and/or piercing objects.
- "Bureau of Street Services - Illegal dump report form". bss.lacity.org. Retrieved 2018-06-12. - "Illegal Dumping - City of Sacramento". www.cityofsacramento.org. Retrieved 2018-06-12. - "Get a Reward for Reporting Illegal Dumping | City of Oakland". www.oaklandca.gov. Retrieved 2018-06-12. - "Waste tire management" mass.gov. Retrieved 2018-06-24. - EPA,OSWER, US. "Household Hazardous Waste (HHW) | US EPA". US EPA. Retrieved 2018-06-25. - "Get rid of household hazardous waste". Boston.gov. Retrieved 2018-06-25. - "Proper Use and Disposal of Waste and Syringes" mass.gov. Retrieved 2018-06-24. - EPA,OSWER, US. "Tribal Waste Management Program | US EPA". US EPA. Retrieved 2018-06-12. - Senior, Kathryn; Mazza, Alfredo (September 2004). "Italian "Triangle of death" linked to waste crisis". The Lancet Oncology. 5 (9): 525–527. doi:10.1016/s1470-2045(04)01561-x. ISSN 1470-2045. - Chi, Xinwen; Streicher-Porte, Martin; Wang, Mark Y.L.; Reuter, Markus A. (April 2011). "Informal electronic waste recycling: A sector review with special focus on China". Waste Management. 31 (4): 731–742. doi:10.1016/j.wasman.2010.11.006. ISSN 0956-053X. PMID 21147524. - Sepúlveda, Alejandra; Schluep, Mathias; Renaud, Fabrice G.; Streicher, Martin; Kuehr, Ruediger; Hagelüken, Christian; Gerecke, Andreas C. (January 2010). "A review of the environmental fate and effects of hazardous substances released from electrical and electronic equipments during recycling: Examples from China and India". Environmental Impact Assessment Review. 30 (1): 28–41. doi:10.1016/j.eiar.2009.04.001. ISSN 0195-9255. - "Fly Tipping in the United Kingdom". Bournemouth Echo. 17 May 2017. Retrieved 18 May 2017. - "Report fly-tipping or illegal waste dumping". Gov.uk. 2017-04-06. Retrieved 2017-04-11. - "'It fell off the back of the van' - fly-tipping excuses". Bbc.com. Retrieved 2017-04-11. - "Solid Waste Management | Pacific Southwest: Waste Programs | US EPA". Epa.gov. 2009-12-16. Retrieved 2017-04-11. |Wikimedia Commons has media related to Illegal dumping.|
Rhyming songs, proverbs, riddles and games for children present fun ways to play with words and meaning, to develop language learning, and to remember and pass on culture and history from generation to generation. They also provide opportunities to introduce young children to cultures other than their own. Following is a comparison of two children's books that contain rhymes for young children. We highly recommend one of them and do not recommend the other.

La Madre Goose: Nursery Rhymes for los Niños
author: Susan Middleton Elya
illustrator: Juana Martinez-Neal

La Madre Goose is Elya's reworking of 18 classic Mother Goose "nursery" rhymes, with Spanish words thrown in. The idea seems to have been to introduce Spanish words to English-speaking children who are already familiar with these rhymes; and if there's any appeal at all, it's to parents looking to add Spanish words to their young children's vocabularies. But notwithstanding its subtitle, "Nursery Rhymes for los Niños," this volume was not intended for Spanish-speaking youngsters.

Tossing a few Spanish words or phrases into European nursery rhymes does not transform them into something that's "multicultural" or "bilingual" or "code switching" or anything other than opportunistic and appropriative. The result of what might have been envisioned as creative and fun wordplay is a hodgepodge of phrases that make no sense in Spanish or English. La Madre Goose is confusing, not to mention insulting, to young hablantes—the Spanish-speaking children who are never considered in this kind of project.

In addition, classic Mother Goose "nursery" rhymes were never meant for the nursery. Rather, they were hidden political commentary disguised as children's rhymes. Take, for instance:

Peter, Peter pumpkin eater
had a wife but couldn't keep her.
He put her in a pumpkin shell,
and there he kept her very well.

This poem is about infidelity and murder. The gruesome message here is that "Peter" couldn't "keep" his wife at home because she kept running away to have numerous encounters with other men, so he killed her and hid her body in a ludicrously large pumpkin shell, in which he "kept" her "very well." Here is Elya's version, which she entitles "Peter, Peter Calabasa," and which is not only sanitized, but doesn't make sense at any level:

Peter, Peter Calabasa,
got a wife for his new casa.
When she saw the round casita,
she repainted it—bonita!

Peter Pumpkin got a wife for his new house, which she repainted. Wouldn't he have gotten a house for his wife? Does Peter consider his wife like a piece of furniture? If Elya had something in mind beyond rhyming when she created this, it's hard to tell.

Then there's the famous "Baa, Baa, Black Sheep." One of the first versions reads:

Baa, baa, black sheep,
have you any wool?
Yes, sir, yes, sir,
three bags full.
One for the master,
One for the dame,
And one for the little boy
Who cries down the lane.

In this poem that originated in medieval times, the three bags of wool are said to represent one-third for King Edward I (the "master"), one-third for the Church of England (the "dame"), and the rest (after he'd paid the 66% wool tax) for the poor shepherd, whose children were probably starving ("the little boy who cries down the lane"). And here is Elya's version, "Baa, Baa, Black Oveja":

Baa, baa, black oveja,
have you any lana?
Sí, sir, sí, sir,
three bags llenas.
One for my sister,
una for mi madre,
and one to be shared
by my brother and mi padre!

What is this? Is Elya's poem about the necessity of sharing with your family?
If so, then why isn't the wool divided into four equal parts, instead of three? Why do sister and mom each have one bag full, while brother and dad have to divide one between them? Is this a lesson about fractions? Or about making amends for centuries of inequality? Or is it possible that Elya's sheep is distributing her own wool—in bags—to her own family?

While Elya's gratingly awkward combinations of English and Spanish don't make any linguistic or cultural sense, Martinez-Neal's luminous artwork makes this book lovely to look at. The softness of her warm, gentle paintings, on an earthy palette of blues and teals, beiges and yellows, highlighted by purples and oranges, appears to be a true labor of love. In describing the collage-mixing process for her debut picture book, Martinez-Neal told me that she selected different papers with a variety of textures and additionally hand-textured them, using matte medium as a glue that dries clear and soaks in the color, adding and drying one layer at a time. She then painted in her images in acrylic, colored pencil and graphite—or sometimes all acrylics—which together add a gouache feeling; finally, she brushed on a white layer to soften the images. Martinez-Neal developed this technique slowly, by "organically experimenting," she said, and she makes changes in every book because she often "gets bored." The results here are depictions, many in groups, of chubby-cheeked multiethnic children and cuddly animals with childlike expressions to whom the youngest kids can relate. Martinez-Neal's own babies, she told me, were models for some of the children here.

It's unfortunate that Martinez-Neal's beautiful artwork cannot save this contrived, formulaic and unimaginative book. La Madre Goose: Nursery Rhymes for los Niños is not recommended.

Mamá Goose: A Latino Nursery Treasury / Un Tesoro de Rimas Infantiles
authors: Alma Flor Ada and F. Isabel Campoy
illustrator: Maribel Suárez
editor (English): Tracy Heffernan
Hyperion Books for Children, 2004

Mamá Goose: A Latino Nursery Treasury / Un Tesoro de Rimas Infantiles is everything that La Madre Goose: Nursery Rhymes for los Niños is not. Ada and Campoy planned the sections together and then chose the selections in this extensive compendium, which contains lullabies, finger games, lap games, sayings, nursery rhymes, jump-rope songs, song games, proverbs, riddles, tall tales, and much more (including an outrageous ballad about a dead cat—"El señor don Gato"—who is brought back to life by the smell of sardines). The entire volume is lovingly compiled and beautifully illustrated with exuberant children and animals, and the poetic Spanish is "creatively edited" into English. Each section has a thoughtful and educational introduction and, although I would like to have seen a note about each piece's cultural origin, it's clear that great inspiration, planning, talent and execution went into every aspect of this project.

Suárez's deceptively simple watercolor illustrations, on a bright, earthy palette of mostly browns, greens, blues and golds, portray multicultural and multiethnic children and their relatives, and the design elements reflect the diversity of Latin American and Spanish traditions as well. For instance, young readers will see children and adults in Peruvian, Mexican—and even medieval Spanish—dress; there are Mexican adobe houses, Cuban and Puerto Rican tropical beaches, and huge Spanish edifices, along with borders that feature Spanish tiles, Mexican piñatas and papel picado, just to name a few.
And children will love the illustrations of smiling animals behaving in species-anomalous ways—such as carefree rabbits playing and chasing carrots in the sea, while smiling fish happily cavort on dry ground. Colorful chapter headings and borders, along with white space that leaves lots of room for both the Spanish and English texts as well as the illustrations, make for an attractive, uncrowded, child-friendly design.

One thing rarely portrayed in this kind of anthology for children—and something that deserves special mention—is the depiction of a variety of social classes, some in poems that abut each other. Here, for instance, are two upper-class girls and a boy in Spain. The girls, in red dresses and mary jane flats, carry white parasols, and the boy, in an expensive-looking suit and hat, carries his schoolbooks. They are on "el paseíto de oro"—the golden path—without a care in the world. Here is a poor vendor ("la carbonerita"), also in Spain, wheeling a heavy coal-filled cart. Her face and apron covered with soot, she asks the reader, "How can you expect me to keep my face clean, if I am a coal-seller?" And here is a hard-working young servant girl, who washes, irons, cooks, cleans, sews, and sweeps—until Sunday, when she can go out to play.

Since Ada and Campoy selected pieces recognized all over the Spanish-speaking world, it seemed natural for them to offer the Spanish versions first—which encourages Spanish-speaking parents and grandparents (as well as teachers) to share their own remembered rhymes and stories with their young hablante relatives and students. As well, following the Spanish pieces with those that have been "creatively edited" in English—and set off in italics—will have English-speaking children enjoying them as well and, by looking over to the Spanish, learning some Spanish and connecting with cultures other than their own.

A particular rima or canción brings back memories of when my child was a toddler. Next to a brightly colored full-page painting of a green mama frog in a blue dress comforting her youngster who has fallen off a tricycle, the rima or canción reads:

dame un besito
y vete a la cama.

Send the pain
Down the road.

I remember often laying a hand over my own son's scraped knee and singing,

Sana, sana,
colita de rana,
si no sanas hoy,
sanarás mañana.

Little frog's tail.
If you don't heal today,
You'll heal tomorrow.

Until one day (mid-"sana, sana"), he interrupted me and said, "It's OK, mom, I'll just go get a bandaid."

Unfortunately, the introductions for each section appear in English only. This is an issue that publishers frequently justify with design-space limitations and the assumption that "secondary" languages (although, in this case, Spanish is the primary language) are unnecessary in prefatory material. In education, while language equality is improving, la lucha continúa.

Mamá Goose: A Latino Nursery Treasury / Un Tesoro de Rimas Infantiles is indeed a treasure. It's highly recommended.

Mil gracias a mi amiga y colega, María Cárdenas.
Morning Star Travels takes you to Hyderabad, the capital of the Indian state of Andhra Pradesh. The city also goes by the sobriquet "City of Pearls". It is the largest city in Andhra Pradesh and the sixth largest in India, with a population of 6.38 million. Hyderabad was founded by Muhammad Quli Qutb Shah in 1591 on the banks of the Musi river. Today the city covers an area of approximately 621.28 km². It has been classified as an A-1 city in terms of development priorities, owing to its size, population and impact. The twin cities of Hyderabad and Secunderabad come under the ambit of a single municipal unit, the Greater Hyderabad Municipal Corporation. Hyderabad has developed into one of the major hubs of the information technology industry in India, which has earned it the additional sobriquet "Cyberabad". In addition to the IT industry, various biotechnology and pharmaceutical companies have set up operations in Hyderabad. You can visit the city with the well-known operator Morning Star Travels.

Hyderabad is home to the Telugu film industry, the second largest in India, popularly known as Tollywood. Residents of Hyderabad are generally called Hyderabadis. Located at the crossroads of North and South India, Hyderabad has developed a unique culture, reflected in its language and architecture. Situated on the Deccan Plateau, the city has an average elevation of about 536 metres (1,607 ft) above sea level. Most of the area has rocky terrain and some areas are hilly; crops are commonly grown in the surrounding paddy fields.

The original city of Hyderabad was founded on the banks of the river Musi. Now known as the historic Old City, home to the Charminar and Mecca Masjid, it lies on the southern bank of the river. The heart of the city later shifted north of the river, with the construction of many government buildings and landmarks there, especially south of the Hussain Sagar lake. The rapid growth of the city, along with the merging of Hyderabad, 12 municipal circles and the Cantonment, has resulted in a large, united and populous area, and many nearby villages are expected to merge into the twin cities in the near future.

Hyderabad has a tropical wet and dry climate that borders on a semi-arid climate, with hot summers from late February to early June, a monsoon season from late June to early October, and a pleasant winter from late October to early February. Mornings and evenings are generally cooler because of the city's elevation. Hyderabad gets about 32 inches (about 810 mm) of rain every year, almost all of it concentrated in the monsoon months. The highest maximum (day) temperature ever recorded was 45.5 °C (113.9 °F) on 2 June 1966, while the lowest minimum (night) temperature recorded was 6.1 °C (43 °F) on 8 January 1946. You can book your ticket through Morning Star Travels.

Hyderabad is the financial, economic and political capital of the state of Andhra Pradesh. The city is the largest contributor to the state's gross domestic product, state tax and excise revenues. Hyderabad ranks 93rd (as of 2008) in the list of the richest cities in the world by GDP (PPP), with US$60 bn, and sixth in India. In terms of GDP per capita (PPP), Hyderabad ranks 4th in India with US$6,428. Workforce participation is about 29.55%.
Starting in the 1990s, the city's economic pattern changed from being primarily a service economy to a more diversified one, including trade, transport, commerce, storage and communication. The service industry remains the major contributor, with the urban workforce constituting 90% of the total workforce. Hyderabad was ranked the second-best Indian city for doing business in 2009. Hyderabad is known as the city of pearls and lakes and, lately, for its IT companies. The bangle market known as Laad Bazaar is situated near the Charminar. Products such as silverware, saris, Nirmal and Kalamkari paintings and artifacts, unique Bidri handcrafted items, lacquer bangles studded with stones, silkware, cottonware and handloom-based clothing materials have been made and traded in the city for centuries. Hyderabad is also a major centre for pharmaceuticals, with companies such as Dr. Reddy's Laboratories, Matrix Laboratories, Hetero Drugs Limited, Divis Labs, Aurobindo Pharma Limited, Lee Pharma and Vimta Labs housed in the city. Initiatives such as Genome Valley, Fab City and the Nano Technology Park are expected to create extensive infrastructure in biotechnology. These places can be visited through Morning Star Travels.

Hyderabad's local speech is a mix of Urdu, Hindi, Telugu and Marathi, which together create Hyderabadi. A set of movies has been made in this language, starting with Ankur, followed by Hyderabad Blues, The Angrez, Hyderabad Nawaabs, Hyderabadi Bakra, Hungama in Dubai, Half Fry, FM - Fun Aur Masti, Aadab Hyderabad, Salam Hyderabad, Kal Ka Nawab, Thriller the Movie and Gullu Dada Returns.

The Andhra Pradesh State Road Transport Corporation runs a fleet of 19,000 buses, the largest in the world. Hyderabad has the third-largest bus station facility in Asia, with 72 platforms allowing 89 buses to load passengers at a time. Officially named the Mahatma Gandhi Bus Station, it is locally known as the Imlibun Bus Station. The Jubilee Bus Station at Secunderabad runs buses to various parts of the state and to some parts of South India. You can also avail of transportation through Morning Star Travels. Book online bus tickets to Hyderabad with Morning Star Travels.

Morning Star Travels also takes you to Pune, also known as 'Punawadi' or Punya-Nagari, the eighth-largest city and metropolitan area in India and the second largest in the state of Maharashtra, after Mumbai. Once the centre of power of the Maratha Empire, situated 560 metres above sea level on the Deccan plateau at the confluence of the Mula and Mutha rivers, Pune is the administrative capital of Pune district. Pune is known to have existed as a town since 937 AD. Chhatrapati Shivaji Maharaj, the founder of the Maratha Empire, lived in Pune as a young boy and later oversaw significant growth and development of the town during his reign. In 1730, Pune became an important political centre as the seat of the Peshwa, the prime minister of the Chhatrapati of Satara. After the town was annexed to British India in 1817, it served as a cantonment town and as the "monsoon capital" of the Bombay Presidency until the independence of India.
Today, Pune is known for its educational facilities, with more than a hundred educational institutions and nine universities. Pune is primarily a Hindu city and temples can be seen all over it; the people of Pune are religious and proud of their religion and of their language, Marathi. Pune has had well-established manufacturing, glass, sugar and forging industries since the 1950s and 1960s. It has a growing industrial hinterland, with many information technology and automotive companies setting up factories in Pune district. The city is known for cultural activities such as classical music, spirituality, theatre, sports and literature. These activities and job opportunities attract migrants and students from all over India and abroad, which makes for a city of many communities and cultures.

Beaches near Pune

Diveagar is among the few beaches of the Konkan coast within reach of Pune. It is approximately 180 km from Pune, and it takes around 5 hours to reach. This is a very beautiful beach: a small village surrounded by greenery. It is better to go for two days (with a one-night stay). Good vegetarian and non-vegetarian food is available; don't forget to try the vegetarian food of Mr Bapat, whose speciality is modak (a delicious sweet). You can visit Pune with Morning Star Travels.

Situated on the outskirts of Pune, ahead of Ambrosia and around 5 km from Pashan, is a beautiful bird sanctuary. It is the private collection of Dr. Suhas Jog, who has gathered the birds over a period of 30 years from different parts of the world. This aviary-cum-bird-research centre houses some of the most unique and beautiful species of birds, and one cannot help but feel overwhelmed by their beauty. Photography is not allowed inside the park, and nothing can replace the joy of seeing the birds in person. This is one of the places that can be visited in Pune via Morning Star Travels.

Apart from these, you can also visit other places in Pune:

Raja Dinkar Kelkar Museum in Pune
Raja Dinkar Kelkar Museum is housed in a quaint Rajasthani-style building. It holds a one-man collection of the most fascinating Indian artifacts. Thirty-six sections of the museum display a plethora of antiques, carved palace doors, pottery, a priceless collection of lamps and musical instruments of the Mughal and Maratha periods. A masterpiece is the 'Mastani Mahal', brought and erected as it was from its original place!

Bal Gandharv Mandir in Pune
The home of Marathi theatre, both commercial and experimental. Throughout the year there are different cultural happenings such as exhibitions, theatre, and orchestra (instrumental and vocal)...

Tilak Smarak Mandir in Pune
Tilak Smarak Mandir on Tilak Road is a building commemorating the great freedom fighter and social reformer Lokmanya Tilak. On the ground floor is a small museum describing Tilak's public life, and there is a theatre on the upper floors.

All of the above places in Pune can be visited with Morning Star Travels. Book online bus tickets to Pune with Morning Star Travels.
As you walk the beach in Pemuteran, a tiny fishing village on the northwest coast of Bali, Indonesia, be careful not to trip on the power cables snaking into the turquoise waves. At the other end of those cables are coral reefs that are thriving with a little help from a low-voltage electrical current. These electrified reefs grow much faster, backers say. The process, known as Biorock, could help restore these vital ocean habitats at a critical time. Warming waters brought on by climate change threaten many of the world’s coral reefs, and huge swaths have bleached in the wake of the latest El Niño. Skeptics note that there isn’t much research comparing Biorock to other restoration techniques. They agree, however, that what’s happening with the people of Pemuteran is as important as what’s going on with the coral. Dynamite and cyanide fishing had devastated the reefs here. Their revival could not have succeeded without a change in attitude and the commitment of the people of Pemuteran to protect them. Pemuteran is home to the world’s largest Biorock reef restoration project. It began in 2000, after a spike in destructive fishing methods had ravaged the reefs, collapsed fish stocks and ruined the nascent tourism industry. A local scuba shop owner heard about the process and invited the inventors, Tom Goreau and Wolf Hilbertz, to try it out in the bay in front of his place. Herman was one of the workers who built the first structure. (Like many Indonesians, he goes by just one name.) He was skeptical. “How (are we) growing the coral ourselves?” he wondered. “What we know is, this belongs to god, or nature. How can we make it?” A coral reef is actually a collection of tiny individuals called polyps. Each polyp lays down a layer of calcium carbonate beneath itself as it grows and divides, forming the reef’s skeleton. Biorock saves the polyps the trouble. When electrical current runs through steel under seawater, calcium carbonate forms on the surface. (The current is low enough that it won’t hurt the polyps, reef fish or divers.) Hilbertz, an architect, patented the Biorock process in the 1970s as a way to build underwater structures. Coral grows on these structures extremely well. Polyps attached to Biorock take the energy they would have devoted to building calcium carbonate skeletons and apply it toward growing, or warding off diseases. Hilbertz's colleague Goreau is a marine scientist, and he put Biorock to work as a coral-restoration tool. The duo says that electrified reefs grow from two to six times faster than untreated reefs, and survive high temperatures and other stresses better. Herman didn’t believe it would work. But, he says, he was “just a worker. Whatever the boss says, I do.” So he and some other locals bought some heavy cables and a power supply. They welded some steel rebar into a mesh frame and carried it into the bay. They attached pieces of living coral broken off other reefs. They hooked it all up. And they waited. Within days, minerals started to coat the metal bars. And the coral they attached to the frame started growing. “I was surprised,” Herman says. “I said, damn! We did this!” “We started taking care of it, like a garden,” he adds. “And we started to love it.” Now, there are more than 70 Biorock reefs around Pemuteran, covering five acres of ocean floor. But experts are cautious about Biorock’s potential. “It certainly does appear to work,” says Tom Moore, who leads coral restoration work in the U.S. 
Caribbean for the National Oceanic and Atmospheric Administration. However, he adds, “what we’ve been lacking, and what’s kept the scientific community from embracing it, is independent validation.” He notes that nearly all the studies about Biorock published in the scientific literature are authored by the inventors themselves. And very little research compares growth rates or long-term fitness of Biorock reefs to those restored by other techniques. Moore’s group has focused on restoring endangered staghorn and elkhorn corals. A branch snipped off these types will grow its own branches, which themselves can be snipped and regrown. He says they considered trying Biorock, but with the exponential expansion they were doing, “We were growing things plenty fast. Growing them a little faster wasn’t going to help us.” Plus, the need for a constant power supply limits Biorock’s potential, he adds. But climate change is putting coral reefs in such dire straits that Biorock may get a closer look, Moore says. The two endangered corals his group works on “are not the only two corals in the [Caribbean] system. They’re also not the only two corals listed under the Endangered Species Act. We’ve had the addition of a number of new corals in the last two years.” These slower-growing corals are harder to propagate. “We’re actively looking for new techniques,” Moore adds. That includes Biorock. “I want to keep a very much open mind.” But there’s one thing he’s sure about. “Regardless of my skepticism of whether Biorock is any better than any of the other techniques,” he says, “it’s engaging the community in restoration. It’s changing value sets. [That’s] absolutely critical.” Pemuteran was one of Bali’s poorest villages. Many depend on the ocean for subsistence. The climate is too dry to grow rice, the national staple. Residents grow corn instead, but “only one time a year because we don’t get enough water,” says Komang Astika, a dive manager at Pemuteran’s Biorock Information Center, whose parents are farmers. “Of course it will not be enough,” he adds. Chris Brown, a computer engineer, arrived in Pemuteran in 1992 in semi-retirement. He planned to, as he put it, trade in his pinstripe suit for a wetsuit and become a dive instructor. There wasn’t much in Pemuteran back then. Brown says there were a couple good reefs offshore, “but also a lot of destruction going on, with dynamite fishing and using potassium cyanide to collect aquarium fish.” A splash of the poison will stun fish. But it kills many more, and it does long-lasting damage to the reef habitat. When he spotted fishermen using dynamite or cyanide, he’d call the police. But that didn’t work too well at first, he says. “In those days the police would come and hesitantly arrest the people, and the next day they’d be [released] because the local villagers would come and say, ‘that’s my family. You’ve got to release them or we’ll [protest].’” But Brown spent years getting to know the people of Pemuteran. Over time, he says, they grew to trust him. He remembers a pivotal moment in the mid-1990s. The fisheries were collapsing, but the local fishermen didn’t understand why. Brown was sitting on the beach with some local fishermen, watching some underwater video Brown had just shot. One scene showed a destroyed reef. It was “just coral rubble and a few tiny fish swimming around.” In the next scene, “there’s some really nice coral reefs and lots of fish. 
And I’m thinking, ‘Oh no, they’re going to go out and attack the areas of good coral because there’s good fish there.’” That’s not what happened. “One of the older guys actually said, ‘So, if there’s no coral, there’s no fish. If there’s good coral, there’s lots of fish.’ I said, ‘Yeah.’ And he said, ‘So we’d better protect the good coral because we need more fish.’ “Then I thought, ‘These people aren’t stupid, as many people were saying. They’re just educated differently.’” It wasn’t long before the people of Pemuteran would call the police on destructive fishermen. But sometimes, Brown still took the heat. Once, when locals called the police on cyanide fishers from a neighboring village, Brown says, people from that village “came back later with a big boat full of people from the other village wielding knives and everything and yelling, ‘Bakar, bakar!’ which means ‘burn, burn.’ They wanted to burn down my dive shop.” But the locals defended Brown. “They confronted these other [fishermen] and said, ‘It wasn't the foreigner who called the police. It was us, the fishermen from this village. We’re sick and tired of you guys coming in and destroying [the reefs].’” That’s when local dive shop owner Yos Amerta started working with Biorock’s inventors. The turnaround was fast, dramatic and effective. As the coral grew, fish populations rebounded. And the electrified reefs drew curious tourists from around the world. One survey found that “forty percent of tourists visiting Pemuteran were not only aware of village coral restoration efforts, but came to the area specifically to see the rejuvenated reefs,” according to the United Nations Development Program. The restoration work won UNDP’s Equator Prize in 2012, among other accolades. Locals are working as dive leaders and boat drivers, and the new hotels and restaurants offer another market for the locals' catch. “Little by little, the economy is rising,” says the Biorock Center’s Astika. "[People] can buy a motorbike, [children] can go to school. Now, some local people already have hotels.” Herman, who helped build the first Biorock structure, now is one of those local hotel owners. He says the growing tourism industry has helped drive a change in attitudes among the people in Pemuteran. “Because they earn money from the environment, they will love it,” he says.
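For readers who want a feel for the numbers behind the mineral-accretion process described earlier, here is a rough back-of-the-envelope sketch in Python. It is not drawn from the Biorock project's own engineering figures: the current, run time, and efficiency values are illustrative assumptions, and the calculation simply applies Faraday's law to estimate how much calcium carbonate a given electrical charge could precipitate onto a steel frame.

```python
# Back-of-the-envelope estimate of mineral accretion on an electrified reef frame.
# NOTE: the current, duration, and efficiency below are illustrative assumptions,
# not measured Biorock parameters.

FARADAY = 96_485          # coulombs per mole of electrons
M_CACO3 = 100.09          # g/mol, molar mass of calcium carbonate (aragonite)

def estimate_caco3_kg(current_amps: float, days: float, efficiency: float = 0.3) -> float:
    """Estimate CaCO3 deposited on the cathode.

    At the steel cathode, water reduction produces one hydroxide ion per
    electron. In the simplest accounting, each hydroxide can drive the
    precipitation of at most one CaCO3 unit from dissolved calcium and
    bicarbonate, so charge / Faraday caps the moles of CaCO3. The efficiency
    factor (assumed here) lumps together hydroxide lost to Mg(OH)2 formation,
    diffusion, and side reactions.
    """
    charge = current_amps * days * 86_400        # total charge in coulombs
    moles_electrons = charge / FARADAY
    moles_caco3 = moles_electrons * efficiency
    return moles_caco3 * M_CACO3 / 1000          # kilograms

if __name__ == "__main__":
    # Assumed example: a 5 A supply running for 90 days at 30% efficiency.
    print(f"{estimate_caco3_kg(5, 90, 0.3):.1f} kg of CaCO3")   # roughly 12 kg
```

Even with these invented values, the point of the exercise is the proportionality: the deposited mass scales directly with current and time, which is why the frames must stay cabled to a power supply for months.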
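Moore's remark that growing corals "a little faster wasn't going to help" can also be made concrete with a toy comparison. The figures below are invented purely for illustration: they contrast a nursery that doubles its colony count twice a year by fragmentation with a single colony whose growth rate is boosted four-fold, which is roughly the kind of speed-up Biorock's backers claim.

```python
# Toy comparison (assumed numbers): fragmentation-driven propagation vs. a
# faster-growing single colony. Illustrative only; not data from NOAA or Biorock.

def fragmentation_colonies(years: float, doublings_per_year: float = 2.0) -> float:
    """Colonies produced by repeatedly snipping and regrowing branches."""
    return 2 ** (doublings_per_year * years)

def boosted_biomass(years: float, base_rate: float = 1.0, boost: float = 4.0) -> float:
    """Relative biomass of one colony growing 'boost' times faster (linear model)."""
    return 1 + base_rate * boost * years

if __name__ == "__main__":
    for years in (1, 2, 3):
        print(years, "yr:",
              f"{fragmentation_colonies(years):.0f} colonies by fragmentation vs",
              f"{boosted_biomass(years):.0f}x biomass from a 4x growth boost")
    # After 3 years: 64 colonies vs roughly 13x the starting biomass.
```

Under these assumptions the exponential fragmentation strategy pulls away quickly, which is consistent with Moore's reasoning for fast-growing species, while slower-growing corals that fragment poorly are exactly the cases where a per-colony speed-up could matter.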