Speech on "Computer Viruses"
Submitted to: Kanita Ridwana, Lecturer, Department of English, Stamford University Bangladesh

A computer virus is a software program written intentionally to enter a computer without the user's permission or knowledge. It can replicate itself and interferes with the computer's normal operations. Some viruses do little but replicate, while others can cause severe harm: they may destroy, delete or corrupt your data, or adversely affect the performance of the system. Worms are similar to viruses in that they replicate themselves and spread from computer to computer, but unlike a virus, a worm can travel without any human action. This paper examines the types of computer viruses and their corresponding behaviour, the types of files they usually infect and how they propagate. I will also give my recommendations on how to combat this threat, which anti-virus applications I believe should be used, and why updating the virus definitions for these programs on a regular basis is so vital in your quest to have a happy and uninfected computer.

There are lots of opinions on the date of birth of the first computer virus. Early names from virus history include "Brain" and "Vienna". One early virus made infected computers play the hymn "Yankee Doodle"; by then people were already clever, and nobody tried to fix their speakers. It soon became clear that the problem wasn't with the hardware: it was a virus, and not even a single one, more like a dozen.

Viruses are commonly classified by how and where they infect. A boot sector virus replaces your master boot sector with an infected copy, moving the original boot sector information to another disk sector and marking that sector as a bad sector. A file-infecting virus, as its name suggests, infects files: it works by replacing program instructions with its own, moving the original program instructions to other parts of the file. A memory resident virus hides in memory, where it can keep working even after its host program closes. A multipartite virus combines the behaviour of file and boot sector infectors, which makes it one of the hardest types to detect. A polymorphic virus changes its appearance each time a new infection is produced, so it is also very hard to detect. Web scripting viruses spread through malicious code embedded in web pages.

Two examples show how destructive, and how strange, viruses can be. The Iloveyou virus, a file-infecting worm that spread in May 2000, mailed copies of itself to everyone in the receiver's address book; its damage included overwriting image files, and it affected both big and small organizations. At the other extreme, according to "Virus Bulletin," the Oxfordshire, England-based technical journal that tracks viruses, one Windows 98 desktop graphics virus flips any uncompressed bitmaps horizontally, but only on Saturdays, while antivirus companies scramble to code a cure.

A virus should never be assumed harmless and left on a system. Effective anti-viruses are available which help to detect and remove viruses that may attack your computer, but they only work if their virus definitions are kept up to date, because new viruses arrive with every new technology. The most common ways infections spread are file sharing and ordinary use of the internet, so be extra cautious when opening emails and files: viruses can be passed from computer to computer without the owners knowing they are passing them along. If you are using Windows XP, always turn on the firewall so that incoming internet and network traffic is screened, and install Windows updates and service packs as they are released. As they say, prevention is better than cure.

In the workplace, Jacobson suggests the following considerations on what type of security fits your company: 1) if most of your employees work with computers, then one should set up extensive security; 2) if there is constant data sharing, either manually or through a network, then extensive security of data is a must; 3) what is the physical distance of your computers from one another? and 4) is your operation fast paced? In addition, a centrally managed virus-defense system must be present, kept up to date, and its users must be educated about the latest threats. Outsourcing this protection to a security company, as a whole package including the trainings and staff development needed, can cost less than having the company hire its own skilled anti-virus workers, and companies relying heavily on computers for their operations will realize financial benefits from implementing such a system.

Armed with the knowledge to avoid getting and spreading virus infections, you'll not only make the internet safer for yourself, but for everyone else you connect with. Think of this as a driver's manual for the internet superhighway.

References
- An Introduction to Computer Viruses (and Other Destructive Programs). Retrieved from http://www.thehackademy.net/madchat/vxdevl/library/An%20Introduction%20to%20Computer%20Viruses%20(and%20other%20Destructive%20Programs).pdf
- Retrieved from http://www.ijens.org/100403-5959%20IJECS-IJENS.pdf
- IPA Japan (2011)
- McAfee (2011)
- Microsoft Support. Security and Privacy.
- Cable News Network
- Shadel, D. (2017, July 27). AARP The Magazine.
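The essay describes anti-virus programs detecting known viruses through regularly updated virus definitions. As a toy illustration only, and not any real product's method, the simplest form of signature matching compares a file's fingerprint against a database of known-bad hashes; real engines add heuristics, emulation and far larger, constantly refreshed databases. A minimal sketch in Python, with a hypothetical signature set:

```python
import hashlib

# Hypothetical signature database. Real anti-virus definitions contain
# millions of entries and are updated continuously, which is why keeping
# definitions current matters so much. The hash below is simply the
# SHA-256 of empty content, used here as a stand-in "known bad" entry.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_flagged(data: bytes) -> bool:
    """Return True if this exact content matches a known signature."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256
```

A whole-content hash only catches exact copies, which is one reason polymorphic viruses, changing their appearance with each new infection, defeat naive signature matching and force scanners toward more sophisticated techniques.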
InterClinical eNews, July, Issue 108

Characteristics of sodium

In this article we look at the latest research on sodium, including storage, and an individual's ability to absorb and utilise it independent of intake. Sodium is the sixth most abundant element in the earth's crust and the second most abundant element in seawater. An adult human body contains about 250 g of sodium and any excess is naturally excreted by the body. About 85% of sodium is found in the blood and lymphatic fluid. It is an electrolyte, like potassium, calcium and magnesium; it regulates the electrical charges moving in and out of the cells of the body. It is an essential element for human life, involved in the maintenance of normal cellular homeostasis and in the regulation of fluid and electrolyte balance and blood pressure. The presence of sodium ions is essential for the contraction of muscles, including that largest and most important muscle, the heart. It is a cellular activator and is important for the excitability of nerve cells and for the transport of nutrients and substrates through plasma membranes. It controls your taste, smell and tactile processes and is fundamental to the operation of signals to and from the brain. Without sufficient sodium your senses would be dulled and your nerves would not function.

Sodium and diet

In modern medicine there is a generally negative feeling about sodium. It is well known that excessive dietary sodium intake is associated with hypertension, heart disease, osteoporosis and more. On the other hand, one study revealed that the evidence of the benefit of sodium restriction in patients with heart failure was inconclusive.[12] Approximately 10-15% of the population are salt sensitive, meaning that they retain sodium whether they consume large or small amounts. Generally this is due to hyperactive adrenal gland function and a deficiency of key minerals which help protect the body from sodium build-up.

Additionally, there is some debate over the recommended intake of sodium. Some researchers suggest that current recommendations for daily sodium intake may be set too low.[1] It is well known that a diet high in sodium is associated with a higher risk of developing hypertension and cardiovascular disease. However, a meta-analysis of sodium studies and hypertension stated that there is an association between low sodium intake (less than 3 g/day, or about 7.5 g/day sodium chloride) and increased cardiovascular disease and mortality, irrespective of hypertension status. It was proposed that this must indicate that mechanisms unrelated to blood pressure might be at play.[4] This analysis defined a high sodium intake as more than 6 g per day (15 g sodium chloride), whereas low sodium intake was defined as less than 3 g per day (under about 7.5 g per day sodium chloride). Lower sodium levels were correlated with higher rates of heart attack, stroke and death in normotensive individuals. It concluded that 'reducing sodium intake among those with hypertension and high sodium intake was important, but it did not support reducing sodium intake to low levels' for those who were normotensive.[4] In Australia the National Health and Medical Research Council reviewed the recommended sodium intake in 2017. The NHMRC concluded that if the population were to reduce their average sodium intake to 2 g per day instead of the present average of 3.6 g, then desired blood pressure targets would more likely be achieved.[10] The update stated that 'the upper limit was revised from the 2006 UL of 2.3 g/day (5.8 g sodium chloride) to "not determined", reflecting the inability to identify a single point below which there is low risk of an increase in blood pressure.'[10] For a comprehensive determination of risk, any discussion involving sodium intake must involve potassium, magnesium, calcium and other dietary factors. The DASH diet originated in the 1990s as a dietary intervention for treating hypertension.
This diet was successful in lowering blood pressure, and implicated increased potassium intake. A diet high in sodium and low in potassium has been suspected in hypertension. Studies suggest that a diet high in potassium and fibre is associated with a reduced risk of hypertension. The combined effect of high sodium and low potassium on blood pressure seems greater than either one individually.[5] The DASH diet clinical trial examined whether sodium levels in the diet correlate with the energy intake from food. Its findings suggest that blood pressure rose more steeply with an increased sodium diet at lower energy intake than at higher energy intake. This suggests that sodium density may reflect the relationship with blood pressure better than an absolute sodium intake.[5] Differences in sodium retention can also be related to differences in kidney function and aldosterone levels.[6] Tissue sodium levels in the body are partly controlled by the hormone aldosterone, which is made by the adrenal glands. Sodium and potassium levels also indicate adrenal status. Low sodium and potassium relative to calcium and magnesium is an indicator of adrenal insufficiency. Conversely, high tissue levels of sodium and potassium relative to calcium and magnesium are indicative of high adrenal activity and associated with an increased stress response.[9]

Sodium and health

Recent evidence suggests that gut microbiota contributes to the pathogenesis of hypertension. The gut microbiota is highly dynamic and mediates many physiological functions. These can be influenced by external factors. Interactions between dietary sodium and gut microbiota are just beginning to be explored.[5] Research has found that a high sodium diet (above 6 g per day, or 15 g sodium chloride) significantly increased the Firmicutes/Bacteroidetes ratio in the gut microbiome, indicating gut dysbiosis.
These and other changes in intestinal bacteria suggest that high sodium could severely interfere with the symbiotic relationships among gut flora. This may be closely related to the development of hypertension.[5] There is growing evidence that excess sodium might be considered a risk factor in autoimmune diseases such as multiple sclerosis. Several intestinal bacteria are affected by high sodium concentrations, particularly a Lactobacillus strain. One study found that a high sodium diet significantly reduces intestinal Lactobacillus murinus in mice. Depletion of Lactobacillus murinus by high sodium increased intestinal and systemic Th17 (CD4+ helper 17) cell levels.[7] Treatment with Lactobacillus prevented the sodium-induced increase of Th17 cells and prevented the exacerbation of experimental autoimmune encephalomyelitis, a rodent disorder similar to human multiple sclerosis.[3] High sodium concentrations induced the proliferation of Th17 cells, coinciding with reduced indole-3-lactic acid, a bacterial fermentation compound produced by Lactobacilli that has been implicated in the suppression of central nervous system autoimmunity.[3] Several studies have reported an association between dietary salt and cancer, especially gastric cancer. According to a review by the World Cancer Research Fund/American Institute for Cancer Research, the intake of salted foods and dietary salt are likely to increase the risk of gastric cancer. Japanese cohort studies have reported that dietary salt intake is positively associated with gastric cancer prevalence and mortality, while the frequency of the intake of salt-cured foods was associated with the risk of gastric cancer.[11] On the other hand, hyponatraemia (an abnormally low concentration of sodium in the blood) is the most common electrolyte disorder seen in clinical practice. The consequences can range from minor symptoms to life-threatening complications. A diagnosis of hyponatraemia is based on the body's fluid status.
Hyperglycaemia, diuretics, adrenal insufficiency, hypothyroidism and some medications can cause sodium deficiency.[12] Adrenal insufficiency reduces the body's ability to retain sodium. An extreme example is Addison's disease where, due to an almost total lack of adrenal cortical hormone, there is a craving for salt. Often these individuals have high levels of calcium and magnesium, which antagonise sodium and compound the problem.[13]

Determining sodium status

Determining sodium status has generally been limited to urinary excretion testing. One of the methods used is a 24 hour urinary excretion test. However, a single 24 hour urine collection is an indicator of short term intake, and this can vary from day to day within individuals and with foods consumed. It therefore does not represent an individual's usual long term daily sodium intake.[8] Multiple 24 hour collections are necessary to assess salt intake, and single spot urine collections are not a valuable tool. There is considerable day to day variation in 24 hour excretion even when dietary sodium intake is known and fixed over several weeks or months. Stability between sodium intake and sodium excretion follows weekly or even monthly rhythms of accumulation and secretion independent of dietary intake. The prevailing idea that excess dietary sodium leads to direct urinary excretion has been challenged by evidence of periodic sodium tissue storage.[3] Huge amounts of sodium can be stored in the body without changes in body weight, showing that sodium can be stored without water retention. It is therefore necessary to establish new methods to accurately measure sodium intake and storage in the body.[3] Alternative techniques, such as non-invasive tissue sodium measurement by Hair Tissue Mineral Analysis, can provide more precise information. Although for some people elevated sodium intake can have a detrimental impact on health, for others low levels can also create problems.
For example, adrenal insufficiency, a prevalent issue today, can reduce the body's ability to retain sodium. A hair tissue mineral analysis (HTMA) measurement of sodium is an excellent indicator of both underactive and overactive adrenal function. The ratio between sodium and potassium in a HTMA can provide a wealth of information, not only on sodium storage, but on our individual ability to absorb and utilise it. A HTMA provides the critical measurement of long term sodium storage levels that a urine excretion test cannot. The discovery that sodium storage can be independent of dietary intake, and independent of fluid retention, renders a urine excretion test less reliable. HTMA also determines an individual's sodium storage ratio to each electrolyte. Determining our individual tissue sodium status in relation to potassium, magnesium and calcium is vitally important.

1. Abraham, W. T. (2008). Managing Hyponatremia in Heart Failure. US Cardiology Review, 5(1), 57-60.
2. Strazzullo, P., & Leclercq, C. (2014). Sodium. Advances in Nutrition, 5(2), 188-190. doi:10.3945/an.113.005215
3. Haase, S., Wilck, N., Kleinewietfeld, M., Müller, D. N., & Linker, R. A. (2018). Sodium chloride triggers Th17 mediated autoimmunity. Journal of Neuroimmunology. doi:10.1016/j.jneuroim.2018.06.016
4. Mente, A., O'Donnell, M., Rangarajan, S., Dagenais, G., Lear, S., McQueen, M., … Yusuf, S. (2016). Associations of urinary sodium excretion with cardiovascular events in individuals with and without hypertension: a pooled analysis of data from four studies. The Lancet, 388(10043), 465-475. doi:10.1016/s0140-6736(16)30467-6
5. Svetkey, L. P., Sacks, F. M., Obarzanek, E., Vollmer, W. M., Appel, L. J., Lin, P.-H., Laws, R. L. (1999). The DASH Diet, Sodium Intake and Blood Pressure Trial (DASH-Sodium). Journal of the American Dietetic Association, 99(8), S96-S104. doi:10.1016/s0002-8223(99)00423-x
6. Yan, X., Jin, J., Su, X., Yin, X., Gao, J., Wang, X., Zhang, Q. (2020).
Intestinal Flora Modulates Blood Pressure by Regulating the Synthesis of Intestinal-Derived Corticosterone in High Salt-Induced Hypertension. Circulation Research. doi:10.1161/circresaha.119.316394
7. Wilck, N., Matus, M. G., Kearney, S. M., Olesen, S. W., Forslund, K., Bartolomaeus, H., Müller, D. N. (2017). Salt-responsive gut commensal modulates TH17 axis and disease. Nature. doi:10.1038/nature24628
8. Cogswell, M. E., Loria, C. M., Terry, A. L., Zhao, L., Wang, C.-Y., Chen, T.-C., Appel, L. J. (2018).
9. Watts, D. (1995). Trace Elements and Other Essential Nutrients, Ch. 10, pp. 117-123.
10. Nutrient Reference Values for Australia and New Zealand. Commonwealth of Australia, 2016.
11. Dietary Reference Intakes for Japanese (2015). Ministry of Health, Labour and Welfare, p. 203.
12. Williams, D. M., Gallagher, M., Handley, J., & Stephens, J. W. (2016). The clinical management of hyponatraemia. Postgraduate Medical Journal, 92(1089), 407-411. doi:10.1136/postgradmedj-2015-133740
13. Watts, D. (1991). Sodium – Decrease or increase your intake? Trace Elements Newsletter, 5(1).

© InterClinical Laboratories Copyright 2020
Understanding and diagnosing an animal is complicated. Veterinarians and veterinary technicians cannot ask the patient what it is feeling, or where it hurts. In that way, providing veterinary care can be very similar to providing pediatric care to babies, if babies got into the types of scrapes a dog or cat can get into while your back is turned. It helps our medical team to understand what led up to the event that caused the symptoms you are seeing. Below we describe a number of common symptoms that lead to emergency visits.

In order to understand the cause of the symptoms, our medical team conducts diagnostic and exploratory tests and examinations. We try to start with the least invasive or the most informative test, and work our way out from there. Depending on the nature of the emergency, different diagnostic tests will be ordered; they may range from a blood test, to x-rays, to an ultrasound, video scoping, a CT scan or MRI. Our goal is always to alleviate pain and suffering and treat the cause or the symptoms as quickly as possible. Some conditions are straightforward, some are more complicated.

If you are coming into the emergency room, ensure your pet has a leash and collar (or harness), or is safely secured in a pet carrier. If your pet has eaten something, please bring the wrapper or any leftovers with you to the ER.

These symptoms may indicate a number of different conditions. Diagnostic tests and exploratory examinations will be conducted in order to understand the probable cause and provide appropriate treatment.

Abdominal Pain: Abdominal pain can be secondary to many different causes including, but not limited to, vomiting, diarrhea, infectious disease (parasitic, viral and bacterial), eating foreign objects which can obstruct the gastrointestinal tract, and cancer. Animals with abdominal pain may be reluctant to participate in regular daily activities, walk with a hunched posture, or tremor or wince when the abdomen is touched.
Abdominal pain is always a good reason for emergency evaluation.

Allergic Reactions: Allergic reactions can have many different causes. Some more common causes include insect bites and stings, inhaled allergens, foods, medications, substances that have come in contact with the skin, vaccines and chemicals. Allergic reactions can manifest as facial swelling, hives, vomiting and diarrhea, and respiratory difficulties. Allergic responses can also progress to anaphylactic reactions, which are life threatening. If you feel your pet is having an allergic reaction, immediate assessment by a veterinarian is recommended.

Cardiac Emergencies: Cardiac emergencies can be vague in their presentation, but more common signs include weakness or sudden collapse, coughing, bluish gums or tongue, and shortness of breath. Some of these situations can be life threatening; immediate evaluation is warranted.

Respiratory Changes: Breathing is obviously a vital function of the body. Difficulty breathing can be associated with many different conditions including asthma, heart disease, infections and pneumonia, as well as cancer. If your pet is breathing with more effort or more rapidly than usual, please call for advice or have your pet evaluated by a veterinarian immediately.

Seizures: Seizures stem from abnormal brain activity. Seizures can be associated with toxins, metabolic disorders such as liver disease or low blood sugar, high blood pressure, strokes and aneurysms, and cancer, as well as conditions our pets can be born with, such as epilepsy. Small seizures can appear as abnormal behaviors, while large seizures can be debilitating, causing loss of consciousness and balance and resulting in thrashing. Seizures can be extremely dangerous, even life threatening. If your pet experiences a seizure, or if you are concerned for your pet, please have your pet evaluated by a veterinarian as soon as possible.

Toxicities: Toxins are everywhere in the environment.
Common toxins are household plants (such as lilies and certain palm trees), mushrooms, vehicle fluids, chemicals, medications, some foods and common poisons. Toxicities are treated with decontamination: induction of vomiting (which needs to be done as soon as possible after ingestion), followed by administration of counteracting medications and activated charcoal to bind any remaining toxin in the intestinal tract. Treatment for toxicities should be tailored to the specific toxin. If your pet has been exposed to a substance you feel may be toxic, please bring the packaging associated with the substance for identification to allow quick and effective treatment. A great resource is the Pet Poison Helpline.

Urinary Changes: Abnormal urination can be characterized by an inability to urinate, painful urination or urinating more frequently. Abnormal urination can arise from infection, stones and crystals, inflammation, cancer and, rarely, foreign bodies (such as plant material or foxtails). Changes in urination behaviors can be a sign of underlying disease such as diabetes or kidney, liver or adrenal disease. Changes in urination can progress to life threatening situations. If concerned about your pet's urinary behaviors, please call for advice or seek veterinary care as soon as possible.

Vomiting and/or Diarrhea: Vomiting and diarrhea are very common reasons for dogs and cats to visit the emergency room, and their causes often vary with age and vaccination status. Young dogs can develop vomiting and/or diarrhea from eating objects or foods that they shouldn't or as a result of various infections (parasitic, viral or bacterial). While older dogs can be affected by the same diseases as younger dogs, vomiting and diarrhea can also be a sign of other conditions such as inflammatory bowel diseases, cancer or organ failure.
While vomiting and diarrhea may seem fairly harmless, they are uncomfortable and can lead to dangerous levels of dehydration if left untreated.

Time Sensitive Emergencies

These types of emergencies tend to be a little more clear-cut, but all can be time sensitive and life-threatening. Seek emergency veterinary care.

Heatstroke: Heat-induced injury (overheating or heatstroke, not burns) is common in the summer months. Common causes include leaving your pet in a car or yard without appropriate cooling, or even exercise during the warmer hours of the day. Don't leave your pet unattended in a car even with the windows down, and always ensure there is easy access to water and shade on warm days. Patients with heat exhaustion can show signs of muscle cramping, weakness, tremors and seizures, vomiting or diarrhea, and difficulty breathing. As the body's temperature rises, these signs will worsen and may become fatal. If you are concerned your pet has been overheated, seek immediate veterinary help. Cooling measures, including wetting your animal down with cool (not ice cold) water, applying a wet towel, and using a fan, can help slow or minimize the effects of heat-induced injury while en route to a veterinarian.

Labor & Delivery: While the delivery of new puppies and kittens sounds wonderful, complications can arise during labor and birth. Once active labor (visible abdominal contraction) begins, a puppy or kitten should be produced approximately every 30 minutes. If active pushing does not produce a newborn within 1-2 hours, this is considered an abnormal delivery (dystocia) and veterinary evaluation should be sought. On occasion, simple repositioning of the newborn can resolve the complication, but sometimes surgery (a cesarean section) is needed to deliver the newborns. Prolonged dystocia can be fatal to the mother and newborns. Seek veterinary care or call for advice if you have concerns regarding your pregnant animal.
Eye Emergencies: The eye is a very sensitive organ with the unique function of providing vision. Common symptoms that may indicate an issue with your pet's eyes include redness, discharge or squinting (indicating pain). These may be caused by trauma, foreign material in the eye, cataracts, glaucoma, immune-mediated diseases and infections. Because of the eye's important job of providing sight, it is very important to seek immediate evaluation and treatment for eye problems.

Bite Wounds: Animal bites and wounds are common occurrences as our pets interact with their world at home and beyond. Depending on the extent of trauma, the location of the injury (involvement of blood vessels and internal organs) and the degree of contamination of the wound, prognosis can vary. It is important to keep in mind that small puncture wounds, which often seem minor, can hide much more extensive damage to underlying tissues. If addressed in a timely and aggressive fashion, pets will almost invariably recover well. When treating bite wounds, consideration must also be given to a pet's vaccine status, as some infectious diseases including rabies and FIV (Feline Immunodeficiency Virus) can be transmitted via bites. If a bite or attack has occurred, please seek evaluation as soon as possible.

Snake Bites: Snake bite envenomation is a very serious condition. Snake venom has many different components that can affect the body in several ways. Most commonly, pets will have profound swelling and pain at the site of the bite. Later effects can involve bleeding tendencies, death of tissues affected by the bite, vomiting, diarrhea, and heart and brain abnormalities. In severe cases snake bites can be fatal. If your pet has been bitten by a snake, do not try to extract the venom. The only truly effective treatment for a snake bite is administration of antivenin, an antidote that neutralizes the venom, preventing injury to the body.
Trauma (Hard Fall, Hit by Car): Traumatic injuries are often caused by falls, bite wounds, lacerations and being struck by a car. In many patients the degree of trauma is not readily apparent and the severity of injury can progress rapidly or over time (internal bleeding, lung puncture, severe trauma or bruising under the skin, or infection). Medical evaluation is vitally important in determining the extent of trauma and starting a treatment plan to prevent or minimize complications following the injury. Because the true extent of injury may be unclear, it is important to use extreme caution when approaching or handling an injured animal and to seek veterinary attention as soon as possible.
Raw Honey naturally contains enzymes and nutrients which have a number of medical uses and health benefits. Perhaps that is why it has been used as a folk remedy all throughout history. These days, honey is still a popular food source and is even used in some hospitals as a medicinal treatment for wounds. But those health benefits are specific to unpasteurized honey like Santa Monica Florida Honey. Manufacturers process the majority of the honey you see in grocery stores. Heating honey improves its texture and color and removes unwanted crystallization. Sadly, most of the beneficial bacteria and antioxidants are also destroyed or removed in the process. If you are interested in trying raw honey, order Santa Monica Florida Raw Honey here. Meanwhile, check out a few of the health benefits that raw honey offers:

Excellent Antioxidant Source

Raw honey contains antioxidants referred to as phenolic compounds. Some kinds of honey have as many antioxidants as vegetables and fruits. Antioxidants aid in protecting your body from cell damage caused by free radicals. Free radicals contribute to the process of aging and may also contribute to the development of chronic diseases like heart disease and cancer. Research shows that the polyphenols in honey might play a part in the prevention of heart disease.

Antifungal and Antibacterial Properties

Raw honey may destroy unwanted fungus and bacteria. It naturally contains an antiseptic, hydrogen peroxide. Several European hospitals have used Manuka honey to battle MRSA (methicillin-resistant Staphylococcus aureus), a kind of staph bacterial infection that has become antibiotic-resistant. Honey's effectiveness as an antifungal or antibacterial depends on the kind of honey. Wildflower raw honey is also used in medical settings to treat injuries because it is an effective germ killer.
Scientists believe it's because it has extra antibacterial properties in addition to the natural hydrogen peroxide. Research shows that raw honey may shorten healing time and decrease infections in wounds. But the honey used in hospital settings is medical grade, which means it is sterile and safe.

Filled with Phytonutrients

Phytonutrients are compounds in plants which help protect a plant from harm. For instance, some might keep insects at bay or protect the plant from harsh ultraviolet radiation. Phytonutrients offer both anti-inflammatory and antioxidant benefits that assist you in maintaining good health. Because honey is made from plants, it also contains phytonutrients. Those precious nutrients are unique to raw honey and vanish when it is heavily processed.

Help for Digestive Problems

Sometimes, honey is used to treat digestive problems like diarrhea, although there are not many studies showing that it works. But it has been shown to be effective as a treatment for Helicobacter pylori, a typical cause of peptic ulcers. (Peptic ulcers happen in the stomach or digestive system.) Consuming 1 to 2 tsp. of Gallberry Honey on an empty stomach is claimed to soothe pain and aid in the process of healing.

Help to Soothe a Sore Throat

Do you have a cold? Consume a teaspoonful of wildflower raw honey. Honey is a sore throat remedy. Try adding it to hot tea with lemon. It also works as a cough suppressant. Studies show that it is as effective as dextromethorphan, a typical OTC ingredient in cough medicine, in treating coughs. Simply consume 1-2 teaspoons straight.

What are the Risks?

Besides beneficial nutrients and bacteria, raw honey can also carry dangerous bacteria such as Clostridium botulinum, which causes botulism. It is especially harmful for infants, so you should NEVER feed raw honey to babies less than one year old. Botulism produces symptoms similar to food poisoning (that is, vomiting, fever, nausea) in adults.
Consult your physician if you suffer any of those symptoms after consuming raw honey. New to raw honey? We also have the following posts to help you understand the benefits of Santa Monica Florida Raw Honey:

Natural Remedy Uses for Honey

If you are prepared to integrate Santa Monica Florida Raw Honey into your diet, check out these honey uses: 1. Improve digestion – Consume 1-2 Tbsp. of Gallberry honey to counteract indigestion, since it does not ferment in your stomach. 2. Alleviate nausea – Blend Orange Blossom honey with lemon juice and ginger to help counteract nausea. 3. Cure for acne – Saw Palmetto honey may be used as an affordable facial cleanser to battle acne, and it is gentle on all skin types. Take 1/2 a tsp. of Saw Palmetto honey, warm it between your hands, then spread it gently on your face. Leave it on for ten minutes, then rinse with warm water and pat dry. 4. Exfoliator – It makes an excellent exfoliator! To soothe dry winter skin, add 2 tbsp. of Saw Palmetto honey to a bath and soak for fifteen minutes, then add 1 tbsp. of baking soda for the final fifteen minutes. 5. Improve diabetes – Eating raw honey may decrease your risk of developing diabetes and complement medications used to treat diabetes. Tupelo raw honey decreases hyperglycemia and increases insulin. Add a little tupelo honey to your diet at a time and check how your blood sugar responds to it. 6. Decrease cholesterol – Wildflower honey may assist in decreasing cholesterol and thereby reduce your risk of coronary artery disease. 7. Improve circulation – It helps the brain function optimally by improving blood circulation and strengthening the heart. 8. Antioxidant support – Eating Orange Blossom raw honey boosts plaque-fighting antioxidants. 9. Restore sleep – It promotes restorative rest. Add 1 Tbsp. to warm milk to help increase melatonin and help you rest. 10.
Prebiotic support – It's full of natural prebiotics that promote the growth of good bacteria in your intestine. 11. Improve allergies – If locally sourced, wildflower raw honey may assist in reducing seasonal allergies. Add 1-2 Tbsp. to your diet on a daily basis. 12. Lose weight – Substituting honey for white sugar may aid in weight management. 13. Moisturize – One spoonful of Saw Palmetto raw honey blended with olive oil and a squeeze of lemon may be used as a hydrating lotion. 14. Hair mask – An Orange Blossom raw honey hair mask may help boost shine by hydrating your hair. Just blend one teaspoon of raw honey with five cups of warm water, apply the mix to your hair and allow it to sit, then rinse thoroughly, let your hair air dry and style as normal. 15. Relief of eczema – Use it as a topical mixture in conjunction with equal parts cinnamon to alleviate mild eczema. 16. Decrease inflammation – It has anti-inflammatory agents which may help treat respiratory conditions like asthma. 17. Heal wounds – Used topically, raw honey may help quicken healing time for wounds, abrasions, rashes, and mild burns. 18. Soothe urinary tract infections – Honey may help with urinary tract infections because of its antibacterial properties. 19. Shampoo – It can cleanse and restore the health of your scalp and hair. 20. Alleviate cough and sore throat – Using honey for cough and sore throat is another remedy. It is especially helpful for kids who have a cough. Just swallow 1 tsp. of honey or add it to hot tea with lemon.

Raw Honey Nutritional Facts

It is one of nature's purest food sources and is much more than simply a natural sweetener. Raw honey is a "functional food," meaning it is a natural food that has health benefits. Its nutrition is impressive. It contains 5,000 enzymes, 27 minerals, and 22 amino acids. Minerals include potassium, iron, calcium, zinc, magnesium, phosphorus, and selenium.
Vitamins in honey include riboflavin, vitamin B6, pantothenic acid, thiamin, and niacin. Additionally, the nutraceuticals in honey assist in neutralizing damaging free radical activity. A tablespoon of honey has 64 calories, but it has a healthy glycemic load of about 10 per tablespoon, which is less than that of one banana. Raw honey doesn't cause the elevated insulin release and sugar spike that white sugar does. Even though honey is an affordable food source, bees spend thousands of hours gathering nectar from about two million flowers to make a pound of pure honey. Typically, honey is approximately 18% water; however, the lower the water content, the better the honey quality. And best of all, honey doesn't require refrigeration or special storage: use it straight from the jar.

Below we list a few common questions concerning honey, along with their answers:

Does it expire? As Natasha Geiling lays out in a post for Smithsonian Magazine, honey has a long shelf life and is typically okay to eat even after extremely long periods of time as long as it is kept in a sealed container, although it might crystallize.

What is it made of? Flower nectar mixed with enzymes that bees naturally secrete.

Why and how do bees make it? Bees make honey before winter and store it so they will have food during the cold seasons. They produce honey by harvesting nectar from flowers and using an enzyme they secrete to blend with the nectar inside a honeycomb. Over time, the water content of the nectar decreases and it turns into honey.

What kind of sugar is it? It is an unprocessed sugar containing mainly fructose and glucose.

What's the density of it? It ranges from 1.38 to 1.45 g/cm³ at a temperature of 20℃.

How many carbs are in it? A tablespoon (around 21 g) of raw honey has around 17 g of carbs.

Ready to take your raw honey cravings to the next level? Join the Honey Club! Access through Facebook, one monthly payment of only $20. YES, satisfy your sugar cravings!
To order your Raw Honey visit our store www.Santamonicafl.com today! Disclaimer: The medical, health, and skin care benefits mentioned in this blog are intended for educational purposes. Any statements made by Santa Monica Honey highlighting the potential uses of its products are not guaranteed.
Gout is a form of inflammatory arthritis that causes severe pain, redness, a feeling of heat, and swelling of the joint. Most often, the disease affects only one joint (usually the joint of the big toe or the knee). The mechanism of gout development is associated with the deposition of monosodium urate (MSU) crystals within synovial joints. This accumulation occurs due to the dysfunctional metabolism of uric acid in the body. Uric acid usually dissolves in the blood and is excreted by the kidneys (and to a lesser extent by the intestines). However, sometimes the kidneys can't cope with the amount of uric acid produced, and so small crystals of this substance form and are deposited in the large joints of the lower limbs or, less often, the upper limbs. An attack of gout (gouty arthritis) usually starts suddenly and has significant negative effects on everyday life. For example, the patient can't walk because of pain in the leg, or can't do everyday things because of limited mobility of the arm. Typical treatment for gout is aimed at reducing the symptoms but does not prevent another attack of the disease in the future. Nor does it improve the metabolism in which the root of the disease lies. In contrast, when stem cells are applied in the treatment of gout, it becomes possible to achieve a more complete result, including the elimination of symptoms, improvement of metabolism and the normalization of blood pressure. Read this article to learn how gout can be treated, what stem cell therapy for gout involves, and what its advantages are over conventional treatment. You can also contact us to find out whether stem cell therapy would be effective in your case.

Get a free online consultation: contact us to learn about the expected results of the treatment, its cost and duration.

How is gout treated?

Conventional treatment of gout involves taking two types of drugs to control hyperuricemia: 1. Drugs that decrease the formation of excess uric acid (urate-lowering drugs); 2.
Drugs that enhance its excretion from the body (uricosuric drugs). This treatment helps to dissolve the deposits of sodium urate crystals on the joints themselves, as well as in the surrounding tissues (tophi). However, the disadvantages of this treatment include the side effects of the prescribed drugs. In particular, there is a higher risk of cardiovascular disease and related mortality when taking febuxostat and allopurinol. In addition, uricosuric drugs can potentially worsen the risk of kidney stones (urolithiasis). Also, the therapeutic effect is often temporary. Although the gout attack is stopped with the help of these drugs, the patient's metabolism is not improved and still cannot fully excrete uric acid from the body. After treatment, the crystals may form again, causing another attack of gout. Accumulating over the years, the negative impacts of the disease can lead to long-term effects, including bone tissue damage and nerve compression syndromes. Anti-inflammatory drugs, also prescribed as part of treatment for gout, help to alleviate painful sensations and reduce swelling. However, they too can have side effects, such as increased blood pressure, abdominal pain, and digestive problems. Other recommendations for patients with gout include weight loss and blood pressure control. The adoption of a low-purine diet is also considered an important addition to the treatment of gout.

Gout and nutrition

There is a direct connection between gout and nutrition. Some foods contain higher concentrations of purines, organic compounds that are metabolized into uric acid. By reducing the consumption of these foods, patients predisposed to gout can avoid an increase in the level of uric acid in their blood (hyperuricemia).
High-purine foods include:
- meat, especially broths and organ meat;
- fish (mostly anchovies, sardines and tuna);
- legumes (soy, lentils, peas);
- varieties of cabbage (Brussels sprouts, broccoli, white cabbage, cauliflower);
- some nuts and seeds (poppy seeds, sunflower, peanuts);
- some cereals (buckwheat, oatmeal, millet, barley);
- alcoholic beverages;
- strong tea, cocoa, coffee.
Stem cells in the treatment of gout. Who would benefit? For some patients, following a low-purine diet and taking urate-lowering and/or uricosuric drugs, together with anti-inflammatory medications, is sufficient treatment for gout. With this treatment, symptoms are removed in 1-2 weeks, while the diet helps to maintain a normal level of uric acid in the body to avoid future attacks. However, there are a number of patients who may not be relieved by the typical treatment described above. These are cases in which:
- there is a genetic predisposition;
- there are comorbidities (high blood pressure, diabetes, impaired kidney function);
- the duration of the disease is about 10 years or more, and gout attacks happen 3 or more times per year;
- a patient suffers from side effects or intolerance to the usual drugs for gout;
- a patient faces the necessity of joint replacement and wants to avoid this.
In these cases, stem cell therapy is recommended, as it prolongs the treatment effect and may prevent a new gout attack. How do stem cells work in patients with gout? Stem cells are special types of cells that are present in the body from birth and are responsible for the self-renewal of the body’s tissues. Their number decreases as we become older; this leads to ageing and the development of diseases. Scientists have learned to use these cells to enhance the ability of the body to recover from different illnesses and health conditions. Stem cells have a proven ability to relieve inflammation and regulate the patient’s immune system.
In addition, stem cells are known for their regenerative properties. Once in the body, thanks to special chemotactic factors, they move to the damaged area and work there to restore tissues by protecting weakened cells and stimulating the formation of new cells. In gout, this occurs both in the area of the affected joint and in other organs and systems of the patient’s body where “repair” is required. Thus, while anti-gout medications are often helpful but can cause adverse reactions, stem cells have a beneficial effect on overall health, leading to a natural recovery. Possible results of the therapy. Patients who have undergone stem cell therapy for gout report the following positive effects:
- reduction of soreness and inflammation, as early as the first days after the stem cell injection procedure;
- improvement of mobility in the affected joint;
- improvements in medical test results;
- normalization of blood pressure;
- improved quality of articular and surrounding tissues (in the long term);
- increased energy levels.
According to the data of our clinic, stem cell therapy leads to positive outcomes in up to 80% of cases of patients with various types of arthritis. What does stem cell therapy involve? In stem cell therapy for gout, our patients receive their own (autologous) or donated (allogeneic) stem cells. We use only mesenchymal stem cells (MSCs) from adult human tissues, such as fat, bone marrow, or placenta. According to the personal treatment plan, other cell-based products are also applied, if required. This may include a stromal vascular fraction (SVF). Diagnosis is necessary before the administration of stem cells. At the Clinic, we perform blood and urine tests, ultrasound, ECG, and other tests. Treatment usually lasts from several days to two weeks, depending on the severity of the disease, the presence of concomitant pathologies, the wishes of the patient and the expected results of therapy.
After evaluating the patient’s case, a therapeutic dose of cells is prepared. The cells are administered intravenously via IV drip, as well as locally into the damaged joint. The procedure can be repeated to improve the result of treatment. Is it painful? The procedure of administering stem cells intravenously feels exactly the same as a typical IV drip and causes no pain. What about local injections into the area of the affected joint? These have the potential to be rather painful, which is why we use local anaesthesia, allowing the procedure to be performed with maximum comfort for the patient. To enhance the healing abilities of injected stem cells and to consolidate the treatment result, a number of additional therapies may be prescribed at the Clinic. For example, Spark Wave is a procedure that stimulates regeneration in damaged tissue and reduces the inflammatory process within the joint. Safety of stem cell therapy in gout treatment. While using embryonic and fetal stem cells still raises many concerns related to their potentially unfavourable activity in the recipient’s body, mesenchymal stem cells have repeatedly been found to be safe, even when donor cells were used. MSCs are a type of adult stem cell taken from adipose or bone tissue – the patient’s own or donated – as well as from the tissues of a donated placenta after a healthy birth. In addition to safety, this resolves an ethical issue as well. Are there any side effects? The possible side effects of stem cell therapy for gout include the typical risks associated with any medical procedure, such as local redness in the area of the injection. In rare cases (less than 5%), there is a short-term fever after the procedure, which resolves on its own. Also, mild fatigue may occur, so rest is recommended for the first days after treatment.
In general, according to both the research and our clinical experience, stem cell therapy does not bring any additional risks compared to conventional methods of treatment, provided that the protocol is followed and it is performed by qualified specialists using the appropriate type of cells (MSCs). How to enter the stem cell therapy program. To learn more about the treatment program and the possible outcomes of stem cell therapy for gout, you can contact us for an online consultation with our medical consultant. You will be asked several questions and, after your case is evaluated by the doctors of the Clinic, you will be informed about a possible treatment plan, its cost and its duration. We handle the organization of the treatment process wherever a patient comes from. This includes providing visa assistance, transfer from the airport or train station, interpreter services, accommodation in an individual room (allowed with a relative or companion), meals, and 24/7 medical support during the treatment. Send a request: contact us to learn about the expected results of the treatment, its cost and duration.
Fundamentals #5: Self, Value & Identity
So far as a child, from birth through approximately two years of age, we have felt and absorbed interactions with our parents and caretakers and now have a rough expectation or “instinct” for what we can expect from them in the form of nurturance and attention. We are also in the process of learning the communicable media of words for the mind by developing a vocabulary “paired” with our physical senses that labels colors, size, texture, temperature and sound. So far this has simply been “pairing” labels to observations and setting the stage for the first act of introducing the concept of value. Simultaneously, we are also beginning to perceive the distance or separation between us and those we want attention from. It is this distance and separation that will eventually allow us to apply the concept and recognition of value to ourselves and others. In order to apply value to something, things we don’t want must also exist so we can make comparisons. Think about it. If you’re aware of what you do like or want, you must also know what you don’t like or want. Right? Unfortunately, in our culture we know more of what we don’t want, talk mostly about that and, often, have only a foggy impression of what we do want. If you don’t believe me, just make two lists: one listing the things you don’t want and one listing the things you do want. I guarantee the list of what you don’t want will be, at the least, twice as long as the list of what you do want. We even talk about what we do want in terms of what we don’t want, or in double negatives. How can this be, you say? How many times have you said, “What I want is to not have to…” or “I wish I didn’t have to…”? We also often focus on what we want from a perspective of what we don’t have. For example, “I wish I had more time for myself…” or “I wish I had more money…” The inference is that it is assumed that you don’t.
It’s crazy, but that little word “no” that we first hear in beginning our vocabulary has tremendous bearing on how we approach life. Don’t believe me? Count how many times you hear an ordinary parent using the word “no” with a toddler versus how many times the word “yes” is used. How else do I stop my toddler from hurting themselves, you say? Simple: without fuss or fanfare, simply redirect them. The less we use the word, the less power a negative inference will have in our child’s life. The “terrible twos” will also be a lot less intense for us to deal with if we redirect more than we use the word “no.” Emphasis and repetition of any word creates an intensity and power in it. Other words with important meaning that we hear as a baby are Mommy and Daddy. For adults it’s love, sex and money. See my point? Why should “no” be any different? Not to belabor the point, but it is the word “no” which drives home the feeling of separation and rejection and the fact that at this age we have little or no control over how the world treats us, let alone how much choice we have in what we’re allowed to do. Since it is one of the first words we learn, it has a long history of creating and recreating the feeling of a door slamming in our face. I have obviously digressed…but I think necessarily so. Now, let’s return to our child and their indoctrination into our culture. Introducing value happens extremely slowly over time. Remember, between two and, at the least, adolescence, we are still building a vocabulary simply to describe things and what we feel. Applying value is a dimension of mental activity which is much more subtle and involves immersion in a culture and family tradition in order to gain recognition and expression. It is one of the building blocks for giving meaning to our separation from the womb and our continuing to recognize the effects of that separation through our perceived distance from others. Let’s explore where value comes from.
No matter what we need or want from others, the type of response we receive will trigger a feeling within us. The intensity we feel and the attention we give that feeling, and the circumstances that elicited it, all depend on whether it is satisfied or not. When we do receive what we need, want or ask for, we usually just take it in stride and move on to the next quest or requirement. But when we don’t receive what we need, want or ask for, the need, want or request we initially approached others with is intensified. Why? Because not receiving what we need, want or requested increases our feeling of lack and yearning for it beyond what we started with, and it then receives more of our attention. Remember, this process is a dynamic establishing our locus of control (L.O.C.). With each additional denial or refusal, the energy and feeling triggered by that denial grows and our future expectation of the likelihood of having our need, want or request satisfied diminishes. In continuous denial we can see that our feeling of distance and separation between where we are and where we want to be is getting wider. Through our growing expectation of denial, validated by our memory of our previous feelings, we become a “self-fulfilling prophecy,” repeating the same experience over and over again. As we become further and further separated from what we need and want and those who can provide it, the intensity and distance between us and others increases to the point where we begin to see and feel ourselves as being separate from the “outside” world. This is one of the first hallmarks of learning self-awareness; that is, we become aware of ourselves as being separate from the world. Meanwhile, as our vocabulary continues to develop, we accept labels of separation applied to us by our parents and caretakers such as good, bad, tall, short, smart, stupid, etc.
It is necessary to build a solid baseline of language before meaning will begin to make sense, and even then it will still be a continuous process which will last well into and perhaps past adolescence. Initially we may not yet relate to or understand most of these labels, but as our vocabulary and comprehension increase we begin to paint a picture of ourselves from the memories of our past labeling. From this a perceived “Self” begins to emerge, complete with labels assigned by the “outside” world. This picture is what psychologists call our ego. However, this ego is not to be confused with our social and contemporary meaning of excessive pride and contrived superiority. It is simply a mental structure yielding an awareness of a “self.” Our ego is a simple coalescing structure composed of remembered labels applied by the external world, our feelings about those labels and the experiences that led to them. Soon, the memory of them will be “fully” absorbed and we will have been “programmed” to be triggered into “feeling” emotionally (feelings “paired” with thoughts) good, bad, tall, short, etc. when the labels are spoken by others. These labels and more will slowly become how we identify ourselves, especially in light of the fact that they resonate with the “who” that others perceive us as. Their first “application” and acceptance will occur within our primary family and close friends. As we grow older and make more contacts outside our family and circle of friends, our assigned labels may be perceived similarly but, more often than not, will shift to a meaning that’s perceived differently from those in our “family” circle. After all, “strangers” don’t know us as well as our family and friends. This will have the effect of broadening our perceived identity, theirs and ours, while creating difficulty, if not contrast, as to how or why we may be perceived differently by our “family” circle and those outside of it.
As we perceive the differing identity qualities applied to us by our “family” circle and “outside” contacts, we may begin to prefer and acquiesce to some labels over others due to their ability to gain the attention and nurturance that we need or want. The ones that we no longer receive positive responses from will be either denied or remain unacknowledged, but will still be retained in memory as having been applied to us. Hence, we will still resonate with them, just not outwardly. This is one of the first conflicts in how we wish to “present” ourselves and will confuse the clarity we might have about who we are. This confusion will intensify the feeling of separation but will also cause us to look at our “self” and “question” why we might be perceived the way we are by some and not others. Depending on our age, the “questioning” may not be so much in verbal terms, due to the continuing need for more depth in understanding language, but may instead be sensed as an “uncomfortable” feeling corresponding to what we as adults might describe as feeling incongruent or out of phase. In psychological terms we might call this cognitive dissonance. This is where our assumptions, desired or not, don’t match our perceived reality. The progression of developing and integrating these labels and qualities and forming a perspective of “self” composed of more than just feelings happens very slowly. It occurs much the same way as we might gather ingredients to prepare a meal with multiple steps before the completion of the final dish. We could also say that the dish is more than the sum of its ingredients. That is, the structure for defining the “self” and the world is much more than just the composite of its labels and different relationships.
This growing coalescence lends itself to a developing “self” awareness, much like a group of elements produces a compound that exhibits characteristics different from and greater than what’s exhibited by any one of them independently. Another way to describe it would be like differing weather factors coming together to create a perfect storm; something which surpasses the force or intensity of any one of its meteorological components. Up to this point our child has probably progressed into school and through a couple of grades, putting them somewhere between five and eight years of age. We see that our notion of value has only started to build as the realization of our growing separation from others emerges through the application of labels of character and the responses of others to us and their chosen labels for us. Because the mind works by separating experiences through labeling, the more developed and “in control” our mental vehicle becomes, the more it makes sense that we feel an increasing sense of being separate from, and definable by, others. As that structure coalesces, our perceived “self” or identity begins to emerge, which psychologists call our ego. As the responses we receive from others begin to differ, the more our perception of our “self” begins to split and the more confusion we have about how to identify ourselves. It’s this difference, or cognitive dissonance, that leads us into our next section on the shadow.
Today’s guest post on genetic syndromes comes from Rachel Nortz, who is contributing a post on Down Syndrome. Down Syndrome is a genetic disorder that is characterized by the presence of all or part of a third copy of chromosome 21. There are three different forms of Down syndrome: trisomy 21, translocation, and mosaicism. Trisomy 21 is the most common form of Down syndrome. This occurs when the 21st chromosome pair does not split properly and the egg or sperm receives a double dose of the extra chromosome. Translocation (3-4% have this type) is the result of the extra part of the 21st chromosome becoming attached (translocated) onto another chromosome. Mosaicism is the result of an extra 21st chromosome in only some of the cells, and this is the least common type of Down syndrome. Persons with Down syndrome typically have differences in physical growth and development, facial features, and intellectual disabilities of varying severity. Down syndrome occurs in one out of every 691 births according to the CDC, making it the most commonly occurring genetic condition in the United States. Currently there are more than 400,000 people living with Down syndrome. Down syndrome is not often mistaken for any other genetic defect, as Down syndrome is diagnosed prenatally using various screenings or diagnostic procedures. An ultrasound or blood test can be completed on the pregnant mother to estimate the risk of a child having Down syndrome. Chorionic villus sampling (CVS) and/or amniocentesis are the most common procedures to diagnose Down syndrome. Both procedures do carry a risk of spontaneous termination (1% chance), but are almost 100% accurate in diagnosing Down syndrome. Screenings and diagnostic procedures for Down syndrome are now commonplace for the majority of pregnant women. Down syndrome can also be diagnosed at birth based on physical characteristics of the baby and via a blood test that specifically looks at the chromosomes of the child.
Males versus Females
Down syndrome occurs evenly in both males and females. Males tend to be sterile, while females are capable of having babies. If a mother with Down syndrome chooses to have children, there is a 50% chance that her child will also have Down syndrome. Common physical characteristics of Down syndrome are: low muscle tone, small stature, an upward slant to the eyes and a single deep crease across the center of the palm. Individuals with Down syndrome may also present with a flat-appearing face, small head, flat bridge of the nose, small mouth causing the tongue to appear large, an extra fold of skin in the inside corner of each eye (epicanthal folds), rounded cheeks, misshapen ears, small wide hands, a malformed fifth finger, unusual creases on the soles of the feet, overly flexible joints, and shorter than normal height. However, it should be stated that each person with Down syndrome may manifest these characteristics to varying degrees or not at all. Babies with Down syndrome develop at slower rates than typically developing children. The muscles of babies with Down syndrome are hypotonic, and thus these babies will sit up, crawl and walk at a much later age than same-aged peers. There are several medical concerns for individuals with Down syndrome. There are common clinical conditions such as congenital heart disease (30-50% occurrence), ear, nose and throat disorders, hearing and vision deficits, endocrine disorders, neurological disorders and gastrointestinal disorders (5-7% occurrence). It is very common for infants to have congenital heart defects. Some common cardiac defects are atrial septal, patent ductus arteriosus, and ventricular septal defects. Infants with Down syndrome may also exhibit gastrointestinal malformations that most likely will require surgery. Children with Down syndrome have a high incidence rate of developing upper respiratory problems such as asthma and bronchitis, otitis media, vision loss, dental malformations and seizures.
Obstructive sleep disorders, such as noisy breathing and sleep apnea, are common. It is also common to develop pneumonia and certain kidney disorders, and there is a 20 percent greater chance of developing leukemia. There are also health issues which adolescents with Down syndrome demonstrate: increased weight gain, skin infections and psychiatric disorders. The increased weight gain can be attributed to reduced physical activity, an increase in food intake and thyroid issues. An increase in weight gain contributes to skin infections that can develop into abscesses. Therefore, proper hygiene is crucial during the adolescent period. Clinical depression is also common during the adolescent stage. Later in life, people with Down syndrome are prone to Alzheimer’s disease. Typically, children with Down syndrome are developmentally delayed and thus their social development is delayed as well. Their behavior changes will occur later than in the typically developing child. For instance, temper tantrums will develop around 3-4 years of age versus the typically developing child, who demonstrates temper tantrums around 2-3 years of age. The family environment will greatly influence the social abilities of children with Down syndrome. Positive family interaction is crucial to the social development of children with Down syndrome by giving them supportive social interactions. Adolescents with Down syndrome will demonstrate lower maturity levels than their peers due to cognitive delays. Each person with Down syndrome will display different temperaments and behaviors, and not all children with Down syndrome are “happy,” as a common stereotype suggests. People with Down syndrome typically have some form of delay in cognitive development, ranging from mild to moderate severity.
Speech and Language Issues and Applicable Interventions
Children who have Down syndrome will usually experience challenges with speech and language skills to varying degrees across their lifespan.
The speech and language skills of a child with Down syndrome are considered delayed, not different. There are no specific speech and language skills that are seen only in children with Down syndrome. All of their deficits in speech and language can be seen in other children who do not have Down syndrome. Children with Down syndrome have speech and language skills that are affected by anatomical and physiological differences. Feeding may be a significant difficulty for infants with Down syndrome due to oral-motor deficits. Most babies may need assistance in developing a strong suck and swallow pattern. The size of the nipple and its hole can be modified for the child. Children with Down syndrome need to have their bodies and their mouths prepared for feeding by increasing their alertness and awareness. Some children may also have hypo-sensitive oral cavities and thus chew food poorly, overstuff their mouths, and be messy eaters. Other children may be hyper-sensitive, causing defensiveness when eating, refusal to eat or picky eating. Expressive language develops slowly, and difficulty with this skill increases with age. Due to the delay in expressive language, toddlers with Down syndrome may have delayed babbling and may be more inclined to use gestures or sign language instead of speech. The age of first word production is delayed until an average of 2 years of age. Once children can use words, they may experience difficulty with intelligibility and fluency; this is due to a structure and function problem with their articulators. If the child experiences frustration with expressive language, the use of a supplemental AAC device may aid in alleviating the stress. Children with Down syndrome usually have better receptive skills and thus can comprehend more language than they can produce. However, children with Down syndrome do experience difficulty with understanding abstract concepts. A child’s expressive language may seem to reach a plateau at an age-six developmental level.
However, it should be noted that development of language in children with Down syndrome is not consistent; there are periods of plateau and periods of improvement. Syntax is also a weakness for children with Down syndrome; they produce shorter sentences and have difficulty with grammar. A child with Down syndrome will have a normally developing noun vocabulary; however, acquisition of verbs is often delayed. Pragmatic skills may also be an area of difficulty for the child. For example: asking for help, using appropriate greetings, asking for information, etc. With the appropriate assessments and early intervention techniques from a team of experts, children with Down syndrome will improve their speech and language skills. Depending on their severity level, a child with Down syndrome will usually be able to develop effective communication skills. The biggest early intervention model for children with Down syndrome is Total Communication (Kumin, 2003). Total Communication pairs gestures and sign language with speech. It can also involve Augmentative and Alternative Communication (AAC) modalities, either low or high tech. As children with Down syndrome are delayed in their speaking, sign language and gestures are typically how children with Down syndrome say their first words. As they develop they will gain more expressive language skills, but the time this takes may vary from a few months to many years (Kumin, 2003). Thus it is very important for a Speech-Language Pathologist to be involved in developing a Total Communication system for the child. The parents’ involvement is also imperative, because they need to be able to use this Total Communication system as well. Kumin (2003) suggests that parents be the primary models for good communication skills. She also suggests that using real objects and real situations to teach language skills is most successful (Kumin, 2003).
Multidisciplinary Professionals Involved
As Down syndrome impacts a child’s physical and mental abilities from birth, early intervention is key. Thus a multidisciplinary approach is a major factor in the success of the child. The team should consist of the child’s parents; a physician; an occupational therapist for feeding, fine motor skills, self-care, cutting, writing, and play (Bruni, 2006); a physical therapist for gross motor skills, as these are delayed in children with Down syndrome (Winders, 2001); and a Speech-Language Pathologist for communication skills including sign language, AAC devices, receptive and expressive skills as well as pragmatics. Children with Down syndrome face many physical, mental and communication challenges in their lifetime, but with the right early interventions and support they can lead fulfilling and normal lives. For more information about Down syndrome or to participate in their charity, The Buddy Walk, please visit: http://www.ndss.org/
References
Bruni, M. (2006). Fine Motor Skills for Children with Down Syndrome: A Guide for Parents and Professionals (2nd ed.). Bethesda, MD: Woodbine House.
Buckley, S. J. (2000). Speech, language and communication for individuals with Down syndrome – An overview. Down Syndrome Issues and Information.
Buckley, S., Bird, G., & Sacks, B. (1996-2008). Social development for individuals with Down syndrome – An overview. Down Syndrome Education International. http://www.down-syndrome.org/information/social/overview
Kumin, L. (2003). Early Communication Skills for Children with Down Syndrome. Bethesda, MD: Woodbine House.
Kumin, L. (1998). Comprehensive Speech and Language Treatment for Infants, Toddlers and Children with Down Syndrome. In Down Syndrome: A Promising Future, Together. Wiley-Liss, Inc.
National Down Syndrome Society. What is Down syndrome? http://www.ndss.org/Down-Syndrome/What-Is-Down-Syndrome/
Nordenson, N. & Odle, T. (2006). Down syndrome. In Gale Encyclopedia of Medicine (3rd ed., Vol. 1). Retrieved from http://www.gale.cengage.com/gvrl/
Pueschel, S. M. (1990). Clinical Aspects of Down Syndrome from Infancy to Adulthood. American Journal of Medical Genetics, Supplement 7: 52-56.
Richard, G. J. & Reichert Hoge, D. (1999). The Source for Syndromes. LinguiSystems.
Winders, P. C. (2001). The Goal and Opportunity of Physical Therapy for Children with Down Syndrome. Down Syndrome Quarterly, 6(2), 1-4.
Rachel Nortz completed her undergraduate degree in Speech, Language and Hearing Sciences with an emphasis in Audiology at San Diego State University and her graduate degree in Communicative Disorders at San Jose State University. For the past four years Rachel has worked with the preschool and elementary aged population. She also has experience working in a Traumatic Brain Injury Cognitive Retraining Program, a Skilled Nursing Facility, and an after-school Sensory Training Approach to Reading and Spelling program (Reading S.T.A.R.S.).
Christian Apologetics Free Course 12, Lesson 01
Analyzing Bible Difficulties 01
The Categories Of Difficulties In The Bible
History tells us that the attempt to find difficulties and contradictions in the Bible is thousands of years old. Over these millennia several thousand alleged difficulties have shaken many Bible-believing Christians, but the actual number of difficulties is far less than it at first appears. Careful analysis shows that these difficulties fall into certain categories, the total of which is not more than about two dozen. This means that instead of a few thousand different difficulties in the Bible, there are less than two dozen types of difficulties to answer, most of which are not really difficulties at all. The actual number looks so large because hundreds of examples of the same type are found, inflating the total number. Of these two dozen categories, only about a dozen are of any serious interest, and it is only these we mention in this work. Difficulties, problems, and apparent contradictions arise for the following reasons:
1–Difficulties Arising From The Original Text: All of us know that the Holy Scriptures are absolutely infallible and inerrant, but this applies only to the original writings (or autographs). Today we do not have the original autographs with us. What we have are copies which have come to us through centuries of hand-copying. Any type of copying is bound to introduce errors of spelling, repetition, omission, and others; the problem is compounded when this process of hand-copying is repeated for hundreds or thousands of years without the help of modern writing aids. In fact, many of the ancient non-biblical books have been altered so much in this process that in some cases up to 90% of the extant text of a book is corrupted. However, those men who copied the books of the Bible knew that they were not handling an ordinary book, so they took exceptional care during hand-copying.
As a result, the number of errors that have crept into the biblical manuscripts is minimal compared to other ancient manuscripts. The hand-copying process resulted in some errors in the King James Version, which most English readers use. Some of these errors have crept into other languages also because of the same influence. However, the tens of thousands of ancient biblical manuscripts available today have helped Bible scholars to restore the original text with great certainty.

Another difficulty of the original text is the language. The ancient Hebrews did not write like contemporary authors. Their written language had only capital letters. Further, their alphabet contains no vowels and, to compound the problem, the words were not separated from one another. Thus the original written form of Genesis 1:1 might have looked something like NTHBGGNNNGGDCRTD…, and 1 John 1:9 might have looked similarly compressed. These examples are given in English, but are sufficient to explain what the biblical text might have looked like in its original Hebrew. The words were finally separated, and vowels inserted, by modern scribes. Even a single wrong division or insertion of a vowel by them could drastically change the meaning, though the original text is still intact. Further, instead of numerals, the ancient Hebrews used letters of the alphabet to express numbers. Thus names and words could often be numbers and vice versa, adding to the potential problems. Interestingly, most copyists’ errors are of such a nature that they do not affect the essential nature of the message of the Bible. Nor do any such errors affect any major doctrine of the Bible. This is because most of the errors are related to spelling and numbers (such as some ages mentioned in the chronologies) which do not affect the Bible’s message. A good proportion of the alleged difficulties are based upon the King James Version of the Bible, which was translated about four hundred years ago.
But since this translation was made, archaeologists have discovered thousands of ancient biblical manuscripts, some of which are more than two thousand years old. The science of recovering the original text by comparison of these manuscripts has developed to a high degree of precision, and many scholars have painstakingly worked out the original text with great certainty. All of this has helped scholars to make the newer translations like the New American Standard Bible and the New King James Version more accurate, even in trivial matters like numbers and ages mentioned in genealogies. It must be understood very clearly that there is no other ancient book of comparable age and size which is represented by so many ancient manuscripts, and therefore we are fully able to reconstruct the original autographs to a high level of precision without actually having them in our possession. Secondly, even though many errors had crept into the manuscripts used for making translations like the KJV, most of the errors (over 90%) were insignificant in nature. The small number of significant errors do not affect sincere readers because, by comparison of different manuscripts, the original words are being retrieved. In essence, no copyist’s error has affected the essential message of the Bible, and the remaining non-significant errors are now being corrected by scholars involved in Textual Criticism.

2–Difficulties Arising Out Of Translation Problems: Translating matter recorded in one language into another is quite difficult. The difficulty increases manyfold when idiomatic expressions from an extinct language, representing the speech pattern of an ancient society, have to be translated into present-day speech. The problems faced by Bible translators are beyond imagination, and these difficulties will automatically introduce many unknown errors into the translated text.
Sometimes a Hebrew or Greek word can be translated by many different words in another language, none of which might be adequate for a satisfactory translation. Translators can choose only one word, but that choice might not be fully appropriate. Further, if they use the same word throughout the Bible to translate the original word, they are being too narrow, and their language becomes too rigid. On the other hand, if they use different words in different places (as the context demands) to translate the same word in the original, then they may raise many other possible difficulties for the reader as well as for the expositor. As a consequence, every translation has to depend upon numerous carefully weighed compromises, and this is bound to cause many problems when the translation is widely circulated. The original autographs of the Bible were verbally inspired by God, and therefore they are inerrant and infallible, but the same is not true of translations. The paraphrases are removed further from the original text. Even the most faithful translation of the Bible contains some paraphrases, biases of the translators, wrong equivalents, and also archaisms. Archaic words are those which have lost or changed their original meaning so that they no longer mean what the translator intended them to mean. In addition to this, the Hebrew of the Old Testament and the Greek of the New Testament use hundreds of different figures of speech. It is not always easy for a translator to recognize them, and even after a correct recognition, it is not always easy to convey the full meaning into another language. In 1 Thessalonians 4:4 there is an expression about possessing one’s "vessel", an expression not at all easy for the translator. The Greek word used here for vessel may mean not only an actual vessel, but also a ship, the human body, and also one’s wife.
Even though all these meanings do not have equal weight for this Greek word, they are all important to understanding the correct meaning of the commandment. This puts the translator in a very difficult position because no other language in the world may have an exactly equivalent word with all these meanings attached to it. Many difficulties arise out of these problems of translation. Instances of translation-related problems abound both in the Bible and in common life. For example, "hitting the bull’s eye" is in common use in English, but many non-English translators have translated this expression literally into their languages (when translating books), creating havoc with the message. When translating the Bible it is common to find non-English speakers translating "the lamb of God" into their native languages as "God’s sheep’s child". Today many wonder about the use of the term "sister" for lover in Song of Solomon 4:9. The problem has been created by a literal translation of an endearing word that does not make sense to a non-Hebrew. The Scriptures have many sex-related words that sounded perfectly normal to the Hebrews, but that might be offensive to others if translated literally. Thus the translators are forced to substitute euphemisms, or even symbolic words, instead of making an accurate translation.

3–Difficulties Arising Out Of False Interpretation Of The Bible: The Bible speaks about numerous subjects: history, geography, politics, ethics, psychology, human relationships, etc. These statements have definite and clear-cut meanings because God does not deal in ambiguities. However, this does not imply that every person will necessarily understand everything found in the Bible. No human being can understand all human truth, and therefore it naturally follows that NO human can understand the whole of the divine truth.
Once a difficulty arises in the Bible, the human mind tries to solve the problem by substituting a possible interpretation for the intended meaning. Obviously all interpretations will have a human prejudice in them, and therefore the number of such interpretations might increase. Some of these interpretations might violently contradict the ideas cherished by others, and this might upset many people about the Bible. But the problem here is not with the Bible, but with the differing viewpoints of the people who are trying to bring out the possible meaning or implication of the biblical text under consideration. Many times people’s philosophical backgrounds bias them to such an extent that they start viewing the Bible in the light of these wrong notions. For example, for thousands of years people all over the world were under the influence of the Aristotelian cosmology, according to which the earth is flat and also the center of the solar system. Since this was the most dominant idea, almost all the people who read the Bible (laymen as well as scholars) interpreted many passages in the Bible to imply that the earth is the center of the solar system. Thus when the heliocentric view (according to which the sun is the center of the solar system) was advanced, a number of theologians rose up to oppose it. They were all labouring under the false notion that the Bible conforms to their Aristotelian philosophy, and that if anyone dared to question this philosophy, he was questioning the Bible. Nothing could be farther from the truth, but this kind of behavior can be seen even today. Even now there are people who believe that the earth is flat and that all the pictures which show the earth to be a sphere are clever fakes. Interestingly, many of these people claim that they have come to this conclusion about the solar system from their study of the Bible.
However, to an honest reader, who is willing to permit the use of figurative expressions in describing natural phenomena, it will become obvious that the Bible supports neither the concept of a flat earth nor the geocentric/heliocentric solar systems. When the activities of the infinite God are expressed using the finite language and limited concepts of mankind, difficulties are bound to arise. Most of the difficulties are of a predictable nature, and this is why we are studying the categories of difficulties. Once people are familiar with the commonest types of errors, they will be in a good position to tackle old as well as new problems when these are thrown at them.

4–Difficulties Arising Out Of A Wrong Conception Of The Bible: Many people think that when we say that the Bible is the word of God, of divine origin and authority, it means that every statement in the Bible has come from the mouth of God. But this is definitely a wrong notion. The Bible contains a record of what regenerate AND fallen men have spoken, what the good AND the fallen angels have spoken, and also what God has spoken. Divine inspiration only implies that all that is recorded has actually taken place exactly as stated in the word of God. It does not imply that all of it represents truth. Rather, both the true as well as the false statements of men and spirit beings, and even of animals, are recorded so that we might be instructed and warned when we study them in the light of the entire revealed word of God. For example, the fool’s comment that ‘there is no God’ is not recorded to imply that it is true, but to instruct us in what a human heart thinks when it is bent upon foolishness. The story of Jephthah in Judges 11, in which he vows to sacrifice whatever comes to meet him, is not recorded to approve what he did but rather to demonstrate the folly of hasty decisions.
The story of the unrighteous steward is recorded in Luke chapter 16 not to commend his unrighteousness, but rather to demonstrate how wise the worldly people are in money matters. A good portion of the book of Ecclesiastes demonstrates how regenerated persons think when they are out of fellowship with God. All kinds of statements uttered by men and spirit beings, and even by animals, are recorded in the Bible, not to approve them but to instruct us in what is right and what is wrong. All statements must be examined in the light of the entire word of God to see what God wants us to learn from them. In Daniel 2:11 we find a very interesting reference to the polytheistic ideas of the wise men who attended King Nebuchadnezzar’s court. But this does not mean that the Bible condones polytheism. Similarly, comparing Isaiah 36:10 with 37:6 brings out a false claim, but it does not imply that the Scriptures endorse it.
Whole grains are a rich source of nutrients and have shown beneficial effects on human health. This study was designed to systematically review the existing results and quantitatively assess the dose–response relationship of whole grain intake with all-cause and cause-specific mortality. We searched ‘whole grain’ or ‘whole grains’ in combination with ‘mortality’ or ‘cardiovascular disease’ or ‘cancer’ through the Web of Science and PubMed databases up to 20 January 2016. To be eligible for inclusion, publications had to be prospective cohort studies that reported the influence of whole grain intake on human mortality. Relative risks (RRs) and 95% confidence intervals (CIs) from the included studies were pooled by a random-effects model or a fixed-effect model. We included 19 cohort studies from 17 articles, with 1 041 692 participants and 96 710 deaths in total, in the analyses. We observed an inverse relationship of whole grain intake with risk of total, cardiovascular disease and cancer mortality. The pooled RR was 0.84 (95% CI 0.81–0.88, n=9) for total mortality, 0.83 (95% CI 0.79–0.86, n=8) for CVD mortality and 0.94 (95% CI 0.87–1.01, n=14) for cancer mortality, comparing the highest category of whole grain intake with the lowest. For the dose–response analysis, we found a nonlinear relationship of whole grain intake with risk of total, cardiovascular and cancer mortality. Each 28 g/d intake of whole grains was associated with a 9% (pooled RR: 0.91 (0.90–0.93)) lower risk of total mortality, a 14% (pooled RR: 0.86 (0.83–0.89)) lower risk of CVD mortality and a 3% (pooled RR: 0.97 (0.95–0.99)) lower risk of cancer mortality. Our study shows that whole grain intake was inversely associated with risk of total, CVD and cancer mortality. Our results support current dietary guidelines to increase the intake of whole grains. Government officials, scientists and medical staff should take action to promote whole grain intake.
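The pooling step described in the abstract — combining study-level RRs and their 95% CIs under a random-effects model — can be sketched as follows. This is a minimal illustration of DerSimonian–Laird inverse-variance pooling on the log scale; the function name and the study numbers are invented for the example and are not the paper's actual data or code.

```python
import math

def pool_random_effects(rrs, cis):
    """Pool relative risks with a DerSimonian-Laird random-effects model.

    rrs: list of study RRs; cis: list of (lower, upper) 95% CIs.
    Ratio measures are combined on the log scale, as meta-analyses do.
    """
    z = 1.96  # normal quantile for a 95% CI
    logs = [math.log(r) for r in rrs]
    # Standard error recovered from the CI width on the log scale
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    w = [1 / s ** 2 for s in ses]  # fixed-effect (inverse-variance) weights
    k = len(rrs)
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * li for wi, li in zip(w_re, logs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Illustrative, made-up study results (not data from this meta-analysis)
rr, lo, hi = pool_random_effects(
    [0.85, 0.80, 0.90],
    [(0.75, 0.96), (0.70, 0.91), (0.78, 1.04)],
)
```

With heterogeneity near zero the random-effects estimate collapses to the fixed-effect one; when studies disagree, tau² widens the pooled CI, which is why the abstract reports both model choices.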
This work was sponsored by the National Natural Science Foundation of China (NSFC 81370966). Baihui Zhang, Qingxia Zhao and Wenwen Guo organized the data and drew up the article. Xia Wang and Wei Bao designed the study and conducted the technical review of the manuscript. All authors agreed to the final draft. The authors declare no conflict of interest. Supplementary Information accompanies this paper on the European Journal of Clinical Nutrition website.

Cite this article: Zhang, B., Zhao, Q., Guo, W. et al. Association of whole grain intake with all-cause, cardiovascular, and cancer mortality: a systematic review and dose–response meta-analysis from prospective cohort studies. Eur J Clin Nutr 72, 57–65 (2018).
https://doi.org/10.1038/ejcn.2017.149
This Melchizedek was king of Salem and priest of God Most High. He met Abraham returning from the defeat of the kings and blessed him, and Abraham gave him a tenth of everything. First, his name means “king of righteousness”; then also, “king of Salem” means “king of peace.” Hebrews 7:1-2

Four kings from Mesopotamia went to war against five Canaanite kings who rebelled after having been subjugated to them for 12 years. Abram’s nephew Lot was taken captive and his possessions were taken as plunder by the four kings from Mesopotamia. Lot exposed himself and his family to danger by preferring to live in a fertile area near Sodom, even though the people who lived in the region were workers of iniquity. Not only was Lot himself taken captive, but so was his entire family. Ironically, Lot, who sought to enrich himself by living in the plain of the Jordan, ended up losing all of his property. Abram had a company of three hundred and eighteen trained men, born in his own house. This number of men is an indication of Abram’s wealth and power. Including women and children, there were probably more than 1,000 persons under his authority, with correspondingly large flocks and herds to feed, clothe and provide shelter for all of them. Although the armies of the five ungodly kings of the cities of the Valley of Siddim had fled in defeat, Abram was victorious. Gideon with a mere 300 men routed a great multitude of Midianites and Amalekites. In a similar fashion, because the LORD was with him, Abram not only defeated the armies of four kings with only 318 men, but also recovered all the stolen goods and kidnapped people.

After Abram returned from defeating Kedorlaomer and the kings allied with him, the king of Sodom came out to meet him in the Valley of Shaveh (that is, the King’s Valley). Then Melchizedek king of Salem brought out bread and wine.
He was priest of God Most High, and he blessed Abram, saying, “Blessed be Abram by God Most High, Creator of heaven and earth. And blessed be God Most High, who delivered your enemies into your hand.” Genesis 14:17-20a

This is the first recorded appearance of the priest of the Most High God (Hebrew – Kohen El Elyon). The name Melchizedek is the compilation of three Hebrew words, melek, iy and tsedeq. Melek means “king.” Meleky or melchi means “king of.” Tsedeq means “righteousness.” The meaning of the name Melchizedek is “King of Righteousness.” Salem (shalom) means peace. The King of Righteousness was also the King of Peace. Melchizedek brought out bread and wine, which are the emblems of the communion table, and blessed Abram.

The term “chazal” refers to the rabbinic sages who served as commentators on the Hebrew Scriptures. According to the Chazalic literature, specifically Targum Jonathan, Targum Yerushalmi, and the Babylonian Talmud, the name Melchizedek (מלכי־צדק) served as a title for Shem, the son of Noah. He also said, “Blessed be the LORD, the God of Shem! May Canaan be the slave of Shem. May God extend the territory of Japheth; may Japheth live in the tents of Shem, and may Canaan be his slave.” Genesis 9:26-27

Noah declared that Yehovah – the Great I AM – was the God of Shem. Through Shem, the middle son, the “promised seed of the woman” (Messiah) would be transmitted. Shem not only was still alive during the days of Abraham, but actually outlived Abraham. We have this hope as an anchor for the soul, firm and secure. It enters the inner sanctuary behind the curtain, where Jesus, who went before us, has entered on our behalf. He has become a high priest forever, in the order of Melchizedek. Hebrews 6:19-20

Jesus fulfilled the prophecy of Psalm 110:4, which declared: The LORD has sworn and will not change his mind: “You are a priest forever, in the order of Melchizedek.” Then Abram gave him a tenth of everything.
Genesis 14:20b

Abram gave a tithe of the recovered goods to Melchizedek. Without father or mother, without genealogy, without beginning of days or end of life, like the Son of God he remains a priest forever. Hebrews 7:3

Jesus, being 100% human, has a genealogy that proves He is the promised descendant of David, who is the promised seed of Eve. Jesus had both a natural mother and a father. But also being, in very nature, 100% God the Son, Jesus is eternal. The phrase translated as “like the Son of God” in the NIV is translated as “resembling the Son of God” in the English Standard Version and “but made like unto the Son of God” in the King James. The inspired author was trying to communicate the idea that God intentionally presented Melchizedek as a type of Christ, who foreshadowed the Son of God who was to come. The purpose of the phrase was to emphasize the unique nature of Melchizedek’s priesthood: that it did not pass from one person to another as did that of the Aaronic priesthood. And he says in another place, “You are a priest forever, in the order of Melchizedek.” Hebrews 5:6

The Scripture states that Jesus is our eternal High Priest “in the order” of Melchizedek, not that Jesus was Melchizedek. Just think how great he was: Even the patriarch Abraham gave him a tenth of the plunder! Now the law requires the descendants of Levi who become priests to collect a tenth from the people—that is, from their fellow Israelites—even though they also are descended from Abraham. This man, however, did not trace his descent from Levi, yet he collected a tenth from Abraham and blessed him who had the promises. And without doubt the lesser is blessed by the greater. Hebrews 7:4-7

Abraham the patriarch of Israel received several exceedingly great promises.
Now the LORD said to Abram, “Go forth from your country, And from your relatives And from your father’s house, To the land which I will show you; And I will make you a great nation, And I will bless you, And make your name great; And so you shall be a blessing; And I will bless those who bless you, And the one who curses you I will curse And in you all the families of the earth will be blessed.” Genesis 12:1-3 Abram, who was later named Abraham, became the father of Isaac. Isaac was the father of Jacob, whose name was changed to Israel. Jacob went down to live in Egypt where he and his offspring were referred to as the Hebrews. When the Hebrews were delivered from the bondage of Egypt after 400 years of slavery, they emerged as the nation of Israel. When the sun had set and darkness had fallen, a smoking firepot with a blazing torch appeared and passed between the pieces. On that day the Lord made a covenant with Abram and said, “To your descendants I give this land, from the river of Egypt to the great river, the Euphrates — the land of the Kenites, Kenizzites, Kadmonites, Hittites, Perizzites, Rephaites, Amorites, Canaanites, Girgashites and Jebusites.” Genesis 15:17-21 The LORD’s covenant promise of land given to the descendants of Abram, the children of Israel, was made while Abram was in a deep sleep. This covenant was a unilateral, unconditional promise. There were no terms that Abram or his descendants had to fulfill to earn the right to the land of Canaan. The LORD told Abram that he should know for certain that the LORD would give his descendants the land. It was the LORD himself who verified the covenant through the testimony of two witnesses. God is a consuming fire. He himself, symbolized by the smoking pot and the blazing torch, passed between the pieces. 
The smoking fire pot also pictures the furnace of affliction that the Hebrews would endure in Egypt, while the blazing torch represents the Shekinah glory that would dwell among them during their wilderness journey and in the Promised Land. the people of Israel. Theirs is the adoption as sons; theirs the divine glory, the covenants, the receiving of the law, the temple worship and the promises. Theirs are the patriarchs, and from them is traced the human ancestry of Christ, who is God over all, forever praised! Amen. Romans 9:4-5 From God through the nation of Israel came the patriarchs, the divine covenants, the Law of Moses, the prophets, the apostles and Messiah Jesus. Surely all nations have been blessed through Abraham. Although the high priests from the tribe of Levi, who would be among Abraham’s descendants, were the ones who received the tithes, Abraham, as great as he was, gave a tithe to Melchizedek and was considered lesser than Melchizedek, who blessed him. In the one case, the tenth is collected by people who die; but in the other case, by him who is declared to be living. One might even say that Levi, who collects the tenth, paid the tenth through Abraham, because when Melchizedek met Abraham, Levi was still in the body of his ancestor. Hebrews 7:8-10 The high priests from the tribe of Levi who collected tithes eventually died. In Genesis there is no record of Melchizedek’s birth, death or ancestry. Therefore, he is presented symbolically as “a priest forever” (him who is declared to be living). For although Levi wasn’t born yet, the seed from which he came was in Abraham’s body when Melchizedek collected the tithe from him. Therefore, in a sense Levi himself, who receives tithes, paid the tithe to Melchizedek through Abraham.
If perfection could have been attained through the Levitical priesthood—and indeed the law given to the people established that priesthood—why was there still need for another priest to come, one in the order of Melchizedek, not in the order of Aaron? Hebrews 7:11 This is a rhetorical question. There was a need for another priest in the order of Melchizedek. Perfection could never be obtained through the sacrifices of bulls and goats by the Levitical priesthood, because the blood of animals could only temporarily cover sin. It is impossible for the blood of bulls and goats to take away sins. Hebrews 10:4 For when the priesthood is changed, the law must be changed also. He of whom these things are said belonged to a different tribe, and no one from that tribe has ever served at the altar. For it is clear that our Lord descended from Judah, and in regard to that tribe Moses said nothing about priests. Hebrews 7:12-14 The Torah clearly states that the priesthood was given to the tribe of Levi. The Levitical priests–indeed, the whole tribe of Levi–are to have no allotment or inheritance with Israel. They shall live on the food offerings presented to the LORD, for that is their inheritance. Deuteronomy 18:1 The Levites did not receive an allotment of territory because they served in the temple and ate from the offerings. Jesus, the King of the Jews, did not descend from Levi. And what we have said is even more clear if another priest like Melchizedek appears, one who has become a priest not on the basis of a regulation as to his ancestry but on the basis of the power of an indestructible life. For it is declared: “You are a priest forever, in the order of Melchizedek.” Hebrews 7:15-17 What has been said, and made even more clear, is the impossibility of perfection coming through the Levitical priesthood; therefore, the priesthood and the law had to be changed. In addition, Jesus was from Judah, and the Messiah would be a priest like Melchizedek, not like Aaron.
Levitical priests were priests according to mortal flesh, but Jesus is a priest because of “the power of an indestructible life.” While the Levitical priesthood was temporary, based on physical ancestry, and a type of the coming reality, Jesus is eternal and the reality. The former regulation is set aside because it was weak and useless (for the law made nothing perfect), and a better hope is introduced, by which we draw near to God. And it was not without an oath! Others became priests without any oath, but he became a priest with an oath when God said to him: “The Lord has sworn and will not change his mind: ‘You are a priest forever.’” Because of this oath, Jesus has become the guarantor of a better covenant. Hebrews 7:18-22 Because the priests of the order of Levi were not sufficient, there was need of a still greater priesthood. This is the inspired testimony of David in Psalm 110, where he speaks of the LORD (Yehovah) Jesus as his Lord (Adonai), and exalts Him as king and priest. The Lord Jesus Christ was ordained to the priesthood, according to Psalm 110, in a manner distinct from all others. His ordination was unique, for neither Aaron, nor his sons, nor any of the priests of the tribe of Levi were ever ordained by an oath. But our Savior is made a priest by an oath. And it is written, as if to make it exceeding sure, that the Lord “has sworn and will not change his mind” (Psalm 110:4). By an oath that stands fast forevermore, Christ is made a priest forever after the order of Melchizedek. Now there have been many of those priests, since death prevented them from continuing in office; but because Jesus lives forever, he has a permanent priesthood. Therefore he is able to save completely those who come to God through him, because he always lives to intercede for them. Hebrews 7:23-25 Numbers 20:28 makes it clear that Aaron’s priesthood was not forever.
This is where Moses, Aaron and Eleazar go up the mountain, Aaron’s priestly garments are removed and given to Eleazar and Aaron dies there on Mt. Hor. Later in Joshua 24:33 Phinehas replaced Eleazar. Josephus said that there had been 83 high priests from Aaron until 70 AD. For if, when we were enemies of God, we were reconciled to Him through the death of His Son, how much more, having been reconciled, shall we be saved through His life! Romans 5:10 Jesus is the eternal mediator between man and God. He is the eternal savior who saves to the uttermost. He truly meets our needs. Such a high priest truly meets our need—one who is holy, blameless, pure, set apart from sinners, exalted above the heavens. Unlike the other high priests, he does not need to offer sacrifices day after day, first for his own sins, and then for the sins of the people. He sacrificed for their sins once for all when he offered himself. For the law appoints as high priests men in all their weakness; but the oath, which came after the law, appointed the Son, who has been made perfect forever. Hebrews 7:26-28
There are two primary types of rock climbing: indoor climbing and outdoor climbing. I will primarily be focusing on the outdoor aspect of rock climbing. In the arena of outdoor climbing, there are two primary categories of climbing: sport climbing and traditional climbing (also referred to as ‘trad’ climbing). Sport climbing: Sport climbing is performed on predefined routes where the participant does not have to place their own ‘protection’. This makes sport climbing accessible to beginners of the sport, because no specialized knowledge of how to place ‘protection’ (i.e. ‘cams’ and ‘hexes’) is necessarily needed; there are pre-placed bolts along the route with metal hangers for the participant to clip into. Sport climbing also tends to have a higher intensity than ‘trad’ climbing, because the main goal of sport climbing is the physical challenge, whereas the goal in ‘trad’ climbing is reaching a specific destination (e.g. the summit). Therefore, although sport climbing can be more accessible for beginners, it is also less accessible to people with lower fitness levels. Since bolts are pre-placed in sport climbing routes, the potential cost for beginners to the sport decreases, as they do not need to purchase or rent the otherwise required ‘protection’ equipment; the pre-placed route is also clearly marked out, so even beginners can partake in the sport safely and confidently navigate the course. There are five subcategories of sport climbing designed around the skillset of the participant involved. These subcategories are rated with a number, from 5.0, considered ‘easy’, up to 5.15, considered ‘very difficult’.
Sport climbing routes rated between 5.0 and 5.4 are aimed more towards the beginner, as they tend to be on a gradual incline, making it difficult to fall, and have much larger handholds and footholds, so participants do not have to have mastered foot and hand placement before taking part. At the exact opposite end of the spectrum, routes rated between 5.13 and 5.15 are considered to be extremely strenuous and tedious, meaning that only people with abnormally high levels of fitness would be able to complete them. These routes are considered very difficult and are aimed at participants who have mastered everything there is to learn about the sport and also have a significantly higher natural ability than most. This is because the routes usually take place on vertical inclines, with very technical sections, where any wrong move can lead to a potentially life-threatening fall. The handholds and footholds in this range of route difficulty are extremely small and thus require the participant to have both an above-par grip and an astronomically high level of patience. Although ‘protection’ equipment is not required on sport climbing courses, the participant still needs to provide or rent seven pieces of equipment: a rope (a sixty-meter rope is usually sufficient, but on some modern routes a seventy-meter rope may be required), a harness (although it can be any sturdy harness, a padded harness is recommended for beginners of sport climbing, as the participant is expected to fall frequently), shoes (climbing shoes), quickdraws (to attach to the pre-placed bolts), a helmet (so you don’t hurt your head if you fall), chalk (to maintain friction on your hands by drying up the oils produced by your skin) and a chalk bag (to store the chalk). In turn, to be able to participate in sport climbing, the participant would need to, at the very least, understand the basics of the equipment.
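The grade bands described above lend themselves to a simple lookup. Below is a minimal sketch in Python; note that the text only labels the two ends of the scale (5.0–5.4 as beginner-friendly, 5.13–5.15 as very difficult), so the intermediate band names and the function name here are illustrative assumptions, not part of any official grading standard.

```python
# Minimal sketch: bucket Yosemite Decimal System (YDS) sport-climbing
# grades into the difficulty bands described above. Only the "beginner"
# and "very difficult" bands come from the text; "intermediate" and
# "advanced" are assumed labels for the unlabeled middle of the scale.

def difficulty_band(grade: str) -> str:
    """Map a YDS grade such as '5.9' or '5.14' to a rough difficulty band."""
    major, _, minor = grade.partition(".")
    if major != "5" or not minor:
        raise ValueError(f"expected a 5.x grade, got {grade!r}")
    n = int(minor.rstrip("abcd"))  # tolerate letter subgrades like '5.10a'
    if n <= 4:
        return "beginner"
    if n <= 9:
        return "intermediate"   # assumed label, not from the text
    if n <= 12:
        return "advanced"       # assumed label, not from the text
    if n <= 15:
        return "very difficult"
    raise ValueError(f"grade beyond the 5.0-5.15 range: {grade!r}")

print(difficulty_band("5.3"))    # -> beginner
print(difficulty_band("5.14"))   # -> very difficult
```

A lookup like this makes the scale's structure explicit: the band boundaries are the only data, and any grade string either falls into one band or is rejected.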
Traditional (trad) climbing: Whereas sport climbing takes place on pre-planned routes with pre-installed safety equipment, trad climbing takes place primarily on routes that the participant has mapped out. In turn, the participant also has to place their own protection, which means they have to have learned more skills and techniques before participating in this activity. ‘Trad’ climbing as it is seen today is the way climbing was performed in the past (up to the 1980s), when it was simply referred to as climbing. It received its own specific name once sport climbing took off and a differentiation between the climbing types had to be created. ‘Trad’ climbing also requires more technical knowledge of climbing, the skill of route-mapping, and using and making anchors. This greatly differs from sport climbing, where you primarily use quickdraws alone to attach yourself to pre-placed bolts. Before the participant can embark on ‘trad’ climbing, they must first learn how to perform route-finding, as in ‘trad’ climbing the route to the destination is, more often than not, unplanned by the organization providing the climbing experience. ‘Trad’ climbing also requires more tactics: in order to prevent damage to their equipment, participants tend to be more careful and try to fall infrequently, as falling applies a lot of stress to the equipment, and climbing protection is quite expensive. There are two distinct types of protection used by participants of ‘trad’ climbing: passive protection and active protection. Passive protection comes in two basic forms: cams and wedges. Wedges are pieces of metal, usually attached to a piece of wire, which are tapered in such a way that they can be inserted behind a crack in a rock face. Cams, on the other hand, are more rounded, tend to be twisted, and are used to jam into hard-to-reach places.
Active protection, or spring-loaded cams, have three or four curved cams that are designed to pull inwards when the trigger of the device is pulled. Once the trigger is released, the device expands into the crack of the rock face. If the device is correctly positioned, it will not come loose even when the heaviest of shock loads is applied. Knowing how to use this equipment is vital if the participant is to avoid getting hurt. It is essential for beginners to learn how to make solid anchor points with their protection equipment before embarking on ‘trad’ climbing. Once they have learned the basics of how to use the protection equipment, it is ideal for beginners to attempt short, easy pitches before embarking on much longer routes. Regardless of the type of climbing performed, certain skills and techniques must be learned and followed. It is in the interest of the participants’ safety to be able to perform certain skills efficiently. An example of a skill that would be advantageous to practice before partaking in rock climbing is knot tying. A figure-eight and fisherman’s knot would be used to secure and attach the lead rope to the belayer’s climbing harness. If this knot is done incorrectly, then it could cause the rope to become detached from the lead climber, effectively making any protection equipment useless. Another skill that a rock climbing participant would need to learn before embarking on an expedition is how to belay the rope for their partner. Although this skill is less important for participants who are new to the activity, as it is unlikely they will take the lead climber’s position, it is still an important skill to have trained in for when they become more familiar with the sport. If the lead climber is unfamiliar with belaying, then if the secondary climber were to fall, the rope may have too much slack, making it more likely for both the lead and secondary climber to fall, as a higher fall means a larger impact force on the protection gear.
If the force is great enough, any protection that the participants have placed (‘trad’ climbing only) may come undone from the surface of the rock face, meaning both the lead and secondary climber would fall, leading to a potential fatality (if the route reaches a high enough altitude). Belaying ensures that the rope remains taut at all times, meaning that any fall would be reduced, which in turn keeps the protection sturdy, thereby preventing possible injuries. Once the participant reaches a certain level of proficiency, they may start to move on to more advanced skills and techniques. An example of an advanced skill is route mapping. Whilst beginners usually start rock climbing using predetermined routes that have a predetermined difficulty ranking, once participants become more proficient at climbing they can start to map their own routes. Route mapping can be challenging, as you can never fully determine how hard a route will be, or whether you are able to finish it. This is because routes mapped out by the participants themselves are not checked by rock climbing providers, so the participant can never be sure whether previous handholds and footholds still exist. Although this makes the activity dangerous, it also increases the challenge involved in performing the climb, as the participant has to constantly think on their feet to ensure that they remain in a safe position. Route mapping is aimed entirely at experienced climbers who have a strong understanding of basic climbing. Before participating in this sport, the participant must have a minimum level of fitness. Repeatedly pulling your own body weight up, or holding yourself in place long enough to plan your next move, can very quickly become strenuous on muscles that are not trained to that level of muscular endurance. In addition, it would be advantageous to have obtained an average level of flexibility before embarking on this activity.
Although flexibility is less important than muscular endurance, a decent level of flexibility would be advantageous, as it would allow you to move from point to point more easily than someone with less flexibility. There are some general skills that would be advantageous to learn before partaking in rock climbing. An example of one of these general skills is weather forecasting, as it is very important for the participant to know what the weather conditions are in the region of their chosen route. Making a mistake with weather forecasting can have dire consequences for the participant involved, as the weather may cause them to become stranded outside. For example, if it started to rain heavily, the rock face could become slippery and make it impossible to progress, or to go back, leaving the participant in an awkward situation. Next, I will be discussing mountain biking and some of the skills and techniques required to partake in it. Before starting mountain biking, the participant must first know how to perform the basic skills and techniques of the sport. Here I will outline some of those required skills. Braking: Although braking can be done exclusively using the back brake, effective braking can require some practice. When braking, you have to consider which brake to pull and how hard, remembering that the more weight a tire carries, the more potential braking power it has. This is because it presses harder into the ground, making it less likely to skid or come away from the ground. When going downhill, your front wheel carries more weight than the back wheel; therefore, by gently squeezing the front brake, you can help control and manage the speed you are going. But you should be careful not to squeeze the front brake too hard, because it may cause the front wheel to lock up and in turn cause you to be thrown over the handlebars. Going uphill: Before going uphill, it is advantageous to shift into a lower gear.
This will make it easier to pedal, making it less strenuous. Before changing into a lower gear, remember to ease up on your pedaling, as this will lower the pressure on the chain. Don’t just use one set gear for inclines; different types of terrain may mean that higher or lower gears are better for a particular incline. Through practice over time you will be able to find out what is best for you. Whilst going uphill, it is also advantageous to stay seated. Standing usually helps you climb a steep hill on a road bike, but in most cases you will find that on dirt, standing will cause the rear tire to lose its grip and spin out. Climbing up a hill requires traction, so stay seated as long as possible. Going downhill: When going downhill, it is most important of all to relax. Ensure you don’t lock up your elbows or clench your grip. If you lock your elbows, then you won’t be able to absorb any shocks or impacts as easily. Although you shouldn’t grip the handlebars too tightly, you should remember to maintain a strong grip of the handlebars in order to maintain stability. Mountain biking downhill is comparable to downhill skiing: in order to steer efficiently, you need to steer using body weight. By shifting your body weight in one direction, the direction of the bike should follow. Also, in many cases you will automatically lean slightly into that direction, making it seem that the bike goes in the same direction as where you look. Therefore, always focus on where you want to go. A good quote from Active.com states: “You should not think so much about steering but the direction in which you wish to go”. If you try to use the handlebars to steer whilst travelling at high speed, it will more than likely cause you to oversteer and lose control. Finally, whilst going downhill, it is important to stand above the saddle.
This will allow your legs to absorb most of the shock, instead of it all being absorbed by your posterior and, in turn, your spine. Cornering: When cornering, there is a golden rule that will make the process significantly easier: always look ahead. It is a very easy skill to gain, but even easier to let slip. Looking ahead can make a noticeable difference. When you look ahead around the corner, you twist your shoulders slightly, thereby moving your arms and in turn the handlebars. This will make it feel as if looking ahead has almost guided the bike around the corner. Before partaking in the world of mountain biking, you must consider some of the more generic skills and knowledge that you may need. An example of this is knowing how to check the weather forecast for the day you are going to take part in the sport. This is important because if you know it is going to rain in advance, you will be able to dress accordingly. Another generic ability that could be helpful is developing a risk assessment. By creating a risk assessment, you will assess all the risks of mountain biking on your chosen route and how to efficiently avoid them. For example, if you recognized that the route you were going to take was rocky, you could wear shin and elbow pads in order to prevent grazing and bruising. Mountain biking does not require the participant to attain the same level of fitness as an athlete, but an average level of muscular endurance would be advisable. This is because if you are halfway through a course and run out of energy, you can be a long way from any roads, meaning you may become stranded until you regain your energy.
- In 2018, community members in Ban Boon Rueang, in Thailand’s northern Chiang Rai province, successfully campaigned against plans to convert a wetland forest into a special economic zone. - The wetland, which supports residents’ livelihoods as well as providing a haven for wildlife, remains under the customary management of villagers. - However, it faces ongoing threats due to climate change and dam construction on the Mekong River. Local officials also cannot guarantee that future administrations won’t revive plans to convert the area for industrial use. Srongpol Chantharueang remembers his parents telling him as a boy always to protect the local wetland forest when he grew up. They told him that the ecosystem would be important for his life and that of his community. “I didn’t understand what they meant at the time,” he told Mongabay via a video call in December 2020. “I didn’t understand what the true value of the wetland forest was.” A village leader, Srongpol lives in Ban Boon Rueang, a town in Thailand’s northern Chiang Rai province. Nestled between the Doi Yao mountain range and the lower reaches of the Ing River, a 260-kilometer (160-mile) tributary of the Mekong, Ban Boon Rueang is an unassuming town with an agrarian lifestyle that goes back generations. But in recent years, community members have been roused to take action to protect the surrounding nature that provides them with sustenance and secures spiritual connections to their ancestors. The community averted tragedy in 2015 when the Thai central government declared the local wetland forest — a 483-hectare (1,200-acre) haven of biodiversity — a target for the development of a special economic zone (SEZ), as part of a nationwide strategy to expand infrastructure and attract foreign investment. Thailand has experienced rapid economic development, accompanied by population growth, urbanization and natural resource depletion, over recent decades. 
Between 1961 and 1998, Thailand’s forest cover decreased from 53% to 25% of the nation’s total area, representing a loss of 14.4 million hectares (35.6 million acres) of forest. Wetlands have been overexploited, cleared for agriculture, or filled in for the development of residential and industrial estates. Village inhabitants foresaw what the SEZ would mean for their treasured ecosystem: the wetland would be filled in with concrete and the trees cut down. Their response was to mobilize to protect the land, which they have customarily managed for more than 200 years. The campaign successfully convinced authorities that the wetland forest was more beneficial economically and socially in its natural state. In 2018, the Chiang Rai provincial government withdrew the proposal to use the site as an SEZ — a great victory for Ban Boon Rueang and for Srongpol. The community’s quiet and methodical revolution recently gained international recognition: they were awarded the 2020 Equator Prize by the United Nations Development Programme. “By asserting their rights to manage the wetland forest, the community members protect their rights, identity and their future,” said Warangkana Rattanarat, Thailand country director of RECOFTC, a nonprofit international organization that worked alongside the community to protect and sustainably manage the forest. “Their success is an inspiration to other communities who are fighting similar injustices and threats.” However, while the villagers successfully fought off the development proposal, they continue to grapple with such threats as climate change and upstream dams, which harm the integrity of the ecosystem by preventing vital seasonal flooding of the wetland forest. An important ecosystem Boon Rueang wetland forest is the largest in a network of 26 wetland forests that swathe the meandering lower Ing River, en route to its confluence with the Mekong River at the Thai-Lao border. 
It is credited with saving the village from the devastating floods that swept the region in 2010. Although neighboring villages were engulfed, Boon Rueang was comparatively unscathed owing to the capacity of the wetland to buffer the floodwaters. To the village residents, the wetland forest is a lifeline, providing clean water, fish spawning and nursery grounds, and a riparian ecosystem on which the community depends. “It is like a village kitchen,” Srongpol said. “Its charm lies in its seasonality. Every season there is something different that we can gather to eat.” According to RECOFTC, the annual cost of replacing the lost livelihoods and ecosystem services provided by the wetland forest would be an estimated $4 million. Srongpol chairs the Boon Rueang Wetland Forest Conservation Group (BRWFCG), which serves as the community’s governing body for its wetland forest. When faced with the SEZ development, the BRWFCG spearheaded the community’s resistance, engaging a broad range of stakeholders to help gather information about the wetland forest’s importance. Data compiled by the BRWFCG and its partners revealed that the riparian ecosystem supports at least 276 species, including 87 types of fish and several dozen edible plants. Recent camera trapping and DNA studies confirmed the presence of leopard cats (Prionailurus bengalensis) and near-threatened Eurasian otters (Lutra lutra). There are anecdotal reports of other species on the IUCN Red List, such as critically endangered Sunda pangolins (Manis javanica), vulnerable fishing cats (Prionailurus viverrinus), and near-threatened king cobras (Ophiophagus hannah). According to the 2020 WWF Living Planet Report, nearly 70% of global wetlands have been lost since 1900 and they are still being destroyed three times faster than forests. In Thailand, pressure to convert land for agriculture, aquaculture and industry has resulted in the loss of many wetlands, which accounted for 7.5% of the nation’s land area in 1999. 
Wetland forests like Boon Rueang are an increasingly rare lowland ecosystem in the region, with tremendous carbon storage potential — double the capacity of a mixed deciduous forest, according to RECOFTC. Despite the pivotal role of wetlands in countering climate change, supporting biodiversity and mitigating disaster risk, the 2018 Global Wetland Outlook, published by the Ramsar Convention on Wetlands, found that wetlands remain “dangerously undervalued” by policy- and decision-makers in national plans. The community action of the Boon Rueang residents is a rare beacon of hope for wetlands. “By conserving the wetland forest, the community helps protect biodiversity in the region while also mitigating climate change,” said Warangkana of RECOFTC. Rivers out of balance Nonetheless, climate change remains a pervasive threat. The region has been rocked by severe El Niño-driven droughts over the last few years. When combined with the changing flow regimes on the mainstream Mekong River due to upstream dams beyond the villagers’ jurisdiction, not even the most targeted community action can avert the consequences. The flooding patterns that sustain the Ing River and its wetlands are determined by the natural flow cycle in the roughly 2,050-km (1,270-mi) stretch of the upper Mekong River, from where it rises in the Tibetan plateau to its confluence with the Ing River in Thailand. During the wet season, which typically runs from August to November, the Mekong River transitions into a flood phase, which sends a surge of floodwater up the lower Ing River. This flood pulse nourishes waterways and lakes in the wetland forests and carries migratory fish from the Mekong into the Ing River, where they support the livelihoods and diets of local fishing communities.
Despite the importance of the upper Mekong’s natural flow regime to downstream habitats and communities, Chinese firms have so far built 11 hydropower dams on the mainstream river within China’s borders, with more completed or under construction in neighboring Laos. Studies implicate the upstream dams in weakened flood patterns in downstream catchments over recent years. A 2020 report by researchers from Eyes on Earth, a U.S. environmental research group, suggests that Chinese dams regulating water flow during the wet season in 2019 exacerbated the effects of climate-driven El Niño droughts for communities downstream. The upstream Mekong’s weakened flood pulse sets off “a domino effect throughout the ecosystem,” according to Teerapong Pomun, director of the Thai NGOs Living Rivers Association and the Mekong Community Institute, which work with communities along the Ing River to manage their wetland forests. Dwindling water levels and the weakened Mekong flood pulse due to the combination of dams and El Niño-driven dry spells are a serious concern here. “Villagers report that the Mekong River no longer fluctuates according to seasonal flood patterns, meaning that the river levels are abnormally low and fish replenishment has decreased,” said Warangkana of RECOFTC. In Boon Rueang, Srongpol mentions how it was once necessary to move about the wetland forest by boat during the wet season. “For the past few years, it has been possible to walk on foot year-round,” he said. Thai civil society organizations have been campaigning against development schemes in the Mekong River for more than two decades. “We try to raise the issue among local people and petition decision-makers in China, but it is very hard to have our voices heard,” Teerapong said. The recent inauguration of the Mekong People’s Forum, comprising civic groups from eight Thai provinces that border the Mekong, helps to address such issues at the policy level. “We need to unite our voices,” Teerapong said. 
“The dams are not just a problem for people living along the mainstream Mekong; this is a problem for communities along the tributaries too.”
Conservation and international protection
Through BRWFCG, the Boon Rueang community has an active voice in such regional advocacy platforms and continues its drive to sustain a healthy wetland forest. Its approach has inspired nearby villages to take similar action to stand up against the pervasive threat of land grabbing. By following customary management approaches that focus on living in balance with nature, the community has mitigated some impacts. Sixty-three communities along the Ing River have established fish conservation zones, where fishing activity is restricted to protect vulnerable spawning habitats. These fish conservation zones are steeped in Buddhist traditions: the waters are blessed by a local monk and, thus, respected. Studies by Living Rivers Association have found that fish size and numbers increase in these zones. The BRWFCG is also cultivating a plant nursery and organic fertilizer system to restore the wetland forest and its soil quality. Due to the community’s efforts, the immediate threat from SEZ development has abated. However, there is still no guarantee that developers will not target the area again in the future. Over the years, the community has repeatedly fought off attempts to convert the wetland for other purposes, such as for factories and plantations. “When the SEZ proposal was withdrawn, we asked the governor to guarantee that nobody would touch the land,” said village leader Srongpol. “But the governor said when he retires there is no guarantee that the next administration will not want to develop on it.” The community does not formally own the wetland forest land. In 1967, a Public Land Certification granted the community legal rights to fish, graze buffalo and establish a community forest from which they can gather non-timber forest products.
Agreements that govern such activities “are not enough to protect them legally,” according to Teerapong. For this reason, the community is working with partners to designate the wetland forest landscape of the lower Ing River, including the Boon Rueang wetland forest, for global protection as a Wetland of International Importance under the Ramsar Convention. Although Thailand has 15 sites designated under the Ramsar Convention, totaling more than 400,000 hectares (990,000 acres) of wetlands, the habitats of the Ing River watershed are not among them. Advocates say they hope that such protection under international law will safeguard the important wetland forest ecosystems from future development pressure. The Ramsar bid has the endorsement of several villages and the Chiang Rai provincial leader. However, outreach and dialogue continue with communities throughout the river basin to ensure their inclusion in the proposal. Teerapong estimates the proposal will be ready within the next two years. For Srongpol Chantharueang, the simple sight of water buffalo grazing in the wetland sparks joy. They have been a constant in the wetland forest landscape since his boyhood. He knows his community cannot afford to get complacent. “To destroy the forest is very easy,” he said. “To protect it and improve it for nature and the local people is much more challenging.”
While those paintings from my post “We Didn’t Start the Fire” are fresh in your mind, I want to talk about the relationship between modern art and medieval art. Modern art is often thought of as something new, but in many ways it is a return to the medieval artistic tradition. It began with a strong influence from primitive art, and it is no coincidence that primitive art, medieval art, and modern art share so many defining characteristics. They’re all based on philosophies situated at one extreme of what I call the principal dimension of art, which for the moment I’ll call idealism versus realism.
Cycladic head, Greece, circa 2500 BC
That’s not all there is to it, but rather than explain that, which will take many posts, let’s look at similarities between modern art and medieval art. Note the symbolic choice of face colors in Munch’s 1895 Jealousy from my previous post, Picasso’s use of blue throughout his blue period to show misery and despair, and the use of colors associated with death, disease, mold, and corruption rather than the colors of wolves for Jackson Pollock’s The She-Wolf.
Fresco, St. Georg Church, Reichenau, Germany, 10th century A.D.
Picasso, Factory, 1909
Note the pseudo-perspective on the buildings in “Factory”, painted as a child paints a building, where no one perspective is privileged, but each side looks more like it would if you were looking more directly at that side. This is called Cubism, and is the same approach used to draw the buildings in the Reichenau frescoes. The general multiple-simultaneous-perspective approach is also common in primitive art and is strictly mandated in ancient Egyptian art.
Essences govern medieval and modern art
A standard explanation is that Cubist painting and related styles depict the different sides of objects simultaneously, to give a truer picture of the object than one would get from a realistic drawing using perspective.
I think this reveals the underlying motivation: All such paintings are made by people whose philosophies say that a realistic picture of an object is not a true picture. They are attempts to convey more of the “essence” of a subject than you could perceive simply by looking at it. This principle governs medieval art. That’s why it’s so unrealistic. Principles of medieval art include:
– Instead of perspective, draw the most-important side of each figure.
– Size is used to show importance rather than distance.
– Colors are used for their symbolic meanings rather than to be realistic.
– Space is not represented, as it is unimportant.
All these principles recur in modern art.
Medieval painting, possibly of the Ark, source unknown
Cezanne, Four Bathers, 1890
Picasso, Les Demoiselles d’Avignon, 1907
Size and space
Cezanne’s painting places the bathers in a three-dimensional scene. There is space between them, and you can tell where they’re standing. Their sizes have been curiously inverted: the one farthest from the viewer, but most-central to the picture, is drawn the largest, while the one closest to the viewer, but compositionally least-important, is the smallest, as in medieval art. In Picasso’s painting, size varies at random throughout each figure. Not only can’t you tell how to place these women in three dimensions, you can’t tell whether the central two are standing up or lying down. All five are scrunched together unnaturally closely, their figures filling the canvas. Space, and relations between people or things, are no longer important. In the medieval painting, the largest figures are, perhaps, Noah and his wife, not because they are closest, but because they are most-important. They look upwards because they are closer to God than the other figures are. The sky is gold to indicate they are performing the will of God (see Benton p. 70). It’s hard to say where the other people are, or where the boat (Ark?) is. The green squiggles signify more than depict water.
In medieval paintings, every person was drawn in a position and posture indicating their relationship to God. In modern art, there is no God. The world has lost its center, and so positions and postures should remain ambiguous and unlocatable, adrift in a space without coordinates. If you scroll through all the paintings in my previous post, you’ll see there is no space in Modernist painting, even in paintings that are representational. This is not AFAIK adequately accounted for by modern art critics, because their perspective does not allow them to notice that it’s missing. Space is not shown in modern art because space is not a property of objects. Modern artists, and the continental philosophy their work is based on, focus on what the essence of an object or the meaning of a word is, and have forgotten about the question of how objects or words relate to each other, or combine to give meaning.
Structure and creativity
Modern art rejects the notion of structure, whether the structure of natural objects, the composition of a painting, the dramatic or logical structure of a story, or the graceful and efficient load-bearing structure of a dome or arch as opposed to the structurally-indifferent brute power of right-angled steel bars. This rejection of structure is known as Structuralism. The idea is that rather than being aware of structures, relationships, and measurements, you need only be aware of contrasts. Mark Rothko’s paintings are the apotheosis of this notion. It originates in Saussure’s linguistic theory, which says that words are defined not by properties of the things they denote, nor by algorithms, but by the set of words which contrast with or bound them. For instance, “tall” is not defined by an understanding of physical properties that make an object tall, but by being the opposite of “short”. “Wrist” is defined not as a particular anatomical feature, but as that word which lies semantically between “hand” and “forearm”.
Structuralism, however, is purely topological, uninterested in the metric space words lie in or the set-membership functions one might use to define them. That means they don’t care about how far apart the meanings of words are, the fuzziness or interpenetrability of their boundaries, or any unclaimed space between them. Philosophically, they’re reverting to Aristotelian logic, in which only first-order predicates exist, there are no quantifiers and no measurements, no action at a distance, and all reality conforms to a kind of Law of the Excluded Middle in which everything must be This or That. Re. the Law of the Excluded Middle, we may also observe that they’re reverting to a belief in Aristotle’s claim that empty space is impossible [2, 3]. Art of the High Middle Ages, similarly, had no notion of compositionality–the idea that a composition is more than the sum of its parts, or has some property other than the collection of the properties of its parts. I’ll give detailed support for this claim in a later post. Ancient and medieval philosophy was focused on the idea that the answers to mysteries lay in the essences of objects, not in relationships between objects. Aquinas’ theory of imagination was that it produced images of remembered objects, and could construct a new image either by putting together multiple objects which had never been seen together, or by putting together formal properties from different known objects, “as when from the imaginary form of gold, and the imaginary form of a mountain, we construct the one form of a golden mountain, which we have never seen” (Eco p. 110). There was no allowance in the medieval scholastic theory of the imagination or creativity to conceive of, say, a wheel, other than through having formerly seen a wheel. That would be a creative act, and to suggest that humans were creative would have been heresy. Much of post-modern theory can be described as making the heresy of creativity unthinkable. 
This is why post-modernists like fan-fiction. Well, not enough to actually read it, but enough to write articles about it. They think it’s inherently uncreative (e.g., Coppa p. 231, 232, 245; Jamison 2013b; Wershler), and that it validates their claims that proper literature is not creative (Barthes 1971), but merely recombines elements of previous literature, the way Aquinas thought ideas simply recombine things people have seen before. (This belief may be technically true, but it is more misleading than informative due to the inability of humans to conceive of the degree to which the human brain decomposes sensory information.) If you compare all the paintings in my previous post to 19th-century paintings, you’ll notice stylistic or technical differences. Modern art:
– is crudely drawn
– uses a small number of colors, usually a subset of red, yellow, white, blue, black, green, and brown
– chooses colors for their symbolic or emotional values
– does little blending or shading of colors, and only for one-dimensional gradients–it never blends three colors to show realistic shadows, or shadows and a color gradient at the same time
– does not use perspective or uses multiple simultaneous perspectives
– does not depict empty space; stuffs the picture full (unless it’s an empty-canvas conceptual piece)
– does not try to depict objects realistically
– does not show people having emotions (a trend which began with Manet and Seurat’s coolly dispassionate evening parties and picnics, and harks back to neo-classicism and classical Greece)
This is nearly the same as the list of differences medieval paintings have from Renaissance paintings! Modern art, conceptually and technically, rolled back the Renaissance. This is because it’s based on a philosophy which rolls back the Enlightenment and Renaissance to return to medieval conceptions of the world. More on that later. “Primitive” is a controversial word now.
In most cases it’s more precise to say “hunter-gatherer”, but we can’t for art, because we often don’t know whether an ancient society was a hunter-gatherer society. We can, however, usually look at its technological artifacts, and its art, and say whether it was primitive. If anyone is offended by the term, their presumption that the term is an insult only proves their own prejudice against primitive societies. One good source for the influence of primitive art on modern art would be a biography of Picasso, but that’s just scratching the surface. McGill (1984) describes a large art exhibition arguing that the influence of primitive art on modern art was more philosophical than formal and that Picasso wanted to return to irrationalism and ritualism. I would say philosophy and form always go hand-in-hand. Aristotle’s claim is cleverer than it at first appears, and might be correct in two senses. One is that a vacuum must have quantum fluctuations; the second has to do with how space is created by mass in general relativity. Much of deconstructionism can be concisely described as the claim that reality is unknowable because Aristotelian logic and physics don’t work in real life, but that’s also a topic for a separate series of blog posts. Of course there are exceptions–probably thousands or even tens of thousands of exceptions. Matisse, a Fauvist, maintained a sense of space. Georgia O’Keeffe blended three colors to show color-realistic shadows. But these rules probably hold for more than 90% of modern art.
Barthes, Roland, 1971. “From Work to Text.” In Leitch et al., pp. 1326–1331.
Benton, Janetta Rebold. Materials, Methods, and Masterpieces of Medieval Art.
Coppa, Francesca. “Writing Bodies in Space.” In Hellekson & Busse 2014, pp. 227–246.
Eco, Umberto, 1959 (translated 1986). Art and Beauty in the Middle Ages. Yale University Press.
Hellekson, Karen, & Kristina Busse, 2014. The Fan Fiction Studies Reader. Iowa City: University of Iowa Press.
Jamison, Anne E. Fic: Why Fanfiction Is Taking Over the World. Dallas, TX: Smart Pop, an imprint of BenBella Books, Inc., 2013.
Jamison 2013b. “An Interview with Jonathan Lethem.” In Jamison. (No page numbers in e-book.)
Leitch et al., eds. The Norton Anthology of Theory & Criticism. New York: Norton, 2010.
McGill, Douglas, 1984. “What Does Modern Art Owe to the Primitives?” New York Times, September 23, 1984.
Wershler, Darren. “Conceptual Writing as Fanfiction.” In Jamison, pp. 408–417.
Unethical business practices have been around since pretty much the beginning of business. We’ve all heard the term "snake oil salesman," which originated in the 1800s when Americans would sell fraudulent oil made from rattlesnakes when the real, effective product was made from the Chinese water snake. Today, people seem even more critical of unethical behavior, from Amazon’s alleged warehouse worker conditions to Nestle reportedly profiting from the limited water supply from California’s desert regions. Running a business is sometimes high stakes, especially if there’s a lot of competition in the market. No matter how cut-throat it is, if customers perceive you as selfish, there will always be backlash. You have the choice to run your business with high ethical standards or not, but what does ignoring your social responsibility actually cost? Avoid these common unethical practices.
What Is Business Ethics?
Business ethics is the study of appropriate business policy and practice. Typically, this is guided by the actual law, but there are many loopholes that raise controversy among consumers. Is Amazon unethical to put workers in a high-pressure environment? Is it unethical for billion-dollar companies like Walmart to pay paltry wages to workers, even if that wage is above the federal minimum? Just because companies aren’t performing outright illegal practices like insider trading, discrimination and bribery doesn’t mean they’re not still raising ethical issues. In fact, some unethical practices may seem so minor that businesses don’t even notice they’re happening. There is usually some sort of gray area, but businesses have an ethical responsibility toward:
- Their workers
- Their consumers
- Their partners
- The environment
- The general public
- The greater society
Those who put their profits or companies over any one of those sectors may indeed face backlash for unethical business practices.
Conflict of Interest
Conflict of interest is one of the more absentminded unethical business practices. Companies don’t always realize this is happening because they’re so busy going about their daily lives and are often very close to the issue. Commonly, you’ll see conflict of interest policies in the media, especially at companies with strong journalistic standards like the New York Times, and this helps promote more ethical reporting. For example, ethical publishers opt to disclose conflicts of interest within a story, which could be as small as stating they were given a free product for a review or as large as disclosing that they’re financially tied to the subject on which they’re reporting. In 2020, this was a major issue during Michael Bloomberg’s presidential campaign. How was Bloomberg News going to ethically report unbiased information about its boss, especially when, according to the New York Times, company guidelines prohibit it from reporting on his wealth and personal life? The publication attempted to mitigate this issue by abstaining from investigating Bloomberg and his political rivals. Sometimes, the best course of action in a conflict of interest is to recuse yourself entirely.
False advertising, also known as misleading product information, doesn’t just get you in trouble with consumers — you can face serious issues with the Federal Trade Commission. False advertising generally falls into the following categories:
- Deceptive product descriptions: The laws on this are particularly strict in the world of health, beauty, pharmacy and food. You cannot use terms like “organic” or “natural” unless they meet USDA guidelines, you can’t have misleading illustrations and pictures and you definitely, definitely can’t misrepresent ingredients. For example, Dannon had to pay out $45 million in a class-action lawsuit in 2010 for falsely claiming Activia and DanActive had “clinically proven” health benefits.
- Deceptive pricing: This typically involves hidden surcharges that substantially inflate the price of a product or when companies inflate the price of a product and then pretend it’s on sale. The former often happens with phone companies that hide additional fees (like texting limits and out-of-network fees) and make unauthorized charges on a confused customer’s bill.
- Deceptive measurement: You can’t use different kinds of measurement or packing material to make consumers feel like they’re getting more than they are.
- Deceptive comparisons: Companies may want to compare their product to their competitors’, but they must proceed with caution and keep comparative claims general rather than factually specific. For example, in 1997, Pizza Hut sued Papa John’s for using the slogan “better ingredients, better pizza.” The suit was dismissed after the slogan was ruled to be a subjective claim — mere puffery — rather than a measurable statement of fact.
- Deceptive guarantee or warranty: You must be able to — or at least intend to — deliver what you say you’re going to deliver.
The best course of action to avoid false advertising is to just be honest. Remember that with ethical business practices, transparency is always key.
Mistreating employees is one of the major unethical business practices across the globe. It’s not always limited to illegal activities like child labor. It can fall into a number of gray areas or hard-to-enforce practices like:
- Misclassifying employees as contractors to avoid paying for benefits.
- Paying very low but technically legal wages.
- Utilizing labor from developing nations to pay cheap wages.
- Enforcing strict productivity rules that cause employees to sacrifice their health and safety.
The thing about mistreating employees is that it doesn’t always present itself in an extreme way, like Amazon reportedly pressuring warehouse workers to the point that they urinate in bottles for fear of being disciplined for taking a bathroom break. At the heart of it, it usually boils down to choosing profits over the welfare of employees.
Even something as seemingly small as a bootstrapped business having no paid-sick-day policy can be considered unethical depending on the circumstance. For example, a 2014 report from the Centers for Disease Control and Prevention found that 20% of service workers came into work at least once that year while suffering from vomiting or diarrhea, but a study also found that offering paid sick days to low-paid employees who could not otherwise afford an unpaid day off can greatly mitigate this. Amidst the COVID-19 outbreak in the United States, companies like Walmart, McDonald's and Dollar Tree have come under fire for not giving employees paid sick leave and endangering the lives of consumers who come into contact with employees who may be sick. It’s never good to have incorrect financial statements, but what’s even worse is knowingly manipulating records to make your business appear more profitable. Part of corporate responsibility is reporting correct statements, particularly to shareholders and the IRS. In 2015, Toshiba came under fire for this very practice and was forced to pay a $60 million fine as a result. The company had exaggerated profits by $1.2 billion, which in turn manipulated shareholder confidence. This isn’t something that just has to do with manipulating financial statements to get tax breaks or to bolster investor confidence — companies can also manipulate active user or consumer numbers. In 2016, Wells Fargo was fined $185 million for creating millions of fake credit card accounts.
Poor Environmental Practices
One of the major examples of unethical behavior is when companies negatively affect the environment and wildlife around them. This can happen with all types of businesses, from companies that pollute waterways and emit large amounts of greenhouse gasses to those who accidentally (or purposely) contribute to deforestation or the mistreatment of animals.
For example, SeaWorld habitually came under fire for breeding orcas and holding the animals in captivity until it banned the practice. Similarly, Ringling Brothers was slammed for using elephants in its performances until it removed the creatures from its shows. Beyond that, major companies like Nestle have faced numerous controversies for equally controversial business conduct. For example, the snack and beverage company was met with so much resistance for trying to bottle spring water in California, an area that habitually experiences widespread droughts, that it was forced to abandon plans for its largest water-bottling plant. Greenpeace International also launched a campaign against the brand in 2010 for using palm oil in products like Kit Kat bars because the product is linked to rain forest destruction in Indonesia.
Sexual Harassment and Discrimination
An increasing number of companies have come under fire not only for sexual harassment and discrimination but for paying off former employees with nondisclosure agreements, essentially buying their silence on the unethical practices. For example, 21st Century Fox came under fire after sexual harassment allegations were levied against news chief Roger Ailes and host Bill O’Reilly. The company later agreed to pay $90 million in shareholder settlements because the repeated instances — and the fact that it knew of O’Reilly’s allegations when it renewed his contract — created a company culture that allowed for this unethical behavior. This can be avoided with strict human resources protocols, full investigations and a company culture where victims are encouraged to speak up rather than be punished.
Bribery and Lobbying
A business may be tempted to give a little kickback for favorable treatment, but this is inherently dishonest. Oftentimes, bribery isn’t as cut and dried as handing a few bucks to an inspection officer for a favorable report. Some people view the legal practice of lobbying as unethical.
With this business practice, companies and organizations donate large sums to politicians and lawmakers in hopes it will have some sort of influence. In general, lobbying is a legal practice, and the legality lies in the fine print. Bribery is considered paying for guaranteed power, but lobbying is considered hoping to influence power. Be careful about what you lobby for, though, as consumers view your causes under close scrutiny. For example, Chick-fil-A has long faced criticism for donating to organizations that allegedly lobby against LGBTQ rights.
What Happens if You Face a Difficult Ethical Decision?
Companies often feel pressured to make certain decisions. This may not exactly be unethical, but it really depends on whom or how many people it affects and to what extent. This is the main thing you need to weigh when making decisions. For example, what happens when a company that manufactures microchips finds a defect in one chip in a batch that must be immediately sent to a supplier that makes computers or risk losing the entire contract? There’s a fair chance that just one or two microchips are defective, but there’s also a chance that there is a widespread defect, and once the company puts them into computers and sells them, it will face mass public backlash that it could have prevented. What do you do: Delay the product release or hope for the best? Transparency is usually considered the most ethical practice, so in this specific situation, you may want to discuss the quality control issue with the supplier so an informed business decision can be made.
How to Create an Ethical Workplace
It’s not actually that hard to create an ethical workplace, though business owners are repeatedly met with challenges. Companies and business owners can:
- Promote transparency: Transparent businesses allow shareholders and consumers to make informed choices. They will not be accused of pulling the wool over someone’s eyes for a sale.
- Lead by example: Upper management should always be the example of what is and is not appropriate within a workplace so that lower-level employees can follow their lead.
- Have comprehensive workplace policies: Ideally, even if you’re a small business, this should be handled by an HR professional. These standards should be clearly communicated to staff, and they should protect employees and allow them to openly discuss sensitive matters.
- Offer ethics training: Things like sexual harassment training can help reduce instances of unethical behavior among employees.
- Actually punish unethical acts: A slap on the wrist for a harasser is going to do nothing but show employees that this is acceptable. Similarly, you should reward whistleblowers who help maintain an ethical workplace and call out bad behavior.
- Lead with compassion: Ethical employers are compassionate employers. They are able to sympathize with the struggles of employees or even the greater society and environment and do their best to minimize harm whenever a situation should arise.
- Investopedia: Business Ethics
- The New York Times: Bloomberg News’s Dilemma: How to Cover a Boss Seeking the Presidency
- The New York Times: The Companies Putting Profits Ahead of Public Health
- New York Post: Amazon Workers Pee Into Bottles to Save Time: Investigator
- Mass Device: Toshiba Agrees to Pay $60m Fine in Accounting Scandal
- Justia: False Advertising
- Snopes: Chick-fil-A and Same Sex Marriage
- Investopedia: Why Lobbying Is Legal and Important in the US
- Vox: Wells Fargo Cheated Millions of Customers. The Republican Tax Bill Is About to Hand it a Big Win.
- Small Business Bonfire: 5 Extremely Common But Very Distasteful Unethical Business Practices
- Corporate Research Project: Nestle
- Fortune: The 10 Biggest Business Scandals of 2017
Mariel Loveland is a small business owner, content strategist and writer from New Jersey.
Throughout her career, she's worked with numerous startups creating content to help small business owners bridge the gap between technology and sales. Her work has been featured in publications like Business Insider and Vice.
This is part 5 of the forgotten civilisation series. Part 4 covered the Shishunaga Dynasty and the Nanda Dynasty, whereas part 3 focussed on the Haryanka Dynasty! Part 1 was where we started exploring the Indus Valley Civilisation, and part 2 covered the Mahajanpadas. Let’s explore the Pandya Dynasty today. The word Pandya is derived from the ancient word “pandu”, meaning “old”. According to the Indian ancient history archives, the three brothers Cheran, Cholan, and Pandyan ruled in common in the southern city of Korkai. While Pandya settled at home, his two brothers Cheran and Cholan, after parting ways, founded their own kingdoms in the north and west. The poem Silappatikaram mentions that the emblem of the Pandyas was that of a fish. History credits Alli Rani (literally “the queen Alli”) as one of the early historic rulers of the Pandyas. She is remembered as a queen whose assistants were men, while her administrative officials and army were women. She ruled the whole western and northern coast of Sri Lanka from her capital Kudiramalai, where remains of her fort are found. She is seen as an embodiment of the Pandya gods, Meenakshi and Kannagi. The Pandyas can thus be credited with an early, real instance of women’s empowerment.
Archaeological sources of Pandya history
The Pandyas are also mentioned in the inscriptions of the Maurya emperor Asoka (3rd century BCE). In his edicts, Asoka refers to the peoples of south India – the Chodas, Keralaputras, Pandyas, and Satiyaputras. These dynasties, although not part of the Maurya empire, were on friendly terms with Asoka. This shows that Akhand Bharat, which historians have neglected, was a geographic reality over 2,000 years ago. The earliest Pandya to be found in an epigraph is Nedunjeliyan, figuring in the Tamil-Brahmi Mangulam inscription (near Madurai) assigned to the 3rd and 2nd centuries BCE. Silver punch-marked coins with the fish symbol of the Pandyas dating from around the same time have also been found.
Pandyas from Recorded History

The Pandyas are said to have been established over 5,000 years ago, but that early history is not properly documented; there are some mentions of the Pandyas around 1600 BCE. From the recorded history onwards, the three chiefly lines of early historic South India – the Cheras, Pandyas, and Cholas – were known as the mu-vendar (“the three vendars”). They were traditionally based at their original headquarters in interior Tamil Nadu (Karur, Madurai, and Uraiyur respectively). The powerful chiefdoms of the three vendars dominated the political and economic life of early historic south India. The numerous conflicts between the Chera, the Chola, and the Pandya are well documented in ancient (Sangam) Tamil poetry. The Cheras, Cholas, and Pandyas also established the ports of Muziris (Muchiri), Korkai, and Kaveri respectively. The gradual shift from chiefdoms to kingdoms seems to have occurred in the following period.

Pandyas in the 7th–10th Centuries CE

The Pandya kingdom was restored by king Kadungon towards the end of the 6th century CE. With the Cholas in obscurity at Uraiyur, South India was divided between the Pallavas of Kanchi and the Pandyas of Madurai. From the 6th century to the 9th century CE, the Chalukyas of Badami, the Pallavas of Kanchi, and the Pandyas of Madurai controlled the politics of South India. The Badami Chalukyas were eventually replaced by the Rashtrakutas in the Deccan. The Pandyas took on the growing Pallava ambitions in south India, and from time to time they also joined alliances with the kingdoms of the Deccan Plateau (such as with the Gangas of Talakad in the late 8th century CE). By the middle of the 9th century, the Pandyas had managed to advance as far as Kumbakonam (north-east of Tanjore on the Kollidam river). Sendan, the third king of the Pandyas of Madurai, is known for extending his kingdom into Chera country (western Tamil Nadu and central Kerala). Arikesari Maravarman (r.
670–700 CE), the fourth Pandya ruler, is known for his battles against the Pallavas of Kanchi. The Pallava king Narasimhavarman I, the famous victor of Badami, claimed to have defeated the Pandyas. The Chalukya king Paramesvaravarman I “Vikramaditya” is known to have fought campaigns against the Pallavas, the Gangas, and apparently the Pandyas too, on the Kaveri basin.

Under Chola Influence

While the Pandyas and the Rashtrakutas were busy engaging the Pallavas, with the Gangas and the Simhalas (Sri Lanka) also in the mix, the Cholas emerged from the Kaveri delta and took on the chieftains of Thanjavur (the Mutharaiyar chieftains had transferred their loyalty from the Pallavas to the Pandyas). The Chola king Vijayalaya conquered Thanjavur by defeating the Mutharaiyar chieftain around c. 850 CE. Pandya control north of the Kaveri river was severely weakened by this move (which also strengthened the position of the Pallava ruler Nripatunga). The Pandya ruler Varaguna-Varman II (r. c. 862–880 CE) responded by marching into Chola country, facing a formidable alliance of the Pallava prince Aparajita, the Chola king Aditya I, and the Ganga king Prithvipati I. The Pandya king suffered a crushing defeat (c. 880 CE) in a battle fought near Kumbakonam. By 900 CE, the Chola king Aditya I was master of the old Pallava, Ganga, and Kongu territories. It is possible that Aditya I conquered the Kongus from the Pandya king Parantaka Viranarayana (r. 880–900 CE). Parantaka I, the successor of Aditya, invaded the Pandya territories in 910 CE and captured Madurai from king Maravarman Rajasimha II (hence the title “Madurai Konda”). Rajasimha II received help from the Sri Lankan king Kassapa V, was nevertheless defeated by Parantaka I in the battle of Vellur, and fled to Sri Lanka. Rajasimha later found refuge with the Cheras, leaving even his royal emblem behind in Sri Lanka, the home of his mother. The Cholas were defeated by a Rashtrakuta-led confederacy in the battle of Takkolam in 949 CE.
By the mid-950s, the Chola kingdom had shrunk to the size of a small principality (its vassals in the extreme south had proclaimed their independence). It is possible that the Pandya ruler Vira Pandya defeated the Chola king Gandaraditya and claimed independence. The Chola ruler Sundara Parantaka II (r. 957–73) reacted by crushing Vira Pandya in two battles (the Chola prince Aditya II killed Vira Pandya on the second occasion). The Pandyas were supported by the Sri Lankan forces of king Mahinda IV. The Chola emperor Rajaraja I (r. 985–1014 CE) is known to have attacked the Pandyas. Facing an alliance of the Pandya, Chera, and Sri Lankan kings, he defeated the Cheras and “deprived” the Pandyas of their ancient capital Madurai. Emperor Rajendra I proceeded to occupy the Pandya kingdom and even appointed a series of Chola representatives with the title “Chola Pandya” to rule from Madurai. The second half of the 12th century saw a major internal crisis among the Pandyas (between the princes Parakrama Pandya and Kulasekhara Pandya). The neighboring kingdoms of Sri Lanka, under Parakramabahu I, of Venadu Chera/Kerala, under the Kulasekharas, and of the Cholas, under Rajadhiraja II and Kulottunga III, joined in and took sides with one of the two princes or their kin.

Pandya Empire (13th–14th Centuries)

The Pandya empire included extensive territories, at times including large portions of South India and Sri Lanka. The Pandya king at Madurai controlled these vast regions through collateral family branches subject to Madurai. The 13th century saw the rise of seven prime Pandya “Lord Emperors” (Ellarkku Nayanar – Lord of All), who ruled the kingdom alongside other Pandya royals. Their power reached its zenith under Jatavarman Sundara I in the middle of the 13th century.

Decline of the Pandya Empire

After the death of Maravarman Kulasekhara I (1310), his sons Vira Pandya IV and Sundara Pandya IV fought a war of succession for control of the empire.
It seems that Maravarman Kulasekhara wanted Vira Pandya to succeed him. Unfortunately, the Pandya civil war coincided with the Khalji invasion of South India. Taking advantage of the political situation, the neighboring Hoysala king Ballala III invaded Pandya territory. However, Ballala had to retreat to his capital when the Khalji general Malik Kafur invaded his kingdom at the same time. After subjugating Ballala III, the Khalji forces marched into Pandya territory in March 1311. The Pandya brothers fled their headquarters, and the Khaljis pursued them unsuccessfully. By late April 1311, the Khaljis gave up their plans to pursue the Pandya princes and returned to Delhi with their plunder. By 1312, Pandya control over south Kerala was also lost.

Sources: Environment and Urbanisation in Early Tamilakam; Studies in the History of the Sangam Age; Wikipedia; Cross-Cultural Trade in World History (Cambridge University Press).
We are witnessing today a clash between two opposing views of human worth. The first holds that human beings have an inherent dignity conferred on them by the Creator. The other insists that human beings have no more claim to dignity than other animals, from which they differ only in the number and sequencing of DNA molecules. From tiny bacteria to human beings, all are creations of accidental processes; therefore none of them can claim special status over the others. We cannot dismiss this as a debate taking place in some obscure religion or philosophy class that need not interest the rest of us. Its vast implications affect every one of us wherever we happen to be: in our homes, businesses, and schools, on the streets, or at the airports. This is so because a society’s treatment of other humans depends upon its perception of the status and value of humanity itself. If there is no inherent human dignity, then there can be no inherent human rights. Human rights are then reduced to the level of a policy, to be decided by the calculations of governments. If, on the other hand, we accept the first view, then human rights become both serious and inalienable; they cannot be taken away in the name of this or that expediency. The first view is expounded by the Qur’an, which declares in no uncertain terms: “Now, indeed, We have conferred dignity on the children of Adam” (17:70). This is brought out through the Story of Creation. For God created man “with My two Hands” (38:75). Further, He breathed into Adam from His Spirit (15:29). This was so because Man was created as God’s vicegerent on earth (2:30). Islam is not alone in asserting this dignity. All previous prophets had the same message. Thus both Judaism and Christianity affirmed it because man was created in the image of God (Genesis 1:27). This view was challenged by modern science.
Resting on the twin pillars of Darwinism and Freudianism, its great “achievement” was to announce that the dignity and nobility of the human soul were a myth. Darwin claimed that man was not specially created. Freud added that he had no free will that would distinguish him from animals. Rather, man was subject to instinctive drives, unconscious impulses, and emotions over which he had no control. It was not that science had discovered the first view to be baseless, since it had no capacity to affirm or reject claims about matters it could not observe. Rather, some of its proponents had developed a fanatical hatred against all religion because of their bad experience with some of it. As it evolved under their patronage, modern science became a new faith that claimed to have made faith in God, and the moral values based on it, obsolete. Of course, it could measure the speed of light, split the atom, and analyze the structure of DNA to “prove” its claims. Those who have been mesmerized by the achievements of science have been torn between these opposing claims about human dignity. They claim that human beings have inalienable rights, then proceed to forfeit those rights on one pretext or another. They champion religious freedom, then proceed to curb it. They affirm commitment to human dignity, then proceed to defile it. The new measures requiring universal nude body scans of all air travelers are just the latest manifestation of this conflict. The Universal Declaration of Human Rights, article 12, states: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence…. Everyone has the right to the protection of the law against such interference." Similarly, the Fourth Amendment of the US Constitution guarantees "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures."
Yet we are told that we must bare ourselves for examination by officials if we want the privilege to travel; that the privacy protection does not even extend to one’s private parts. The distance between the proclamations and the policies is the distance between the two views. The noble declarations are rooted in the first view, but the policies are rooted in the second. That they may examine us just as they examine animals on a farm is to be expected if we are no better than animals. Back to the Story of Creation, which gives us special insight into this particular aspect. It tells us that the prestigious status given to mankind had a jealous enemy right from the start. It was the devil himself who came up with a plan to show that Man did not deserve the honor bestowed on him. And so Satan’s very first attack was on the most important reflection of this dignity. It was launched with subterfuge, and its purpose was to produce nudity. When, under Satanic persuasion, Adam and Eve tasted of the forbidden tree, "their shameful parts were manifested to them, and they began to piece together onto themselves some of the leaves of the Garden" (Qur’an 7:22). This narrative reminds us that uncorrupted human nature abhors nudity. That is why Adam and Eve frantically started to search for something to cover themselves at its first occurrence. This tendency distinguishes human beings from animals, for which nudity is natural. Hence the reminder from God: "Children of Adam! Let not Satan tempt you as he brought your parents out of the Garden, stripping them of their garments to show them their shameful parts" (7:27). The immediately preceding ayah also tells us that clothing is a gift from God and that concealing the parts of the body that must be concealed is its primary purpose, while protection from the elements and adornment are secondary objectives. In fact, that function is integral to a central value in Islam: haya.
Although normally translated as modesty, for lack of a better word, haya encompasses much more than that. It is modesty, decency, moral propriety, and inhibition against all evil, with special emphasis on concealing parts of the body. Haya is the antithesis to nudity. As for its importance, Prophet Muhammad (Sall-Allahu ‘alayhi wa sallam) said: "Every religion has a distinct call. For Islam it is haya." [Ibn Majah]. Another famous hadith says: "Haya is a branch of Iman (faith)" [Bukhari, Muslim]. It is the basic building block of Islamic morality. When it is lost, everything is lost. The concept exists in other religions as well. In Judaism the closest term is tzniut, which represents both a moral value and specific laws that govern the dress code and interaction between the sexes. Rabbi Aron Moss of Australia explains: "The body is the holy creation of God. It is the sacred house of the soul. The way we maintain our respect for the body is by keeping it covered." Tzniut requires covering of the body, segregation of men and women during prayers (mechitzah), prohibition of shaking hands with a member of the opposite sex, and prohibition of being alone in a secluded place with them. For the most part these are subsets of the commands given by Islam. In Christianity the term used is modesty. One finds repeated references to Christian modesty in encyclicals and directives. One such directive instructs: "In general, clothes should hide the shape of the body rather than accentuate it. Only this kind of clothing can truly be called ‘decent’." Pope Pius XII said in the 1950s: "Vice necessarily follows upon public nudity." Of course the pop culture, augmented by the tremendous firepower of Hollywood and other mass media, and intellectually supported by the new science in its (im)moral underpinnings, has been a constant challenger to haya and modesty. It is a familiar story.
As the floodgates of immodesty were opened, the Jewish and Christian teachings were washed away from the lives of their followers, to the lament of their religious leaders. More than three decades ago Rabbi Zalman Posner noted: "The prevalent culture has little patience with one of these values, and the Hebrew word [tzniut] is virtually unknown to the American Jew." And the French Catholic leader Dom Bernard Marechaux lamented in alarming tones: "The cancer of Liberalism attacks everyone and we must be careful not to be infected ourselves. … Women who go to church dress just the way women who do not go to church dress; … It is a confusion of license and worldliness. As a result … the Church is beginning to disappear in the world. Christianity is being lost." Pope Benedict XV said, "One cannot sufficiently deplore the blindness of so many women of every age and station …[who] do not see to what degree the indecency of their clothing shocks every honest man and offends God." They condemned the summer attire, the swimming suits, and every form of nudity in a losing battle. Church leaders instructed women to keep their skirts at least eight inches below the knee while fashion designers persuaded them to go eight inches above. And everyone knows which direction they went. The scene began to change with the arrival of Muslims. Muslims could recognize the nudity in Western societies as the same abomination that had prevailed in the pre-Islamic Jahiliyya society of Arabia. They remembered that haya is part of faith and the mother of all virtues. Against all odds and pressures they upheld the banner of haya. They became the shining example of modesty in a society that had forgotten it. In this background comes the most vicious attack ever on human dignity, in the form of the new nude body scanners being installed at airports. They can take pictures of the nude body from head to toe and from all around. They are being forced on everyone: men, women, and children.
If they go unopposed, it will be a major triumph for the idea that human beings are mere animals, as Darwin and Freud would have us believe. But haya is the call of uncorrupted human nature, a universal value that should bring together all people of conscience who value morality and decency. While some governments have rushed to introduce these machines, others have raised strong objections. Representing them, the new European Justice Commissioner, Viviane Reding, said: "we will not let anyone dictate to us rules that go against fundamental rights on anti-terrorism grounds . . . our need for security cannot justify any violation of privacy. We should never be driven by fear, but by values" (11 Jan. 2010, testimony before the European Union Civil Liberties, Justice and Home Affairs, Legal Affairs and Women's committees). Which values? That will be determined by the ongoing clash between the two views of human dignity. And the picture here is less than clear. The Rabbinical Center of Europe warned that the scanners would violate the rights of religious Jewish women, whose modesty would be compromised. Children's rights groups warned that they violated child pornography laws in Britain. But Muslims seem to have opted for their own disenfranchisement by choosing to remain silent. If they continue to do so, they will have no one but themselves to blame for the terrible consequences, for them and for the whole world.

20 SAFAR 1431, 5 FEBRUARY 2010
Reference URL: http://www.albalagh.net/food_for_thought/0095.shtml
What is the history of the ceiling fan?

The history of ceiling fans spans a long period, starting from the invention of the electric motor in 1876. In 1752, Englishman William Carrier designed a simple fan for removing hot air from a container, but it wasn’t until 1880 that ceiling fans became popular in homes and public buildings alike. In 1882, Chicago-based manufacturer Hunter Fan Co. began making ceiling fans after acquiring patents for advancements to airflow performance by engineer J.B. Heywood. The company then created its own fan design, known as the “Hunter Original,” patterned after the second generation of the “Holmes Patent.” This model contained a number of features also patented by Holmes, including a cage to protect the blades and an automatic brake mechanism that stopped the fan when it was not in use. As electricity became more prevalent in the early 20th century, ceiling fans evolved into models powered by electricity. Several companies began manufacturing ceiling fans with electric motors, including Westinghouse Electric Corp., Emerson Electric Co., and FASCO (Fans Company LTD). Numerous other manufacturers also developed their own types of electric motor ceiling fans, known as “hugger” type fans because they mount close enough to hug the ceiling. They typically hung from a steel or aluminum rod that could be pushed through the blades like a screw. The earliest version of this type of fan was developed in 1917 by the Dayton Motor Company and called the “Leader.” It featured two blades made out of thin sheet metal that formed an X-shape. The blades were attached to the motor using a set of gears, which caused them to wobble at 23 rotations per minute (RPM). In 1926, U.S. inventor Sam Hunter introduced the first ceiling fan with individually adjustable blade speeds, called the “Hunter Multi-Size.” This allowed the blades to be rotated at varying speeds by the use of a three-speed pull chain system on the bottom of the fan motor housing.
Other companies began making their own versions of this type of ceiling fan during this period as well. In 1931, FASCO coined its now-trademarked term for this style of fan, “oscillating spinner.” According to company history, FASCO received an order for 100 fans from Mexico and was unable to supply “hugger” type fans because of their weight and tipping hazard. The fans it sent were returned because they would not fit in the luggage compartments of FASCO’s salesmen. The company then created a design that featured two or more blades connected to a central rod capped by a ball finial. This allowed the fan blades to pivot at their connecting point, which acted like an axis, allowing them to move back and forth when activated by the pull chain system. In 1955, U.S. inventor James Rauch applied for a patent for what he called a “rotational fan assembly,” which was granted in 1957 as United States Patent No. 2,918,480; however, the patent was assigned to Westinghouse and never used. Rauch’s ceiling fan design included a metal housing surrounding the blade shaft at its pivot point, which housed a series of gears connected directly to the motor and blades for use with an adjacent small electric switch located above the motor. The electrically controlled switch allowed users to increase or decrease the RPMs from the pull chain as needed. In 1962, U.S. inventor Roy Harrigan patented his own version of a “rotational fan assembly,” which was sold under the company name “Harrigan Electric Co.” Harrigan marketed his version as the “Oscill-Fan.” This style of fan used two metal rings attached around each blade near its linkages that rotated when activated by the pull chain. This motion caused the blades to wobble, which in turn created airflow at varying speeds depending on how far they pivoted back and forth.
Ceiling fans are a critical part of most people’s lives due to their capacity to move air around rooms or buildings, creating a more comfortable environment for those inside. While ceiling fans were first invented over 100 years ago and have evolved considerably since, small electric fans that hung from ceilings date back as early as 1882. These early versions were often powered by batteries and had paper blades fitted onto a metal pole running through their center like a screw.

How do ceiling fans work?

Ceiling fans are one of the most versatile types of modern home fixtures. They have been around for a long time, but there are many people who don’t fully understand them. They work by using an electric motor to rotate the blades at high speed so that you get a cooling airflow without needing air conditioning units installed in your home. This makes them ideal for keeping cool during hot weather when you want to save money on your bills. Unfortunately, they are often overlooked by buyers considering home improvement. This is partly because people forget about them, but also because if you have never used one before it can be hard to know whether they will suit your needs at home. Even for those who are familiar with how they work, buying a ceiling fan is often shrouded in mystery.

Are ceiling fans good value for money?

The best ceiling fans are very good value for money, though the exact level of savings depends on your home and how it is used. A study by the Lawrence Berkeley National Laboratory showed that, even in climates that stay warm all year round, only 10 per cent of the energy used for cooling came from ceiling fans. This suggests that they are a more energy efficient option than an air conditioning unit, which can quickly result in significant savings over time.
For example, if you had three cooling units with an annual electricity cost of $1,500, this could be reduced to just $780 using the same units but with ceiling fans assisting. On top of this, there are no installation costs, which is also a significant saving. The cheapest ceiling fans are around $30-$40, but this doesn’t necessarily make them bad value for money, especially if you use yours all summer long. The best ceiling fans can cost hundreds of dollars, but generally speaking they are good value for money if you plan to use them throughout the year and not just in the summer months. Additionally, models with energy efficient motors will contribute less towards your energy bills, as will those with LED lighting systems, which many offer nowadays. There’s no doubt that electricity prices continue to rise, so any measures you can take to reduce what you pay will be beneficial. Ceiling fans are one way of doing this, as even using them during winter can help lower your heating costs. It’s important to note that you need a free-hanging ceiling fan for maximum benefit, especially if you live in a warm or tropical country. If your ceiling is too high for this type of model then a table-top one should work well, as it has the same effect. Using a table-top fan also means that it can be moved around easily, so your comfort isn’t compromised by being in certain rooms. Here’s some advice on choosing the best ones:

– Check the blade size, as this will determine its effectiveness at producing airflow and whether it can cool down an area effectively. Ceiling fans with smaller blades have been designed to cool down a smaller area, so they may not be suitable for your needs. Bigger blades are better if you want them to circulate a lot of air around a larger room such as a living room or conservatory.

– The best ceiling fans come with remote controls, and the more features on the controller, such as LCD displays and dimmer switches, the more expensive it will be. Make sure that whatever model you decide upon has all the features that you need, as this will give you maximum benefit and ensure that you can use it as much as possible in any given room.

– Ensure that whichever model is chosen makes little noise when in use, so your comfort isn’t compromised by having something noisy nearby. If ceiling fan reviews state otherwise, then it may not be efficient or good value for money.

– Brand reputation and the warranty offered should also play a big part in your buying decision. You want to ensure that whichever model you choose will perform well, and it should come with a long warranty, as this is reassurance that the manufacturer backs their product. This can be an expensive investment, so you need to make sure it’s worth every penny and does what you desire, whether cooling or heating a room depending on where it’s located.

Will a ceiling fan save me money?

The first thing to be aware of with any ceiling fan is how much energy it uses and its efficiency rating (for fans this is measured as airflow per watt, in CFM per watt; a lumens-per-watt figure applies to the light kit, not the fan). Some good quality contemporary ceiling fans will actually save you money, because they can reduce your heating/cooling bills by circulating air more effectively throughout your living spaces compared with leaving a window or door open or using a portable air conditioner/cooler. For example, if you can reduce the temperature in your home or office by 3 degrees Fahrenheit (1.7 degrees Celsius) while you are away at work for 8 hours, this will save you about $20 per month on energy bills, based on average costs in the U.S., Canada, Australia and Europe. If you happen to live somewhere hotter than average, like Texas, New Mexico or Arizona in the U.S., then savings will be even greater, because cooling accounts for 50% of your total energy bill.
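The savings figures quoted in this section can be checked with a quick back-of-the-envelope calculation. This is only a sketch: the dollar amounts come straight from the article's own examples, and the percentage is what those numbers imply rather than an independent measurement.

```python
# Quick check of the savings figures quoted above.
# The inputs are the article's example numbers, not measured data.

ac_only_annual = 1500.0    # three cooling units running on AC alone ($/year)
with_fans_annual = 780.0   # the same units assisted by ceiling fans ($/year)

annual_saving = ac_only_annual - with_fans_annual
reduction_pct = 100.0 * annual_saving / ac_only_annual

print(f"Annual saving: ${annual_saving:.0f} ({reduction_pct:.0f}% reduction)")
# Annual saving: $720 (48% reduction)

# The separate thermostat-setback estimate, about $20/month from a
# 3 °F reduction for 8 hours a day, works out over a year to:
setback_monthly = 20.0
print(f"Yearly saving from the setback alone: ${setback_monthly * 12:.0f}")
# Yearly saving from the setback alone: $240
```

In other words, the article's two examples describe savings on quite different scales; running the numbers side by side makes it easier to see which measure matters more for a given home.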
This is why fans that have high-quality reversible motors that operate smoothly at higher speeds are more efficient than others with lower-horsepower motors designed to run slower but over longer periods of time. Simply put, a high quality ceiling fan with a powerful motor will circulate more air throughout your room(s) and be easier to clean at the same time. To maximize the energy savings from using your ceiling fans effectively, plan to install them in every room that you want cooled or heated, depending on where they are located. In hot climates this is important, because ceiling fans can make you feel up to seven degrees cooler compared to leaving windows open: the moving air speeds the evaporation of perspiration from your skin, and the evaporating sweat cools you as it passes into the drier surrounding air. This is referred to as the “wind chill” effect, and it works better when there is higher-speed airflow from a ceiling fan in combination with lower humidity. You can find many different styles of ceiling fans to match your décor and lifestyle at home improvement stores, lighting showrooms and department stores, or order online from retailers like Amazon. You don’t need to spend a lot of money on a high-end designer fan that has every bell and whistle (although they are available if you want them) in order to be energy efficient. However, when shopping around for the best value in any type of ceiling fan for your needs, always look at the efficiency rating (airflow in cfm per watt) along with other important features such as blade span size and rotation speed in RPM (revolutions per minute).

What You Need To Know Before Buying Ceiling Fans

A ceiling fan can be a great addition to your home. Not only do fans provide ventilation, they offer a great way to cool off in the summer and keep warm in the winter. Unfortunately, many people don’t think about these fans until it’s too late.
If you want to ensure that you get fans for every room of your house, here are some things that you need to know before buying ceiling fans. When shopping for a ceiling fan, always remember this phrase: bigger is better. A common mistake made by homeowners who want to save money on their electric bill is forgetting that a fan with fewer or smaller blades has to work harder to move the same amount of air. This means that if you get a fan that is too small, not only will it fail to cool your room effectively, it will also cost more to run. This is why it’s best to go with a larger fan that circulates the air better and costs less. When deciding which size you need, don’t forget about the height of your ceilings! The first thing you need to keep in mind before purchasing a ceiling fan is where you want to put it. There are a few things to consider when trying to select a location for your new fan:

– What type of lighting do you have? Would this area be best suited by overhead lighting, or would an additional light source provide better illumination?

– Can this area accommodate another fixture overhead without compromising on the space needed for the blades of the fan?

– What shape rooms do you have? Rectangular rooms (i.e. kitchens, hallways) require a fan with opposing blades to ensure proper airflow and balance, whereas square or circular rooms (i.e. living rooms) can use a fan with flush-mounted blades.

– What type of climate and seasons do you experience? If you live in an area that has long periods of extreme heat or cold, then it is important to look into getting a fan capable of oscillation, as well as one that can be easily reversed during colder months to warm your house up.
For example, if the fan were hung in a square or rectangular room and installed with only one direction of airflow, the airflow would need to travel across each wall in order to properly circulate air throughout the space. But what happens when you install a ceiling fan in a square or rectangular home where there is only one electrical box? Unfortunately, you would not be able to use that outlet, because most fans require two separate circuits: one for up-and-down rotation and another for forward and reverse.

What are the different types of ceiling fan?

Standard Ceiling Fans (Reversible)

As the name suggests, you can remove the blades on these fans and easily reverse their direction for summer or winter use, allowing them to cool or heat your home. These fans also come with light kits installed inside them, which provide illumination over whatever area they are being used in. Downside: they tend to be more expensive than other styles, as they have more parts and require a wider body for balance purposes. Also, due to their size, most standard-sized ceiling fans have a smaller number of blades, making them less efficient.

Standing Seam Metal Ceiling Fan

This modern ceiling fan is often made from stamped steel and can range in size from four feet to six feet in diameter. The blades are typically made out of aluminum, which works well with the metal body of the fan itself. These fans also come standard with light kits installed inside them for illumination purposes, just like the previous style mentioned above. Downside: very expensive compared to other styles, as they require more parts to properly balance them due to their larger size. They also tend to be very loud when running at high speeds, making them unsuitable for bedrooms or living rooms where quietness is important.

Low Profile Ceiling Fan

These ceiling fans are developed to meet the needs of those looking for blades that stick close to the ceiling without hanging down too low.
This allows them to move air in a downward motion, cooling people off rather than simply moving air around. Low profile ceiling fans can come standard with light kits installed, similar to the two styles mentioned above.
Downside: These fans usually have fewer blades, which means they will not move air as efficiently at high speed as either of the previously mentioned styles. Also, due to their small size and delicate parts, they most often must be installed by professionals, which makes them a more expensive option.

Hanging Ceiling Fan
These ceiling fans are best used in smaller areas with low ceilings, leaving plenty of space above them to move air around comfortably. They are most often installed in bedrooms, where they are not the only source of light for the room, or in hallways, where wall lamps can provide adequate illumination. As far as appearance goes, hanging ceiling fans tend to have a simpler design than most other styles, making them suitable for people looking for a stylish but inconspicuous fan.
Downside: These fans do not typically come with light kits installed, so if you want one with a light, you will need to purchase it separately, which increases the overall price of this type of fan over the others listed here. Also, due to their small size and lack of a large body, hanging ceiling fans can be dangerous for children who may get caught in the spinning blades.

Should I Install Ceiling Fans Throughout My Property?
Ceiling fans are a wonderful addition to any home, but only if they’re installed in the correct places. Think about these questions when choosing a place to bring a ceiling fan into your home:

- Do I need more airflow in the room?
- Do I have enough space for a ceiling fan?
- Is this room used often or continuously occupied from time to time? If so, does it get warm during these periods of use?
Will my new ceiling fan add value to my property?

Answering the above questions will help determine where you should install your new ceiling fans and why. Many homes lack proper ventilation due to openings that were built into walls constructed at a time when fireplaces were the main source of heat. Ceiling fans are an ideal way to gain fresh airflow and circulate warmer air, especially during colder months. Cooler moving air provides welcome relief in overheated rooms, creating an environment that is more conducive to comfort and better health. Ceiling fans are not just used to push warmer air down from the ceiling; they can also be configured to pull warmer air up and out of the room, helping to keep heat levels down during periods of occupancy.

Do I have enough Room for a Ceiling Fan?
This is probably going to be the most difficult question for anyone who’s not an electrician or building contractor. The truth is, you know whether there’s enough space on your ceiling. You don’t need someone else telling you what you already know deep inside. If your answer is yes, then install one on every ceiling where you plan to use it – but make sure it does not obstruct any walkways such as doorways and open stairwells (it would be wise to consult with your local fire department before installing a ceiling fan in a stairwell). If your answer is no, then consider installing a wall-mounted fan instead.

Will my new Ceiling Fan add Value to my Property?
Installing a ceiling fan in every room that gets continuous use will help you save money on both cooling and heating costs, while the soothing airflow reduces stress levels at the same time. If installed correctly, your fans should also last several times longer than regular light fixtures, which can add value when reselling your property.
Installing ceiling fans doesn’t need to be difficult or confusing – if you already have access to wiring from an existing light fixture, it’s just a matter of swapping one for the other. When shopping for ceiling fans, the first things on your mind should be finding out what types of lighting fixtures are available, as well as which type of blade will best suit your needs given the size and shape of your room. Once you know these things, it will be much easier to find the ceiling fans that best suit your home!
The Associated Students of the University of Puget Sound (ASUPS), with support from the Diversity Advisory Council (DAC), the Center for Intercultural and Civic Engagement (CICE), and the Offices of Diversity and Inclusion and Institutional Research, seeks the campus community’s support in raising awareness about bias and its impact on campus members at the University of Puget Sound through this anti-bias campaign effort. ASUPS's goal is to share and make visible some of the experiences students have had on campus and bring awareness to this issue on our campus and throughout the nation. Students express concern that these experiences are often ignored, overlooked, or minimized. The ASUPS-driven and student-led anti-bias campaign consists of a video in which students courageously share their experiences with campus bias and remind the campus community that bias and hate are not tolerated on our campus. In addition to the video, ASUPS has put up posters around campus highlighting statistics about student experiences and campus community perceptions of discrimination and harassment on our campus. This campaign is intended to highlight the different ways public biases influence feelings of belonging and alienate students from diverse backgrounds. The video can also be viewed on the ASUPS and CICE Facebook pages.

Why understanding bias and hate matter on our Puget Sound campus

According to data from the U.S. Department of Education, the number of reported campus hate crimes increased by 25 percent from 2015 to 2016. A 2016 Chronicle of Higher Education article stated that colleges and universities reported a total of 1,250 hate crimes, defined as offenses motivated by biases of race, national origin, ethnicity, religion, sexual orientation, gender, or disability. The University of Puget Sound is not exempt from issues of bias and discrimination.
This year, the University of Puget Sound has received and responded to 27 reported incidents of bias, 16 of which were deemed bias incidents, including anti-Semitic and racist graffiti and gendered, homophobic, and racist public bias around campus. Public bias messaging, regardless of the format (writing, doodling, markings, and/or drawings), generally refers to the open communication of bias involving derogatory and insensitive language and/or images that may cause physical or psychological harm, whether or not vandalism has occurred. Examples of public bias include defacing or marking posters, or expression in posters, e-mail, cyber-communication, messaging on classroom desks, bathroom stalls, whiteboards, and any other messaging format that does not result in property damage but does create a hostile environment that causes psychological harm to members of the Puget Sound community.
Based on the Campus Climate Survey from 2015:
- 16% of students reported that they felt excluded, silenced, ignored, discriminated against, or harassed, even subtly, as a result of their gender, by other students;
- 11% of students reported that they felt excluded, silenced, ignored, discriminated against, or harassed, even subtly, as a result of their race/ethnicity, by other students; and
- 62% of students agreed that certain groups feel excluded from the campus learning community at Puget Sound.

Engaging with and thinking about challenging topics such as racism, sexism, homophobia, religious bigotry, and other forms of bias and discrimination may be challenging and even daunting, but that doesn't make growing our understanding and awareness of them any less important in our community. The share of undergraduates who identify as minoritized rose from 24.1% five years ago to 24.8% in the Fall of 2017. The share of students who are underrepresented minoritized rose from 11.3% five years ago to 17.7%. Most of that increase is due to first-time-in-college students, of whom 25.1% identified as minoritized five years ago vs. 31.0% in Fall 2017. Their first-to-second-year retention rate lags behind the class as a whole by 1 percentage point on average, and their six-year graduation rate lags by 4 percentage points. Each year Puget Sound matriculates over 600 new students, and they bring with them different perspectives, ideas, and values. This is part of what makes higher education vibrant. Similarly, as an institution of higher learning, we understand the importance of providing the campus community opportunities for learning about bias and/or unlearning bias, and we embrace our role in proactively carving out spaces and creating opportunities for education and awareness to occur. While we are a campus community that values open dialogue and the intellectual exchange of ideas, it is important to clarify that this freedom in no way serves as permission for discriminatory and hurtful bias to occur.
You are welcome to view the ASUPS Anti-Bias Campaign video above or on the ASUPS or CICE Facebook page to hear some of the students' experiences on our campus. The campaign is a proactive effort designed to help campus members have critical conversations about bias. These topics are often difficult, and at times it may be uncomfortable to engage with them. Our society's structures have taught many of us ways of thinking that are oppressive and no less a part of who we are as a community. It may be jarring to upend that thinking. This is normal. We encourage campus members to push through, reach out to friends and colleagues who can help with the process, and keep learning. Below you will find educational resources and support services available for campus members.

Educational Resources at Puget Sound

The resources below were curated by students, faculty, and staff at Puget Sound and serve as a starting place for you to engage in these difficult conversations; no judgment, just learning. We encourage you to browse the following content and to talk about it with your own communities, on or off campus.
- ASUPS Expanding Consciousness Page
- Anti-Racist Education Hour (Contact: Lorraine Kelly)
- Center for Intercultural and Civic Engagement
- Office of Diversity and Inclusion
- Race and Pedagogy Institute
- Courageous Conversations
- Race and Pedagogy Journal
- Queer Alliance: a supportive community of queer and ally students, affirming all sexual orientations and identities. We work through social, political, and educational activism for equality for all sexual minorities.
- Black Student Union: provides a more cohesive and culturally understanding environment for students of color. BSU also brings culturally diverse programs and lectures to Puget Sound. Mondays at 8 p.m. in the Student Diversity Center. Contact: Brie Williams
- Latinx Unidx: Wednesdays at 6 p.m.
in the Student Diversity Center. Contact: Soli Loya-Lara
- Asian Pacific Islander Collective: the club seeks to engage the community on race, culture, immigration, and intersectionality as it applies to South, South East, East Asian, and Pacific Islander students.
- Jewish Student Union: meeting times TBA
- Asian Student Community: Contact: Paul Huffman
- Ka Ohana me ke Aloha: Contact: Amber Odo
- Visible Spectrum: the mission of Visible Spectrum is to provide an inclusive environment for students of color who are in the STEM field. Contact: Simone Moore
- Students of Color Study Hour: Wednesdays at 8 p.m. in the CWLT (Howarth 109)
- It’s on Us
- Students Against Sexual Assault (SASA): a coalition of students who wish to change the culture of sexual assault on campus through educating their peers, connecting survivors with resources, and increasing awareness. Contact: Carly Dryden
- Rebuilding Hope!: Sexual Assault Prevention Center of Pierce County
- Green Dot
- National Alliance on Mental Illness (NAMI): NAMI advocates for access to services, treatments, supports, and research and is steadfast in its commitment to raising awareness and building a community of hope for all those in need. Contact: Nathaniel Baniqued or Nina Kranzdorf
- Counseling, Health and Wellness Services
- Bias-Hate Education Response Team (BHERT): BHERT aims to foster greater awareness of bias and hate on campus and how incidents of bias and hate may be shaping our community. BHERT cultivates a space for proactive dialogue related to emerging trends of bias or hate incidents on campus. BHERT represents a cadre of faculty and staff who take an active role in addressing trends of hate or bias incidents, create opportunities to confront these issues, and encourage dialogue for change. BHERT Reporting Form
- Title IX: Title IX is a law that forbids sex discrimination in education programs that receive federal money.
"No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving federal financial assistance." (Title IX of the Education Amendments of 1972) File a Title IX Report
- Jewish Advisory Committee: Contact Dave Wright, University Chaplain

Community statement on non-verbal public bias

The University of Puget Sound works consistently to cultivate a safe and inclusive learning environment where all campus members can contribute and flourish. Public bias, whether verbal or non-verbal, including messaging on desks, walls, stalls, doors, whiteboards, email, and social media, can evoke feelings of marginality and compromise a welcoming educational atmosphere. Please support the culture of inclusive learning we aim to achieve at Puget Sound.
This article was originally written for and published at N-O-D-E on February 14th, 2016. It has been posted here for safe keeping.

HYPERBORIA 101 — MOVING THROUGH THE MESH

Hyperboria is a network built as an alternative to the traditional Internet. In simple terms, Hyperboria can be thought of as a darknet, meaning it is running on top of or hidden from the existing Internet (the clearnet). If you have ever used TOR or I2P, it is a similar concept. Unlike the Internet, with thousands of servers you may interact with on a day-to-day basis, access to Hyperboria is restricted in the sense that you need specific software, as well as someone already on the network, to access it. After configuring the client, you connect into the network, providing you with access to each node therein. Hyperboria isn’t just any alternative network, it’s decentralized. There is no central point of authority, no financial barrier of entry, and no government regulations. Instead, there is a meshnet: peer-to-peer connection with user-controlled nodes and connected links. Commonly, mesh networks are seen in wireless communication. Access points are configured to link directly with other access points, creating multiple connections to support the longevity of the network infrastructure and the traffic traveling over it. More connections between nodes are better than fewer here. With this topology, all nodes are treated equally. This allows networks to be set up inexpensively, without the infrastructure needed to run a typical ISP, which usually has user traffic traveling up several gateways or routers owned by other companies. But what is the goal of the Hyperboria network? With roots in Reddit’s /r/darknetplan, we see that the existing Internet has issues with censorship, government control, anonymity, security, and accessibility.
/r/darknetplan has a lofty goal of creating a decentralized alternative to the Internet as we know it through a scalable stack of commodity hardware and open source software. This shifts the infrastructure away from physical devices owned by internet service providers, and instead puts hardware in the hands of the individual. This in itself is a large undertaking, especially considering the physical distance between those interested in joining the network, and the complexities of linking them together. While the ultimate idea is a worldwide wireless mesh connecting everyone, it won’t happen overnight. In the meantime, physical infrastructure can be put in place by linking peers together over the existing Internet through an overlay network. In time, with more participation, wireless coverage between peers will improve to the point where more traffic can flow over direct peer-to-peer wireless connections. The Hyperboria network relies upon a piece of software called cjdns to connect nodes and route traffic. Cjdns’ project page boasts that it implements “an encrypted IPv6 network using public-key cryptography for address allocation and a distributed hash table for routing.” Essentially, the application will create a tunnel interface on a host computer that acts like any other network interface (such as an ethernet or wifi adapter). This is powerful in the way that it allows any existing services you might want to face a network (HTTP server, BitTorrent tracker, etc.) to run, as long as that service is already compatible with IPv6. Additionally, cjdns is what is known as a layer 3 protocol, and is agnostic towards how the host connects to peers. It doesn’t matter much if the peer we need to connect to is over the internet or a physical access point across the street. All traffic over Hyperboria is encrypted end-to-end, stopping eavesdroppers operating rogue nodes.
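That “public-key cryptography for address allocation” works roughly as follows: a node’s IPv6 address is the first 16 bytes of a double SHA-512 hash of its public key, and only results inside fc00::/8 are accepted, so key generation simply retries until one qualifies. The Python sketch below illustrates the scheme (random bytes stand in for real Curve25519 keys; this mirrors the derivation as described in the cjdns documentation, not cjdns’s actual implementation):

```python
import hashlib
import ipaddress
import os

def cjdns_address(pubkey: bytes) -> ipaddress.IPv6Address:
    """Derive a node's IPv6 address: the first 16 bytes of a
    double SHA-512 of its public key."""
    digest = hashlib.sha512(hashlib.sha512(pubkey).digest()).digest()
    return ipaddress.IPv6Address(digest[:16])

def generate_keypair_sketch():
    """Keep generating (mock) keys until the derived address falls in
    fc00::/8 -- only those addresses are valid on the network. Roughly
    1 key in 256 qualifies, so the loop is short in practice."""
    while True:
        pubkey = os.urandom(32)  # stand-in for a real Curve25519 public key
        addr = cjdns_address(pubkey)
        if addr in ipaddress.ip_network("fc00::/8"):
            return pubkey, addr

pubkey, addr = generate_keypair_sketch()
print(addr)  # always inside fc00::/8
```

Because the address is a pure function of the public key, no registry has to hand out addresses, and a node proves ownership of its address simply by holding the matching private key.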
Every node on the network receives a unique IPv6 address, which is derived from that node’s public key after the public/private keypair is generated. This eliminates the need for additional encryption configuration and creates an environment with enough IP addresses for substantial network expansion. As the network grows in size, the quality of routing also improves. With more active nodes, the number of potential routes increases to both mitigate failure and optimize the quickest path from sender to receiver. Additionally, there are no authorities such as the Internet Assigned Numbers Authority (IANA) who on the Internet control features like address allocation and top level domains. Censorship can easily be diminished. Suppose someone is operating a node hosting content that neighboring nodes find offensive, so they refuse to provide access. As long as that node operator can find at least one person somewhere on the network to peer with, he can continue making his content accessible to the whole network. One of the main differences between Hyperboria and networks like TOR is how connection to the network is made. Out of the box, running the cjdns client alone will not provide access to anything. To be able to connect to the network, everyone must find someone to peer with; someone already on Hyperboria. This peer provides the new user with clearnet credentials for his node (an ip address, port number, key, and password) and the new user enters them into his configuration file. If all goes to plan, restarting the client will result in successful connection to the peer, providing the user access to the network. However, having just one connection to Hyperboria doesn’t create a strong link. Consider what would happen if this node was experiencing an outage or was taken offline completely. The user and anyone connecting to him as an uplink into the network would lose access. Because of this, users are encouraged to find multiple peers near them to connect to. 
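As a concrete illustration, the credentials a peer hands over are entered under `connectTo` in the node’s `cjdroute.conf`. Every value below is a placeholder, not real data (actual cjdns public keys are base32 strings ending in `.k`), and the surrounding file contains many more settings; this fragment only sketches the peering section:

```json
"interfaces": {
    "UDPInterface": [
        {
            "bind": "0.0.0.0:33000",
            "connectTo": {
                "203.0.113.5:10326": {
                    "password": "password-shared-by-first-peer",
                    "publicKey": "placeholderplaceholderplaceholderplaceholder0123456.k"
                },
                "198.51.100.7:24510": {
                    "password": "password-shared-by-second-peer",
                    "publicKey": "placeholderplaceholderplaceholderplaceholder9876543.k"
                }
            }
        }
    ]
}
```

Listing two entries reflects the advice above: with a second uplink configured, losing one peer does not cut the node off from the network.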
In theory, everyone on the network should be running their node perpetually. If a user only launched cjdns occasionally, other nodes on the network would not be able to take advantage of routing through the user’s node as needed. With the peering system, there is no central repository of node information. Nobody has to know anyone’s true identity, or see who is behind a particular node. All of the connections are made through user-to-user trust when establishing a new link. If for any reason a node operator were to become abusive to other nodes on the network, there is nothing stopping neighboring nodes from invalidating the credentials of the abuser, essentially kicking them off of the network. If any potential new node operator seemed malicious, other operators have the right to turn him away. The most important aspect of growing the Hyperboria network is to build meshlocals in geographically close communities. Consider how people would join Hyperboria without knowing about their local peers. Maybe someone in New York City would connect to someone in Germany, or someone in San Francisco to someone in Philadelphia. This creates suboptimal linking, as the two nodes in each example are geographically distant from each other. The concept of a meshlocal hopes to combat this problem. Users physically close together are encouraged to form working groups and link their nodes together. Additionally, these users work together to seek new node operators with local outreach to grow the network. Further, meshlocals themselves can coordinate with one another to link together, strengthening regional areas. Meshlocals can also offer more in-person communication, making it easier to configure wireless infrastructure between local nodes, or organize actions via a meetup. Many meshlocals have gone on to gain active followings in their regions, for example NYC Mesh and Seattle Meshnet.
INSIDE THE NETWORK

After connecting to Hyperboria, a user may be at a loss as to what he is able to do. All of these nodes are connected and working together, but what services are offered by the Hyperboria community, for the Hyperboria community? Unlike the traditional Internet, most services on Hyperboria are run non-commercially as a hobby. For example, Hyperboria hosts Uppit: a Reddit clone, Social Node: a Twitter-like site, and HypeIRC: an IRC network. Some of these services may additionally be available on the clearnet, making access easy for those without a connection to Hyperboria. Others are Hyperboria-only, made specifically and only for the network. As the network grows, more services are added, while some fade away in favor of new ones or fall into disrepair. This is all community coordinated after all; there is nothing to keep a node operator from revoking access to his node on a whim for any reason. As previously mentioned, the ultimate goal of Hyperboria is to offer a replacement for the traditional Internet, built by the users. As it stands now, Hyperboria has established a core following and will see more widespread adoption as meshlocals continue to grow and support users. Additionally, we see new strides in the development of cjdns with each passing year. As time has gone on, setup and configuration have become simpler for the end-user while compatibility has also improved. The more robust the software becomes, the easier it will be to run and keep running. We also see the maturation of other related technologies. Wireless routers are becoming more inexpensive, with more memory and processing power, suitable for running cjdns directly. We also see the rise of inexpensive, small form factor microcomputers like the Raspberry Pi and Beaglebone Black, allowing anyone to buy a functional, dedicated computer for the price of a small household appliance like an iron or coffee maker. Layer 2 technologies like B.A.T.M.A.N.
Advanced are also growing, making easily-configurable wireless mesh networks simple to set up and work cooperatively with the layer 3 cjdns. Hyperboria is an interesting exercise in mesh networking with an important end goal and exciting construction appealing to network professionals, computer hobbyists, digital activists, and developers alike. It’ll be interesting to see how Hyperboria grows over the next few years, and if it is indeed able to offer a robust Internet-alternative for all. Until then, we ourselves can get our hands dirty setting up hardware, developing software, and helping others do the same. With any luck, we will be able to watch it grow. One node at a time. BY MIKE DANK (@FAMICOMAN)
What is the chemical equation balancer

A chemical equation is a way of representing different chemical reactions symbolically, with the help of symbols identifying the various chemical elements and formulae. Writing a chemical equation is not easy in itself, and the majority of students, with the exception of those who are fond of chemistry, find it very difficult. Balancing the chemical equation requires even more effort. Fortunately, nowadays students can use a very helpful invention, the chemical equation balancer, which helps them deal with chemical reactions and their balancing without suffering and spending hours on the matter. However, before you start using the equation balancer, it is important that you learn and understand the basics of chemistry: the most important and essential properties of chemical elements and compounds, how to create chemical equations, and so on. In this article, you will get familiar with all this information and learn how to use the chemical equation balancer. Keep reading carefully and you will find out that chemistry can be easier than it seems. When it comes to dealing with chemical reactions, searching for the coefficients of a chemical reaction is really difficult, especially for those students for whom chemistry is not the biggest strength. There are different approaches to balancing a chemical reaction, including mathematical methods that provide a general way of finding the coefficients. Some students argue that the mathematical approach is even better, more convenient, and easier than the chemical one, although that depends on personal skills and the level of knowledge in both chemistry and mathematics. As a matter of fact, the relevance of the mathematical approach to chemical reactions built the ground for the idea of creating the chemical equation balancer, as it works with the help of an algebraic technique.
The equation balancer may appear really useful for students when determining the right coefficients. More than that, when using the chemical equation balancer you will save a lot of effort and time, and if you find the coefficients with its help, you can be absolutely sure that your result will be correct and will fully satisfy the chemical assignment. Having said that, the usage of the equation balancer is appropriate only if you already know at least the basics of the process of balancing chemical reactions. There is no guarantee that you will be allowed to use this application during your examinations, and there is a great chance that you will not. Therefore, even though during the semester you can benefit from such a great invention as the chemical equation balancer, you should still be ready to solve any chemical problem, deal with chemical reactions, and balance chemical equations without additional help in the form of the chemical equation balancer. Below in the article, we provide a number of guidelines and tips that will help you understand the order, methodology, and logic of the process of balancing a chemical equation.

Getting started with balancing the chemical equations

When talking about chemical equations, one means a representation of one or another chemical reaction in the form of symbols identifying the various chemical elements that take part in the reaction. The chemicals taking part in the reaction are written on the left-hand side of the equation, and the products resulting from the chemical reaction are written on the right-hand side. When dealing with a chemical reaction and writing the chemical equation, one should know the law of conservation of mass. This law states that atoms can be neither created nor destroyed during a chemical reaction.
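The conservation law is easy to check mechanically. In the small sketch below (the names and data layout are my own illustration, not part of any standard library), each molecule is a dict of element counts and each side of an equation is a list of (coefficient, molecule) pairs:

```python
def atom_counts(side):
    """Total atoms of each element on one side of an equation, given
    (coefficient, molecule) pairs where a molecule is a dict of element
    counts, e.g. H2O as {"H": 2, "O": 1}."""
    totals = {}
    for coeff, molecule in side:
        for element, count in molecule.items():
            totals[element] = totals.get(element, 0) + coeff * count
    return totals

def is_balanced(reactants, products):
    """The law of conservation of mass as a check: both sides must
    contain exactly the same atoms."""
    return atom_counts(reactants) == atom_counts(products)

# 2 H2 + O2 -> 2 H2O
print(is_balanced([(2, {"H": 2}), (1, {"O": 2})],
                  [(2, {"H": 2, "O": 1})]))  # True
```

An unbalanced attempt such as H2 + O2 -> H2O fails the check, since the left side has two oxygen atoms but the right side only one.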
This means that by the end of the reaction the number of atoms should remain the same; otherwise your reaction will not be correct. In other words, the number of atoms that the reactants contain needs to balance the number of atoms that you get in the products of the chemical reaction. As a matter of fact, this is why the chemical equation has to be balanced, and it is what the chemical equation balancer serves for. You have to make sure that the overall number of atoms is the same and that the chemical reaction is completed correctly. Below in the article you will find guidelines describing the process of balancing a chemical equation, and you will learn how to deal with it without the help of the chemical equation balancer.

Dealing with a chemical reaction

If it is hard for you to understand the meaning of a chemical reaction, simplify the definition as much as possible and try to imagine a real-life example of a chemical reaction. For instance, remember the process of baking cookies; it is the easiest way to understand what a chemical reaction means. In order to bake some cookies, you have to mix the needed ingredients together. These ingredients may vary according to the recipe, but as a rule you need some eggs, sugar, butter, and so on. When you get the mixture of all the needed ingredients baked, the cookies are ready. The same principle applies to chemical reactions: there is also a recipe explaining how to carry out one or another reaction. The ingredients of a chemical reaction are called «reactants», and what you get at the end of the reaction is called «products». If you want to understand the principle of a chemical reaction better, you have to study the periodic table, because it is the source of the most essential and important information that one should know when dealing with chemistry. The periodic table presents the elements that everything around us consists of.
The basic building blocks of chemistry are the atoms, and all of them can easily be found in the periodic table, together with their main properties. The periodic table can usually be found in any textbook on chemistry, and you can also find it whenever you want on the internet, often with an even more precise and detailed explanation of how to use it, what every symbol means, how to apply one or another element to a chemical reaction, and so on. In order to get a better understanding of the periodic table and all the information provided in it, imagine how the elements listed in it exist in nature. For example, if you look at a diamond, you are looking at the pure carbon that a diamond consists of. A great number of elements exist in the form of gases, and you cannot see them with your own eyes. They exist as gases at room temperature, and the condition that keeps them stable is that they are bonded to themselves. Another important thing that you should know is how to describe one or another molecule in writing. As a matter of fact, it is a specific listing, provided sequentially, of the atoms in a particular molecule. Each symbol in the sequence is usually followed by a number indicating the quantity of that kind of atom in the molecule. Apart from the importance of theoretical background for the ability to deal with chemical reactions and balance chemical equations, you should be aware of the necessity of practice. At first, it will take you a lot of time to deal with even a simple chemical assignment, but if you devote enough time to practice, you will learn to do everything quickly. In addition, you will have enough experience to avoid mistakes when dealing with chemical problems.
There must be dozens of different tasks in your chemistry textbook, or you can find them on the internet. In any case, the importance of practice is really big, especially for beginners. Therefore, it is better to spend some time and effort during the semester, because it will be too hard and overwhelming to learn everything at once just before the examination. Balancing the chemical equation step-by-step - First of all, you have to write down the existing chemical equation. The chemical reaction takes place when the components of the reaction interact with each other. It may be expressed in burning, etc. For example, when propane meets oxygen it starts to burn, and the result of the reaction is the production of carbon dioxide and water by the interaction of the elements. - The next step of the balancing process is writing down the exact number of atoms of every element in the chemical reaction. Pay equal attention to both sides of the equation and to every element. - Keep in mind that the number of elements that you will have to balance may vary significantly. If you need to balance more than one element, start with an element that appears in only one molecule of the reactants and in only one molecule of the products. Remember to deal with the element on the left-hand side first and then with the right-hand part of the chemical equation. - The next step is going to be coping with the reaction coefficients. Here, you will have to add the needed coefficient to a molecule on the right-hand side of the chemical equation in order to balance the number of atoms on both sides. Remember that when dealing with the chemical equation, you have the possibility to change the coefficients, although you are not allowed to alter the subscripts.
- Keep balancing elements in accordance with the principle provided in the previous step until you get the same number of atoms on both sides of the chemical equation. - Once you have the elements balanced, you will have to write down the equation by means of symbols and formulas. - The next step will be to check the resulting number of atoms of each element on both sides of the chemical reaction: those taking part on the left-hand side, where the reactants are indicated, as well as those on the right-hand side, where the products of the chemical reaction are indicated. These are the essential steps to undertake when managing the chemical equation. In any case, remember that once you have balanced your chemical equation, you can use the chemical equation balancer just to check whether it is right or not. Thus, you will find out whether you have mistakes in your equation or make sure that everything is correct. Having said that, we would like to warn you that you shouldn't use fractions as coefficients in the final chemical equation, as long as it is impossible to get only half of an atom or even half of a molecule in a chemical reaction. However, within the process of balancing the chemical equation, you can turn to the help of fractions. They can assist you within the process, although the chemical equation is not considered balanced while it includes coefficients that are fractions. At the end of the process of balancing the equation, you will have to get rid of the fractions. In order to cope with it, you will have to multiply the whole chemical equation, including both sides of it, by the denominator of the fraction. If you use and follow all the guidelines provided in the article carefully, you will definitely cope with your chemistry assignment, with or without the help of the chemical equation balancer.
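The steps above can also be sketched in code. Below is a minimal Python balancer, offered purely as an illustration (it is not part of the original guidelines, and it assumes simple formulas without parentheses or hydrates). It builds one constraint per element, solves over the rationals, and then clears the fractions by multiplying through by the common denominator, exactly as the last step recommends:

```python
import re
from fractions import Fraction
from functools import reduce
from math import gcd

def parse(formula):
    """Count atoms in a simple formula like 'C3H8' or 'CO2' (no parentheses)."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

def balance(reactants, products):
    """Return whole-number coefficients balancing reactants -> products.

    One linear constraint per element: reactant atoms (positive) plus
    product atoms (negative) must sum to zero."""
    species = [parse(f) for f in reactants + products]
    elements = sorted({e for s in species for e in s})
    n = len(species)
    rows = [[Fraction((1 if i < len(reactants) else -1) * s.get(e, 0))
             for i, s in enumerate(species)] for e in elements]
    # Gauss-Jordan elimination on the first n-1 columns.
    pivot = 0
    for col in range(n - 1):
        pr = next((r for r in range(pivot, len(rows)) if rows[r][col]), None)
        if pr is None:
            continue
        rows[pivot], rows[pr] = rows[pr], rows[pivot]
        for r in range(len(rows)):
            if r != pivot and rows[r][col]:
                f = rows[r][col] / rows[pivot][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot])]
        pivot += 1
    # Fix the last coefficient at 1 and back-substitute the rest.
    coeffs = [Fraction(1)] * n
    for r in range(pivot - 1, -1, -1):
        col = next(c for c in range(n) if rows[r][c])
        coeffs[col] = -sum(rows[r][c] * coeffs[c]
                           for c in range(col + 1, n)) / rows[r][col]
    # Clear fractions: multiply by the LCM of the denominators, then reduce.
    lcm = reduce(lambda a, b: a * b // gcd(a, b),
                 (c.denominator for c in coeffs))
    ints = [int(c * lcm) for c in coeffs]
    g = reduce(gcd, ints)
    return [i // g for i in ints]

# The propane example from the text: C3H8 + 5 O2 -> 3 CO2 + 4 H2O.
print(balance(["C3H8", "O2"], ["CO2", "H2O"]))  # [1, 5, 3, 4]
```

Note how the solver briefly works with fractional coefficients (1/4, 5/4, 3/4, 1 for propane) before scaling them up to whole numbers, mirroring the manual procedure.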
<urn:uuid:92074830-e843-4424-adfd-a92870ee1a04>
CC-MAIN-2021-43
https://essaygazebo.com/2017/09/22/dealing-with-the-chemical-equation-balancer/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585322.63/warc/CC-MAIN-20211020152307-20211020182307-00430.warc.gz
en
0.94608
2,818
3.40625
3
Electrostatic discharge (ESD) is the sudden flow of electricity between two electrically charged objects caused by contact, an electrical short or dielectric breakdown. A buildup of static electricity can be caused by tribocharging or by electrostatic induction. The ESD occurs when differently-charged objects are brought close together or when the dielectric between them breaks down, often creating a visible spark. ESD can create spectacular electric sparks (lightning, with the accompanying sound of thunder, is a large-scale ESD event), but also less dramatic forms which may be neither seen nor heard, yet still be large enough to cause damage to sensitive electronic devices. Electric sparks require a field strength above approximately 40 kV/cm in air, as notably occurs in lightning strikes. Other forms of ESD include corona discharge from sharp electrodes and brush discharge from blunt electrodes. ESD can cause harmful effects of importance in industry, including explosions in gas, fuel vapor and coal dust, as well as failure of solid state electronics components such as integrated circuits. These can suffer permanent damage when subjected to high voltages. Electronics manufacturers therefore establish electrostatic protective areas free of static, using measures to prevent charging, such as avoiding highly charging materials and measures to remove static such as grounding human workers, providing antistatic devices, and controlling humidity. ESD simulators may be used to test electronic devices, for example with a human body model or a charged device model. One of the causes of ESD events is static electricity. Static electricity is often generated through tribocharging, the separation of electric charges that occurs when two materials are brought into contact and then separated. 
Examples of tribocharging include walking on a rug, rubbing a plastic comb against dry hair, rubbing a balloon against a sweater, ascending from a fabric car seat, or removing some types of plastic packaging. In all these cases, the breaking of contact between two materials results in tribocharging, thus creating a difference of electrical potential that can lead to an ESD event. Another cause of ESD damage is through electrostatic induction. This occurs when an electrically charged object is placed near a conductive object isolated from the ground. The presence of the charged object creates an electrostatic field that causes electrical charges on the surface of the other object to redistribute. Even though the net electrostatic charge of the object has not changed, it now has regions of excess positive and negative charges. An ESD event may occur when the object comes into contact with a conductive path. For example, charged regions on the surfaces of styrofoam cups or bags can induce potential on nearby ESD sensitive components via electrostatic induction and an ESD event may occur if the component is touched with a metallic tool. The most spectacular form of ESD is the spark, which occurs when a heavy electric field creates an ionized conductive channel in air. This can cause minor discomfort to people, severe damage to electronic equipment, and fires and explosions if the air contains combustible gases or particles. However, many ESD events occur without a visible or audible spark. A person carrying a relatively small electric charge may not feel a discharge that is sufficient to damage sensitive electronic components. Some devices may be damaged by discharges as small as 30 V. These invisible forms of ESD can cause outright device failures, or less obvious forms of degradation that may affect the long term reliability and performance of electronic devices. The degradation in some devices may not become evident until well into their service life. 
A spark is triggered when the electric field strength exceeds approximately 4–30 kV/cm — the dielectric field strength of air. This may cause a very rapid increase in the number of free electrons and ions in the air, temporarily causing the air to abruptly become an electrical conductor in a process called dielectric breakdown. Perhaps the best known example of a natural spark is lightning. In this case the electric potential between a cloud and ground, or between two clouds, is typically hundreds of millions of volts. The resulting current that cycles through the stroke channel causes an enormous transfer of energy. On a much smaller scale, sparks can form in air during electrostatic discharges from charged objects that are charged to as little as 380 V (Paschen's law). Earth's atmosphere consists of 21% oxygen (O2) and 78% nitrogen (N2). During an electrostatic discharge, such as a lightning flash, the affected atmospheric molecules become electrically overstressed. The diatomic oxygen molecules are split, and then recombine to form ozone (O3), which is unstable, or reacts with metals and organic matter. If the electrical stress is high enough, nitrogen oxides (NOx) can form. Both products are toxic to animals, and nitrogen oxides are essential for nitrogen fixation. Ozone attacks all organic matter by ozonolysis and is used in water purification. Sparks are an ignition source in combustible environments that may lead to catastrophic explosions in concentrated fuel environments. Most explosions can be traced back to a tiny electrostatic discharge, whether it was an unexpected combustible fuel leak invading a known open air sparking device, or an unexpected spark in a known fuel rich environment. The end result is the same if oxygen is present and the three criteria of the fire triangle have been combined. Damage prevention in electronics Many electronic components, especially integrated circuits and microchips, can be damaged by ESD.
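The field-strength threshold above invites a quick back-of-the-envelope check. The sketch below uses the uniform-field approximation E = V/d against the ~30 kV/cm figure for air; it is an illustration only and deliberately ignores Paschen's-curve behavior at very small gaps, where, as noted above, discharges from objects charged to as little as 380 V can still spark:

```python
def field_strength_kv_per_cm(voltage_v, gap_cm):
    """Uniform-field approximation: E = V / d, returned in kV/cm."""
    return (voltage_v / 1000.0) / gap_cm

def may_spark(voltage_v, gap_cm, breakdown_kv_per_cm=30.0):
    """Crude breakdown check against the dielectric strength of air.

    Not valid for sub-millimeter gaps, where Paschen's law dominates."""
    return field_strength_kv_per_cm(voltage_v, gap_cm) >= breakdown_kv_per_cm

# 3 kV across a 1 mm (0.1 cm) gap gives 30 kV/cm, right at the threshold;
# the same 3 kV across a full centimeter gives only 3 kV/cm.
print(may_spark(3000.0, 0.1), may_spark(3000.0, 1.0))  # True False
```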
Sensitive components need to be protected during and after manufacture, during shipping and device assembly, and in the finished device. Grounding is especially important for effective ESD control. It should be clearly defined, and regularly evaluated. Protection during manufacturing In manufacturing, prevention of ESD is based on an Electrostatic Discharge Protected Area (EPA). The EPA can be a small workstation or a large manufacturing area. The main principle of an EPA is that there are no highly-charging materials in the vicinity of ESD sensitive electronics, all conductive and dissipative materials are grounded, workers are grounded, and charge build-up on ESD sensitive electronics is prevented. International standards are used to define a typical EPA and can be found for example from International Electrotechnical Commission (IEC) or American National Standards Institute (ANSI). ESD prevention within an EPA may include using appropriate ESD-safe packing material, the use of conductive filaments on garments worn by assembly workers, conducting wrist straps and foot-straps to prevent high voltages from accumulating on workers' bodies, anti-static mats or conductive flooring materials to conduct harmful electric charges away from the work area, and humidity control. Humid conditions prevent electrostatic charge generation because the thin layer of moisture that accumulates on most surfaces serves to dissipate electric charges. Ionizers are used especially when insulative materials cannot be grounded. Ionization systems help to neutralize charged surface regions on insulative or dielectric materials. Insulating materials prone to triboelectric charging of more than 2,000 V should be kept away at least 12 inches from sensitive devices to prevent accidental charging of devices through field induction. On aircraft, static dischargers are used on the trailing edges of wings and other surfaces.
Manufacturers and users of integrated circuits must take precautions to avoid ESD. ESD prevention can be part of the device itself and include special design techniques for device input and output pins. External protection components can also be used with circuit layout. Due to the dielectric nature of electronic components and assemblies, electrostatic charging cannot be completely prevented during handling of devices. Most ESD-sensitive electronic assemblies and components are also so small that manufacturing and handling is done with automated equipment. ESD prevention activities are therefore important with those processes where components come into direct contact with equipment surfaces. In addition, it is important to prevent ESD when an electrostatic discharge sensitive component is connected with other conductive parts of the product itself. An efficient way to prevent ESD is to use materials that are not too conductive but will slowly conduct static charges away. These materials are called static dissipative and have resistivity values below 10^12 ohm-meters. Materials in automated manufacturing which will touch the conductive areas of ESD-sensitive electronics should be made of dissipative material, and the dissipative material must be grounded. These special materials are able to conduct electricity, but do so very slowly. Any built-up static charges dissipate without the sudden discharge that can harm the internal structure of silicon circuits. Protection during transit Sensitive devices need to be protected during shipping, handling, and storage. The buildup and discharge of static can be minimized by controlling the surface resistance and volume resistivity of packaging materials. Packaging is also designed to minimize frictional or triboelectric charging of packs due to rubbing together during shipping, and it may be necessary to incorporate electrostatic or electromagnetic shielding in the packaging material.
A common example is that semiconductor devices and computer components are usually shipped in an antistatic bag made of a partially conductive plastic, which acts as a Faraday cage to protect the contents against ESD. Simulation and testing for electronic devices For testing the susceptibility of electronic devices to ESD from human contact, an ESD simulator with a special output circuit, called the human body model (HBM), is often used. This consists of a capacitor in series with a resistor. The capacitor is charged to a specified high voltage from an external source, and then suddenly discharged through the resistor into an electrical terminal of the device under test. One of the most widely used models is defined in the JEDEC 22-A114-B standard, which specifies a 100 picofarad capacitor and a 1,500 ohm resistor. Other similar standards are MIL-STD-883 Method 3015, and the ESD Association's ESD STM5.1. For compliance with European Union standards for information technology equipment, the IEC/EN 61000-4-2 test specification is used. Another specification (Schaffner) uses C = 150 pF and R = 330 Ω, which gives high-fidelity results. Mostly the theory is there; only a minority of companies measure the real ESD survival rate. Guidelines and requirements are given for test cell geometries, generator specifications, test levels, discharge rate and waveform, types and points of discharge on the "victim" product, and functional criteria for gauging product survivability. A charged device model (CDM) test is used to define the ESD a device can withstand when the device itself has an electrostatic charge and discharges due to metal contact. This discharge type is the most common type of ESD in electronic devices and causes most of the ESD damage in their manufacturing. CDM discharge depends mainly on parasitic parameters of the discharge and strongly depends on size and type of component package. One of the most widely used CDM simulation test models is defined by JEDEC.
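The HBM network described above (a 100 pF capacitor discharged through 1,500 Ω) is simple enough to characterize with a first-order RC model. The sketch below is an idealization for intuition only; real simulators and standards also specify parasitic inductance, rise time, and waveform tolerances that this ignores:

```python
import math

def hbm_discharge(voltage, c=100e-12, r=1500.0):
    """Idealized HBM figures for the JEDEC 22-A114-B network cited above.

    Returns (peak current in A, time constant in s, stored energy in J)."""
    peak_current = voltage / r        # at t = 0, all the voltage is across R
    tau = r * c                       # current falls by a factor e each tau
    energy = 0.5 * c * voltage ** 2   # energy initially stored on the capacitor
    return peak_current, tau, energy

def current_at(t, voltage, c=100e-12, r=1500.0):
    """Discharge current i(t) = (V/R) * exp(-t / RC) in the ideal RC model."""
    return (voltage / r) * math.exp(-t / (r * c))

# A 2 kV HBM event: about 1.33 A peak, a 150 ns time constant, and 0.2 mJ.
peak, tau, energy = hbm_discharge(2000.0)
```

Even this crude model makes the scale clear: the entire event is over in well under a microsecond, which is why ESD damage so often goes unnoticed at the time it occurs.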
Other standardized ESD test circuits include the machine model (MM) and transmission line pulse (TLP). - Automotive Electronics Council, which defines in some of its standards, ESD test qualification requirements for electronic components used in vehicles - Dielectric wireless receiver - Electric arc - Electromagnetic pulse - Electrostatic Discharge Association - Electrostatic voltmeter - Latchup, for qualification testing of semiconductor devices, ESD and latchup are commonly considered together - Spark gap - Static electricity - Wimshurst machine
<urn:uuid:392e07c5-c9bf-433d-85ed-7d64861823c8>
CC-MAIN-2021-43
https://en.m.wikipedia.org/wiki/Electrostatic_discharge
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.56/warc/CC-MAIN-20211020024111-20211020054111-00310.warc.gz
en
0.915636
2,513
4.1875
4
Advances in LED Thermal Management Contributed By Electronic Products LEDs are often described in marketing materials as "cool" lighting, and in fact LEDs are cool to the touch because they generally don't produce heat in the form of infrared (IR) radiation. On the other hand, LEDs generate heat in the diode semiconductor structure (in addition to photons) and this heat must exit the system through conduction and convection. Consequently, luminaire designers must be conscious of potential heat dissipation challenges and how those challenges may affect LED performance, longevity, and even lamp safety. Elevated junction temperatures have been shown to cause an LED to produce less light (lumen output) and exhibit a lower forward voltage. Over time, higher junction temperatures may also significantly accelerate chip degeneration, perhaps by as much as 75 percent with an increase from about 100°C to 135°C during regular use. Engineers and material scientists have been developing new LED-related thermal management solutions including improved drivers, diaphragm-driven forced convection methods, better heat sinks, and even the introduction of graphite foam as a cooling medium. This article will first describe three junction temperature considerations--basic thermal resistance, power dissipation, and junction temperature measurement--then briefly look at advances in each of the aforementioned approaches to improved LED thermal management. Junction temperature considerations When considering LED thermal management, there are generally three factors that tend to act on junction temperature. These are the ambient air temperature, the thermal path between the LED junction and the surrounding environment (the thermal path, of course, should be optimized to encourage natural heat convection) and the LED's efficiency. Ambient temperatures will vary by application, so that luminaire designers will want to pay attention to how designs will be used in real world environments.
For example, a few years ago the Rensselaer Polytechnic Institute's Lighting Research Center and the Alliance for Solid-State Illumination Systems and Technologies experimented with LEDs in various open air, semi-ventilated, and enclosed environments. Board temperatures for a 12-watt LED reached 60°C in the enclosed environment, while a 26-watt LED's board temperature rose to 119°C. As for efficiency, LED efficiency will vary based on several factors, with some devices converting as much as 80 percent, or perhaps even more, of the input electrical power to heat. Efficiency becomes more of an issue as LEDs increase in power. For example, when LEDs were primarily used as indicator lights, current levels were only a few milliamps, whereas hundreds of milliamps or even amps are becoming commonplace in present day applications. Measuring thermal resistance, power dissipation, and junction temperature In an excellent application note associated with its XLamp XR family of LEDs, Cree offers suggestions regarding how to measure LED thermal resistance, power dissipation, and junction temperature. The thermal resistance between two points, which is often measured in degrees C per watt, may be thought of as "the ratio of the difference in temperature to the power dissipated." This ratio should be calculated for the LED junction to the thermal contact or solder point typically found at the bottom of an LED package and for the thermal contact to the ambient. The sum of these measurements represents the thermal resistance for the LED as a whole. Power dissipated, again according to Cree, "is the product of the forward voltage and the forward current of the LED." To assure satisfactory lifetime of the device, good efficiency, and proper LED color, junction temperature must be maintained within a specified band. Junction temperature (Tj) may be calculated by adding the product of the LED's overall thermal resistance (R) and the power dissipated (Pd) to the ambient temperature (Ta).
Tj = (R x Pd) + Ta Further information on calculating LED junction temperature can be found in a previous article Calculating LED Junction Temperature in Lighting Applications in Digi-Key’s Lighting TechZoneSM. Improved LED drivers and temperature sensors Since electrical characteristics, such as the forward voltage of the LEDs, will drift with temperature this has to be taken into account when designing driver circuitry. Thus LED drivers are at the front line of LED thermal management. Manufacturers such as National Semiconductor or Texas Instruments, for example, are developing and producing LED drivers and companion temperature sensors that work together to adjust the current flow to an LED based on that LED's junction temperature profile. The temperature sensors are designed with enough margin to be able to both detect an over-temperature problem and at the same time not trigger a false alarm under normal operating temperatures. LED drivers also often employ a thermal shutdown failsafe mechanism, so if the drivers exceed a specified temperature, typically 125°C to 150°C, they will turn off along with the LED. The driver-as-thermal-manager approach may also be combined with occupancy monitoring solutions that will reduce an LED's lumen output when it is turned on in an unoccupied room or as a room's natural light illumination changes over the course of a day. Improving the heat sink Without good heat sinking, the junction temperature of the LED rises, and this causes the LED characteristics to change. Heat sinks seek to transfer heat from the LED to the air, which as a more fluid medium, may naturally move the heat away from the LED, helping to keep the junction temperature lower. When considering any heat sink luminaire designers should consider the heat sink's surface area, aerodynamics, thermal transfer, and mounting (including flatness). 
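The two formulas above translate directly into a couple of helper functions. The component values in the example below are hypothetical, chosen only to illustrate the arithmetic; they are not taken from the article:

```python
def power_dissipated(v_forward, i_forward):
    """Pd: the product of forward voltage (V) and forward current (A)."""
    return v_forward * i_forward

def junction_temperature(r_total, pd, t_ambient):
    """Tj = (R x Pd) + Ta, with R in degC/W, Pd in W, Ta in degC.

    r_total is the sum of the junction-to-solder-point and
    solder-point-to-ambient thermal resistances, as described above."""
    return r_total * pd + t_ambient

# Hypothetical example: an LED driven at 3.2 V / 0.7 A, with a 10 degC/W
# total thermal resistance, mounted in a 40 degC enclosure.
pd = power_dissipated(3.2, 0.7)            # about 2.24 W
tj = junction_temperature(10.0, pd, 40.0)  # about 62.4 degC
```

A designer would then compare the computed Tj against the band specified in the LED's datasheet to confirm the heat sink and ambient assumptions are adequate.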
Generally, thermal transfer takes place on the heat sink's various fin surfaces, so that a heat sink with a greater surface area, either as the result of more fins or a larger overall physical size, should move more heat away from the LED. On balance, however, heat sinks should also be aerodynamic so that heated air may move quickly and freely; as a result, a heat sink with many small but densely packed fins may actually discourage air flow and undermine the expected advantage of having greater surface area. As a further consideration, there should be a balance between surface area and fin thickness, since a thicker fin tends to offer superior thermal transfer when compared to relatively thinner fins. So again, if a heat sink maker increases the thermal transfer rate with thick fins, there is a cost in terms of total surface area. Lastly, the contact point between the heat sink and the LED should be as flat as possible; if there is space between the LED and the heat sink mounting, the thermal path will not be as direct. To address the various requirements for surface area, aerodynamics, thermal transfer, and even mounting flatness, researchers at The Singapore Institute of Manufacturing Technology have proposed a new method of heat sink manufacturing called liquid forging. This technique allows for pore-less heat sink designs in very complex geometric shapes specifically selected to boost airflow without surrendering too much surface area. According to the researchers, "liquid forging is an innovative hybrid casting and forming process [wherein] molten metal/alloy is poured into a die cavity and squeezed under pressure during solidification to form metal components in a single process." Heat sinks manufactured using the method and a combination of aluminum alloys with a copper base have been shown to have superior thermal performance (at approximately four times better thermal conductivity) than more typical, commercial extruded, machined, and die-cast heat sinks.
The technique also allows for more intricate fin and pin design to potentially improve heat convection thanks to a better balance between thermal mass and surface area, while the non-porous surface created with liquid forging eliminates air pockets and other air flow obstacles. There is, of course, still work to be done before liquid forged heat sinks are readily available, but the technology is promising. Separately, companies such as Nuventix are combining improvements in heat sink design with forced convection to dramatically increase the movement of air through the heat sink. The Nuventix solution is called SynJet. SynJet uses an oscillating diaphragm to create pulses of high velocity and rather turbulent air flow. This airflow pulls air in its wake, which is referred to as entraining, and thereby increases the volume of air moving over the heat sink. According to Nuventix, this airflow also improves heat transfer. Figure 1: Nuventix SynJet significantly improves airflow. The SynJet is in production and readily available for luminaire designers to employ. Graphite foam for cooling Another advance in LED thermal management comes from the U.S. Department of Energy's Oak Ridge National Laboratory, which has developed graphite foam that wicks heat away from an LED lamp, reducing operating temperatures by 10 degrees or more. This particular LED thermal management advance is aimed at large outdoor lighting systems, such as those used by municipalities for roadside lamps or by businesses in parking lots and structures. The foam's graphite crystal structure contains a combination of internal air pockets and networked ligaments that wick heat away from the LED in a fashion similar to a more conventional heat sink. Graphite foam is light and porous with 25 percent density, making it easy to machine into heat sinks, but with the superior thermal conductivity afforded by the pure-carbon material (compared to conventional metal heat sinks).
This technology is already being employed in some designs and may spark research into similar material science solutions for LED thermal management. High temperatures can both shorten the life and impact the brightness of LEDs. Studies have shown that decreasing the operating temperature by 10 degrees can double the lifetime of LEDs. As a result, virtually all luminaires require some type of heat-sink or other thermal management technology. This article has examined junction temperature, discussed how to measure thermal resistance and power dissipation, and presented several solutions for reducing LED operating temperatures. Disclaimer: The opinions, beliefs, and viewpoints expressed by the various authors and/or forum participants on this website do not necessarily reflect the opinions, beliefs, and viewpoints of Digi-Key Electronics or official policies of Digi-Key Electronics.
<urn:uuid:8bb753ef-8975-4af6-8755-389e666e66c7>
CC-MAIN-2021-43
https://www.digikey.com/en/articles/advances-in-led-thermal-management
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00670.warc.gz
en
0.929167
2,053
3.296875
3
Can governments offer a superior alternative to global digital cryptocurrencies? The last major democracy-enhancing disruption seems to have been the smartphone, although it merely extends the existing technological infrastructures into your hand. The disruption of existing markets is not often a democracy-enhancing endeavor anymore. It seems as if the majority of new technological innovations involve software that is actually increasing wealth inequality because the incentive to create a new technological innovation is now primarily driven by wealth generation for or from the wealthy. Today’s new major software innovations are leading to gig-economies, rent-seeking, and precarious living, while existing networks that were initially set up to be democratizing are becoming pay-to-play and/or filled with targeted advertising. App stores create the illusion of massive variety and opportunity, but instead create new barriers to entry with developer fees. Tech and media conglomeration is leading to more wealth inequality, less choice, less access to information, and fewer diverse points of view. New technology companies revolve around AI, Big Data, and Analytics at the global enterprise level while midsize national company growth shrinks. Paywalls in the media and subscription services in social networks are leading to more social stratification, political discord, and less access to information. Software solutions are replacing jobs, leading to more gig-work. Automation and robotics are reducing jobs, leading to more wealth inequality and gig-work. Uber and Lyft are examples of creating precarious gig-workers. Advertising and subscriptions seem to be the only way to monetize social networks, leading them to adopt both, creating a culture of monetizing social thought and rent-seeking, further entrenching wealth inequality and encouraging more gig-work. What are the implications of further increasing wealth inequalities and widespread gig-work? 
People are being left out of civic decision making processes. They’re being left out of access to information, leading to poor choices at the voting booth. People have less material choice overall in general, equating to fewer healthy nutritional choices, and reduced access to healthcare. As services switch to subscription-based monetization, people face more fees, paywalls, and subscriptions in general, ensuring that the path to poverty becomes unavoidable. Younger individuals are strapped with student loans, too. They become caught in a loop of survival, leaving no time to speak out about corruption or how the wealthy affect government as they are merely living to survive, not living to thrive. This also hampers the ability to organize, and to get support they must jump through hoops of bureaucracy. This bureaucracy is often affected by austerity measures, potential corruption, and out of touch leaders, giving young individuals a sour view and distrust of government programs that were initially intended to help them. Meanwhile, the wealthy are making increasingly larger amounts of money from money on securities markets and rent-seeking. Celebrities, billionaires, and other large organizations are investing in cryptocurrencies, influencing a generation of young people who are now looking to cryptocurrency to gain a financial edge as their work and income prospects diminish. What does this mean for the near-future economy, and the near-future of digital currency? Cryptocurrencies are flourishing on social networks like Reddit and Twitter. Data analytics for targeted advertising and suggestion engines or recommendation systems found in social networks can manipulate the culture of society and affect peoples' habits and norms by subtly nudging social posts and advertisements into their feeds. Twitter is doing this, for example, by showing recommended tweets and recommended topics as well as sponsored posts. 
After starting a new account, users may be offered various new topics to follow, and they are increasingly finding out-of-place cryptocurrency tweets and ads in their brand-new feed, even if they had not expressed interest in that topic. Someone unfamiliar with the symbols and terminology used may be inclined to learn more about cryptocurrency from these sponsored posts and begin to develop what is known as “fear of missing out”, FOMO. In many ways the old economic systems are not providing a path out of poverty or providing ways to avoid falling into poverty. People are trying the new systems of cryptocurrencies, only to unknowingly participate in increasing wealth inequality even further. A generation of financially hard-pressed individuals are now trying to escape destitution by gambling on securities markets and the new cryptocurrency economies. The wealthy are investing in the currencies that will make them more wealthy, making those the most popular currencies, while large banks and financial institutions are allowing and encouraging their wealthiest customers to invest in cryptocurrencies as part of securities portfolios. These new economic systems don’t have the fail-safes and social protections that older economic systems still running today have built in, protections that try to: reduce wealth inequality; avoid depressions through stimulus; reduce poverty with direct cash transfers; help with emergencies; provide funding for essential public services such as clean water, fixed roads, community colleges, fire services and police services; and combat political and economic corruption. In fact, the new economic systems are typically anti-government and/or anti-state by intentional design, with blockchain and decentralization, including being anti-establishment and thus anti-democratic. In the new economic systems there are no methods for a state to fund crucial public services like schools, nor are there funding methods to combat systemic corruption.
A new type of corruption then arises in a global non-state context where national and international jurisdiction may be difficult to reach. What does a future filled with rising wealth inequality, software automation, robotics, more gig-work (fewer hours, less income), massive student loan debts, economic globalization, and machine learning hold for people? What happens when new jobs are increasing wealth inequality, while incomes aren’t rising the way they should? What happens when many people try to empower themselves using fun and cool-sounding cryptocurrencies that they hardly know anything about, without understanding how they work or how those cryptocurrencies may be biased against the individual's best interest? While there is currently no better government alternative to these new economies, people who aren’t getting on board with the new economic systems feel a very powerful sense of "fear of missing out" and of being left behind. As trust in traditional economic, political, and social institutions diminishes, eye-catching Vegas-inspired gamified apps are being created to make it incredibly easy and fun to get on board with cryptocurrencies without knowing the larger implications. Big media companies push news articles about the new emerging economy, touting revolutionary innovation in these sectors, and news aggregators like Google include cryptocurrency news outlets and ads in their feeds. Tech-hubs around the world are encouraging the use of cryptocurrencies, and big payment processors are creating easy paths to getting on board. Big financial institutions are doing this as well. Social networks like Twitter are nudging users with recommended posts and have major investments in the cryptocurrency space, and Facebook has worked for several years to create its own cryptocurrency (Diem) using blockchain. This is creating a strong socioeconomic cultural feedback-loop all across the world.
It's creating a self-perpetuation of habitual global digital cryptocurrency usage. These new pay-to-play software investments, or “innovations”, involve an increasingly high-risk culture within global securities markets. This culture features a strong draw of sports-fan-like addictive gambling with near-instant results that are associated with negative consequences and biases such as a reluctance to bet against desired outcomes. The cult-like nature can be seen as cryptocurrency market-watchers track "Whales"—large quantities of cryptocurrency in digital wallets—and their movements through various economic markets, similar to tracking big winners in fantasy sport leagues. That culture doesn’t exactly encourage altruism; instead, it fosters apathy or indifference towards issues of increasing wealth inequality and poverty facing their fellow citizens. It is essentially a money-cult without the physical money in hand. Can nation-states and governments offer a superior alternative to global digital cryptocurrencies and counter the downward spiraling cultural trend that is being entrenched as the new economic norm? If governments fail to produce an updated modern digital economic system that complements the already existing systems before a generation becomes engulfed by global cryptocurrencies, governments are at risk of being blamed for all the damages of continually increasing inequality that new software and cryptocurrencies are amplifying. Worse yet, governments may become powerless to encourage the adoption of such an alternative due to the entrenched trade-enabled network effects, the decentralized nature of the cryptocurrency systems, the entrenched hardware, and the cult-like anti-government stances that support the wealthier investors already lobbying governments. 
Grimmer still, beyond increasing examples of corruption and ransomware attacks, some of the most popular cryptocurrencies aim to become the singular global currency and may end up creating the most violence via cartels or authoritarian despots that may enforce monetary scarcity—potentially indefinitely. The anonymity and decentralized nature of some cryptocurrencies may encourage violence to get the "vault keys" in order to gain access to vast wealth in cryptocurrency wallets, while some of these keys may be lost due to time, memory, faulty hardware, and entropy which may make the cryptocurrency scarcity far worse. Extortionists, rent-seekers, scammers, toll-access gatekeepers, organized crime syndicates (ransomware), and mafia-like modern digital tyrants will potentially operate in less democratic areas while affecting democratic institutions and democratically-regulated markets globally. They challenge the rule of law and potentially weaken democratic institutions and norms through bribery, coercion, and media manipulation with the help of cryptocurrencies. It is irresponsible for nations to assume that they would be immune from these groups and their tactics, or that banning cryptocurrencies entirely would eradicate such groups. Nation-states and governments simply cannot afford to be left behind and outclassed by global digital cryptocurrencies. Democracy, national patriotism, anti-trust laws, regulation, social security and social protections, government-funded public services, human rights issues, rule of law, policing of corruption, civil resistance, and politics as they stand today... have no place in a world swamped by global digital cryptocurrencies with no superior government alternative. Thankfully, and crucially, solutions are being drafted to create better digital currencies such as a federal central bank digital currency. 
This federal central bank digital currency will be built on top of and complement the existing monetary system, so all the socially empowering government programs will continue, and potential new programs and state innovations may be created. The trust in the federal dollar would feed into the trust of the federal digital currency as these systems work together. In what ways must a federal central bank digital currency outclass the rest? The new federal digital currency should encourage individuals to be more active and informed in civics using their smartphones with local updates relevant to them. It would need to help bolster the existing fail-safes and social protections previous generations fought for, and it would probably expand them to help restore trust in social, economic, and political institutions. The apps and new local innovation centers and partnerships with libraries should provide information on how the poverty-stricken and homeless can participate in cash transfer programs. It should also show how anyone can take advantage of social-entrepreneurship federal incentive programs and grants. It should have features built into the system to allow for direct deposits (UBI). Creators and innovators would then have a new and safe way to get paid online, and donations for individuals and non-profit organizations would then become far easier to manage without the need for tools like Paypal, Stripe, Venmo and other payment processors. The government would need to find new revenue sources to pay for public services and initiatives through new forms of payment processing taxes from online transactions, trades, exchanges, and possibly new kinds of automation and automation-software/robotics taxes to offset the fast rising gig-economy and offset increasing automation. 
Existing software “innovations” like Twitter, Shopify, Square, Cashapp, and others would be incentivised to use the federal system, which would have lower fees and streamlined methods of interacting with the new digital currency, so that rapid adoption may take place before Facebook’s "Diem" cryptocurrency and its associated cultural shift takes root. While all of these applications of the federal digital currency may help tremendously, this new currency alone would not shift the ever-increasing tide of wealth inequality, cultural degradation, and the mounting financial and existential problems the younger generation already face. Besides removing the paralysis of student loan debt and taxing the wealthy more to reduce wealth inequality, what needs to be done is to turn disruption and innovation of existing markets back into democracy-enhancing and empowering endeavors. The United Nations Sustainable Development Goals (SDGs) are a great start, but what is missing from their goals is an understanding of the initial motivation to begin a new entrepreneurial venture and how that relates to innovation. Before a company can become a company and hire employees and contribute to the market cycle, there must be entrepreneurial motivation. The threat of starvation or destitution is a powerful initial motivator for some entrepreneurs, leading them to focus on what they know can bring in an income without considering whether it is good or bad for society as a whole. This may also lead to the wrong type of initial motivation and the wrong kind of entrepreneurial spirit and company culture. On the other hand, those who do not face such dire wealth-starting-points have less understanding and foresight of the longer-term consequences of their new entrepreneurial ventures. This leads to the types of “innovations” and “disruptions” seen as "safe bets" by venture firms that are becoming more frequent today.
Currently there is a void where two initiatives should stand. Firstly, a financial grounding (UBI) that individuals can stand on, so that avoiding the threat of destitution is no longer a primary motivator for profit-seeking entrepreneurialism; President Biden’s American Rescue Plan Act of 2021 is a great step regarding this first initiative. Secondly, social-entrepreneurship needs to be far more encouraged through federal awareness-raising (public service announcements), financial incentives, incubation, and grants. The federal digital currency would complement this support, and the two programs together would feed into each other's success.
There are many security threats, and your privacy, files and personal information are under attack. What can you do to stay safe when using your Windows computer? Use these top security tips.

Most security threats are the result of computers being connected to the internet; if a computer were never online, there would be far fewer problems. However, few people want to live offline and never use the internet. It is just too useful. So much of what we do these days requires a web connection that it is considered to be a basic right. Everyone needs access to online services, sites and information. However, the internet also allows malicious actors to hack into your computer, or at least try to, and to steal files and information. They may also try to infect your computer with malware and adware. These tips help you avoid security problems online and offline, but mostly online, because that is where most threats come from. (This article contains affiliate links.)

1 Use complex passwords

From reading an online newspaper to accessing your bank account to filing your tax returns, passwords are required in many places. Hackers try to guess people’s passwords, starting with common ones like ‘password’, ‘123456’ and similar easily guessed choices. They also try words in the dictionary, because people use single or multiple dictionary words that are easy to remember. Use long and complex passwords that contain letters, numbers and, if allowed, symbols like #!%() and so on. They are very difficult to guess, almost impossible in fact, so they keep your online accounts secure.

2 Use a password manager

Unfortunately, the best passwords are very complex ones, and this makes them impossible to remember. A password manager utility is essential: it remembers your passwords for you and enters them when you need to log in somewhere, like at a website in a browser. It can also help you generate secure passwords.
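Generating a password of this kind can even be scripted. As a minimal illustrative sketch (the length and symbol set below are arbitrary choices, not recommendations from this article), Python's standard `secrets` module produces cryptographically strong random choices:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and a few symbols."""
    alphabet = string.ascii_letters + string.digits + "#!%()"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different password every run
```

With 67 possible characters per position, a 20-character password from this sketch is far beyond practical guessing.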
Don’t forget to use a suitably complex master password for your password manager. It may be hard to remember, but at least you only have to remember one. Write it down and store it somewhere safe and not obvious. A sticky note attached to the computer screen is not a good place! Web browsers have built-in password managers, but many people prefer a third-party one because they have more features. Popular free password managers include Bitwarden, LastPass and KeePass, and there are many paid ones like Dashlane, RoboForm and 1Password.

3 Change your passwords

It is very tempting to use the same password across multiple sites, services and apps because it simplifies things. You may have one or more passwords that you have re-used. Get rid of them by changing any password that is used twice or more. The problem with using the same password for several online sites and services is that if a hacker somehow gets hold of it, they can get into everywhere you have used it, creating multiple security problems. You should also check whether your passwords have been leaked in security breaches at Have I Been Pwned.

4 Avoid web browser extensions

Web browser extensions are a security risk. The problem is that extensions have a lot of permissions and can usually see everything on the web pages you visit. This may include login details like usernames or email addresses, passwords and more. Extensions can track your location, see your browsing history and much more. Not all browser extensions are bad, but some have been found to collect information and transmit it to a server on the internet, or to inject adverts, change links, and even insert malware. If you install browser extensions, you are putting a lot of trust in the developer not to spy on you or collect information.
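The Have I Been Pwned password check mentioned above can also be scripted. The service uses a k-anonymity API: you send only the first five characters of the password's SHA-1 hash and compare the returned suffixes locally, so the password itself never leaves your machine. A sketch using only Python's standard library (the range-API URL is the one HIBP documents; error handling is omitted):

```python
import hashlib
from urllib.request import urlopen

def split_sha1(password: str) -> tuple[str, str]:
    """Return the 5-char hash prefix sent to HIBP and the suffix kept local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Query the HIBP range API; only the hash prefix is transmitted."""
    prefix, suffix = split_sha1(password)
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

For example, the SHA-1 hash of 'password' begins with 5BAA6, so only those five characters would ever be sent to the service.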
I went to the Chrome Web Store and clicked an extension at random, one of the highlighted ones. On selecting the Privacy tab, this is what it showed: it collects personally identifiable information, authentication information, location and user activity. What does all this mean exactly? Just take a look at the screenshot – does this extension really need to do this? This is what it can or might do; it does not mean that it actually does all of this or that the information collected is misused, but it has the potential to do all of it. It can collect your name, email address, age, passwords, security questions, PIN numbers and so on. It is best security practice to avoid extensions altogether, but if you must have them, keep them disabled and only enable them when you need them. Don’t let them run all the time.

5 Use a VPN – when out, even at home?

If you work away from the office or home on a laptop computer (or even a phone), you will no doubt make use of Wi-Fi hotspots in cafes, hotels, airports, trains and stations, shopping malls and so on. Wi-Fi is everywhere these days, but security may be low at these public hotspots and there is no guarantee that they do not track you, monitor your activities or gather information about you. Some Wi-Fi hotspots do not have any encryption and have an open connection that anyone can join. Who knows what the owner or other people on the network are doing? A little better are those that have encryption and require login with a password, but even these are not perfect. Most monitor, filter and possibly log your web browsing activities. A VPN adds security and privacy when using public Wi-Fi hotspots: it creates an encrypted connection to the internet that lets you browse the web without the hotspot owner or anyone else on the network being able to see what you do. It also unblocks the web and bypasses the hotspot’s filters, which are sometimes very restrictive.
6 Encrypt sensitive and private files

What if your computer was stolen? What if someone got their hands on it? Could they access your files? Yes, they could, if the contents of the disk are not encrypted. The computer could be booted up, or the disk could be removed and attached to another computer, to read all your personal files and private information. Not all files on the disk are important, but we all have information on the computer’s disk that we would rather not fall into the hands of a thief or hacker: things we need to remember, like bank details, accounts, maybe saved scans of important documents and so on. VeraCrypt is a good free encryption utility. Go to the Tools menu and select Volume Creation Wizard. The simplest option is to create an encrypted file container. Follow the instructions, which are very easy, and create a file as big as you need, such as 5, 10 or 20 GB. After creating it, return to the main VeraCrypt window, select a drive letter, select the file and mount it. It looks and works just like an extra disk drive on the computer. Any files stored in it are inaccessible when the drive is unmounted. If you just want to secure a few files, like your bank details or a scan of your passport, Encrypto from MacPaw is a good free utility. It is a tiny program that displays a little window on the desktop. Drop a file on it and it encrypts it with a password. Encrypted files dropped on the window are decrypted, provided you have the password, of course. Don’t forget to delete the original unencrypted file.

7 Encrypt the disk

It is best to encrypt the whole disk (SSD or HDD) so that nothing can be accessed without your authority. BitLocker can do this, but it is only available in certain versions of Windows, not the most common Home edition. To see if you have it, click the Start button and type ‘bitlocker’, or use the search box in the top right corner of the Control Panel window.
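As background to how such encryption tools work: they never use your typed password directly as the encryption key. Instead they stretch it through a deliberately slow key-derivation function, so that guessing attacks are expensive. A minimal illustration using Python's standard-library scrypt (the cost parameters here are common illustrative values, not the actual settings of VeraCrypt or Encrypto):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Stretch a password into a 256-bit encryption key (deliberately slow)."""
    return hashlib.scrypt(password.encode("utf-8"), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)   # a random salt, stored alongside the encrypted data
key = derive_key("my long passphrase", salt)

# Same password + same salt -> same key; change either and the key differs.
assert derive_key("my long passphrase", salt) == key
assert derive_key("my long passphrasf", salt) != key
```

The salt ensures two users with the same password get different keys, and the slow derivation is why a long passphrase plus a good tool holds up against brute force.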
If Windows does not have BitLocker, there are alternatives: VeraCrypt enables you to encrypt whole drives, even the startup drive. After encrypting it, the computer can only be started by entering the password, which massively increases security. Obviously, if you forget it, you will be in serious trouble, so write it down. Back up the disk before encrypting it, for safety.

8 Secure backups with encryption

It is easy to forget that backups on external disks are a security risk. Even if you encrypt all the files on your computer, the USB backup drive next to it might contain all your files unencrypted. External drives are easy to steal, so your data needs protecting. Any good backup software will offer to encrypt backups and password protect them so that only you can access the contents. Every backup app is different, so look in the configuration settings for the option to encrypt data.

9 Avoid clicking popup messages in browsers

Be very suspicious of any messages that appear on visiting a website saying that you must install or update something. This used to be common with Flash: a fake installer would not install Flash, but some adware or malware. Flash is dead now and no-one uses it, but malware creators have just moved on to other things. They may tell you that Chrome needs updating, a plugin or extension must be installed, and so on. Whether it is a popup message or whether it appears on the page in the browser, leave the site if you are asked to install or update anything. It is a scam.

10 Avoid clicking links in emails

Phishing emails are very common, but they seem to vary with the email service; perhaps some services automatically filter them out. I don’t know why, but they are more common with some email services than others. Phishing emails are often easily spotted when you know what to look for, like not mentioning you by name. ‘Dear customer’, or a greeting that uses your email address instead of your name, is a dead giveaway.
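Another scriptable check for the phishing signs described above is to look at the host a link actually points to, rather than the text it displays. Python's standard `urllib.parse` does this; the URLs below are made-up examples:

```python
from urllib.parse import urlparse

def link_host(url: str) -> str:
    """Return the host name a link really points at."""
    return urlparse(url).hostname or ""

# A familiar brand at the START of the host is a classic phishing trick;
# what matters is the registered domain at the END of the host name.
assert link_host("https://www.paypal.com/signin") == "www.paypal.com"
assert (link_host("https://paypal.com.account-verify.example/login")
        == "paypal.com.account-verify.example")
```

The second link looks like PayPal at a glance, but the host it resolves to is under `account-verify.example`, which is exactly why typing the site address yourself is safer than clicking.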
No matter how convincing an email is, don’t click links in it. Always open a browser, go to the website (your bank, Amazon, PayPal, Apple, Netflix or whoever) and log in to your account. You will see messages or notifications if there is a problem.

11 Update Windows

Windows used to let you disable updates, but that is hard with Windows 10. However, updates can be put off for a time. Do not put off Windows Update; in fact, it is a good idea to check for updates and get them sooner rather than later, because they always contain a number of important fixes for security flaws. Press Windows+I in Windows 10 to open the Settings app and click Update & Security. Click Check for updates and install them if they are available.

12 Update software

Software can contain security flaws and should be kept up to date. Web browsers automatically update with new features, bug fixes and security patches every so often, but it is possible to get them early by manually checking for updates. In Chrome and Edge, for example, open the menu and select Help > About. The browser checks for an update, then downloads and installs it if one is available. Other software may also have security fixes, so find out how to update it. Often there is a Check for updates menu option, or an option in the application’s settings to check automatically.

13 Enable the firewall

A firewall blocks unknown and malicious incoming network connections from the local network or the internet. It basically keeps hackers and malware out. It should be enabled by default, but don’t just assume this; check it. Press Windows+I to open Settings and type ‘firewall’ into the search box. Click Firewall and Network Protection to open Windows Defender Firewall. Many Control Panel functions are now in the Settings app, but Windows Firewall is still there if you want to access it that way. Make sure it is turned on.
14 Enable all Windows Security features

Windows 10 has good security built in, and it is all some people need to keep their computer and files safe from malware. It should be enabled automatically if you do not have any other security software installed, but it is a good idea to check that this is so. Press Windows+I to open the Settings app and click Update & Security > Windows Security. The protection areas should all say ‘No actions needed’. Click Open Windows Security to open the app (it can also be opened by clicking the shield icon in the popup panel in the taskbar). Everything should say ‘No action needed’; investigate any item that says anything else. It may be that something is turned off, and this is a security risk.

15 Use antivirus software

There was a time when Windows came with no security software at all and everyone had to use third-party applications. Although Windows Security is included in Windows 10, some people still prefer a third-party app because they usually provide even more security than is bundled with Windows. If Windows Security is not sufficient and you want to go further than the basics, such as protecting online browsing, credit cards and other important information, there is no shortage of alternatives. Try Avast, AVG, Trend Micro, Bitdefender or McAfee, to mention just a few.

16 Use a standard user account

If you are the only user of the computer, you have an administrator account. This has the most power and permissions: you can do anything. Unfortunately, this also applies to malware if it gets onto your computer somehow. Limit what malware can do by using a standard user account, which has fewer permissions. Open the Windows 10 Settings app and click Accounts > Family & other users. Create a new account by clicking Add a family member or Add someone else to this PC. Windows tries hard to get you to sign in with a Microsoft account or email account, but at every step of the way there is an option to skip it.
For example, click I don’t have this person’s sign-in information on the first step and Add a user without a Microsoft account on the second step. Eventually you get to a step where you can simply create a username and password. There are some limitations with a standard account, and occasionally you will need to log in with your admin account to do things like install software or configure Windows settings, but use the standard account as much as you can to increase security.

17 Set User Account Control to high

User Account Control prevents programs from making changes to Windows. When a program tries to change something non-trivial, a message appears on screen warning you, with an option to allow it or block it. It is a useful security feature that prevents malicious apps from doing things you do not want. Open the Windows 10 Settings app and enter ‘User Account Control’ into the search box. Click it in the search results and then set it to one notch below the maximum setting. This is the best combination of security and ease of use. It is more secure on the highest setting, but also more irritating, as it is more easily triggered.

18 Set a password for screen saver

Whenever you leave your computer, press Windows+L before you walk away. It locks the computer so that it can only be used again by entering your password. This prevents anyone else in the office or home from using your computer without your permission; if it was left unlocked, they would have admin access. Another security tweak is to set a screen saver and require a password on resume. If you forget to lock your computer when you walk away, perhaps to get a coffee or for a break, the screen saver will be activated and will prevent anyone else from using the computer. Open the Settings app and type ‘screen saver’ into the search box, then click it in the results. Select a screen saver like Mystify or Ribbons, which hides whatever you are working on, and tick the box On resume, display log-on screen.
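The Windows+L lock can also be triggered from a script, for example at the end of an automated task. On Windows this is the user32 `LockWorkStation` call; the sketch below is hedged to be a no-op on other systems:

```python
import ctypes
import platform

def lock_screen() -> bool:
    """Lock the current session, like pressing Windows+L (Windows only)."""
    if platform.system() != "Windows":
        return False  # nothing to lock elsewhere; report no-op
    return bool(ctypes.windll.user32.LockWorkStation())
```

A scheduled task could call this after a period of inactivity, as a belt-and-braces companion to the screen saver setting.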
19 Enable controlled folder access

Controlled folder access is a Windows security feature that prevents programs from changing files and folders they have no business changing. It limits the damage malware can do and helps to protect against ransomware, which encrypts your files until you pay a fee, sometimes a very large one. It is a powerful security feature, but it can prevent some programs from working properly, so it may be a feature you cannot live with. Turn it on and see if there are any problems with the software you use; it can always be turned off if necessary. Open the Settings app, enter ‘controlled folder access’ into the search box, then click it in the search results. It is a simple on/off switch, and when it is on there is an option to allow an app to bypass it. Try that if there is a problem with an app. Turn the feature off if there are too many problems – it depends on what software you use.
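For unattended setups, controlled folder access can also be toggled from an elevated PowerShell prompt with Defender's `Set-MpPreference` cmdlet. Below is a sketch of driving that from Python; the cmdlet and parameter are documented Defender names, but treat the wrapper as illustrative (it only takes effect on Windows, run as administrator):

```python
import platform
import subprocess

def cfa_command(enable: bool) -> list[str]:
    """Build the PowerShell command that toggles controlled folder access."""
    state = "Enabled" if enable else "Disabled"
    return ["powershell", "-Command",
            f"Set-MpPreference -EnableControlledFolderAccess {state}"]

if platform.system() == "Windows":
    # Requires an elevated (administrator) session to succeed.
    subprocess.run(cfa_command(True), check=True)
```

Building the argument list separately means the exact command can be inspected or logged before anything is changed.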
By Jeff Kronenfeld

Mental health has become one of the central themes of 2020 thanks to COVID-19 and the resulting societal shutdown. In fact, the psychological spillover from coronavirus is projected to evolve into an entirely separate pandemic, according to the Journal of the American Psychiatric Nurses Association (JAPNA). Like the virus itself, the “second pandemic” is nothing to ignore. The United Nations, World Health Organization and other academic sources such as the Journal of the American Medical Association have also sounded the alarm about a potential mental health crisis coming down the pipeline. The JAPNA study, however, calls for the implementation of “new mental health interventions” and “collaboration among health leaders” in order to prepare for mobilization when the masses are seeking psychological assistance. While psychedelic medicines were not explicitly cited in the study, these drugs offer an array of treatments that just so happen to address many of the mental health issues brought on by the COVID-19 pandemic, including depression, anxiety, PTSD, and paranoia. Specifically, psychedelic-assisted psychotherapy, which is on the brink of legalization in Oregon, may serve as one such model to assuage the psychological fallout from COVID-19.

Causes of the Mental Health Pandemic

So, how can COVID trigger a mental health crisis? The answer is: easily. At the time of writing, over 121,000 Americans have died from COVID-19 and more than 2.3 million have been infected, according to data from Johns Hopkins University. The authors of the JAPNA article note that survivors of ICU treatment face an elevated risk for depression, posttraumatic stress disorder (PTSD), sleep disturbance, poor quality of life, and cognitive dysfunction. Those who contract COVID are not the only ones facing psychological trauma from the pandemic, however.
Healthcare workers on the frontlines are at a heightened risk of experiencing severe trauma, PTSD, anxiety, and depression from COVID. Family members of coronavirus patients also face heightened distress, fear, and anxiety, all of which are likely aggravated by the restrictions on hospital visits and lack of testing. The rapid influx of COVID-19 cases also has the potential to decrease capacity for treating other patients, such as those experiencing psychological issues. Moreover, even people who have not directly dealt with COVID may experience mental health troubles. A lot of anxiety exists around virus exposure, triggered whenever people have to leave the house for basic reasons, such as going to the grocery store or bank. The media’s inconsistent, doomsday coverage of the pandemic adds to the confusion around what’s going on, resulting in extreme fear, information overwhelm, and hysteria. The unintended consequences of a nationwide shutdown are also proving to have a negative impact on mental health, according to a study published in European Psychiatry (EP). Lack of social interaction, specifically, is a well-known risk factor for depression, anxiety disorders and other mental health conditions. Further, the study warns that the longer such policies are in effect, the more risk they pose to those with preexisting mental health issues. “Most probably we will face an increase of mental health problems, behavioral disturbances, and substance-use disorders, as extreme stressors may exacerbate or induce psychiatric problems,” the EP authors write. News from the economic front is also concerning. The IMF projects global GDP will contract by 3 percent this year—the most severe decline since the Great Depression—with the US GDP predicted to drop by a whopping 5.9 percent. Data from the Bureau of Labor Statistics show more than 40 million Americans have filed for unemployment benefits since mid-March, a number that will likely increase.
For many, job security means financial stability, which generally ties into one’s mental wellness. Research published in Clinical Psychological Science found that people who lost their job, income and housing during the Great Recession were at a higher risk of depression, anxiety and substance abuse. This is particularly troubling considering the Great Recession only caused a 0.1 percent drop in global GDP, a decline 30 times smaller than the contraction projected for the COVID-19 crisis. Moreover, suicide rates in the US are directly related to unemployment. In fact, for every one-percentage-point increase in the unemployment rate, the US suicide rate rises 1.6 percent, according to a study in the journal Social Science & Medicine. Looking at all of these factors combined, a mental health crisis seems imminent. A report from the Well Being Trust predicts that COVID-19 and its associated stressors will cause anywhere from 27,644 to 154,000 deaths from alcohol, drugs and suicide. The results of a recent poll by the Kaiser Family Foundation suggest our trajectory could already be trending towards the worst-case scenario. The poll shows that 56 percent of Americans surveyed believe the outbreak has negatively impacted their mental health. But that number rose to 64 percent for those who experienced income loss. How Can Psychedelics Help? Psilocybin, MDMA and ketamine combined with psychotherapy show promise for treating an array of mental health conditions, many of which happen to be brought on by the pandemic. Studies show that psilocybin-assisted therapy decreases depression and anxiety in patients with life-threatening diseases, such as cancer. Participants reported reduced feelings of hopelessness, demoralization, and fear of death. Even 4.5 years after the treatment, 60 to 80 percent of participants still demonstrated clinically significant antidepressant and anti-anxiety responses.
While we do not advocate for those sick with coronavirus to eat mushrooms, these studies suggest that psilocybin may be effective in treating the extreme fear, anxiety and depression activated by the virus and global shutdown. MDMA-assisted psychotherapy also promises major relief from pandemic-related trauma. Multiple studies show that it is a profound tool in the treatment of PTSD for military veterans, firefighters and police officers, with no adverse effects post-treatment. MDMA therapy could be particularly beneficial to healthcare workers, survivors of extreme COVID cases or those who lost a loved one to the disease, all of which can inflict significant trauma and, therefore, PTSD. “We found that over 60 percent of the participants no longer had PTSD after just three sessions of MDMA-assisted psychotherapy,” says Brad Burge, the director of strategic communications at MAPS. “We also found that those benefits persisted and people actually tended to continue getting better over the next year without any further treatments.” Ketamine (and the esketamine nasal spray) treatment, on the other hand, is already available in North America. It is especially effective in easing treatment-resistant depression, bipolar disorder, chronic pain, and PTSD, all of which could be exacerbated by pandemic-related stressors. Keep in mind, however, that using psychedelics at home is different from receiving psychedelic-assisted psychotherapy. Catherine Auman, a licensed family and marriage therapist with experience in psychedelic integration, warns that now may not be the best time to use psychedelics, especially in a non-clinical setting. She worries that pandemic-related stressors could impact a patient’s psychological state. “Psychedelics are powerful substances and are best to do at a time in a person’s life when they’re feeling more stable, not less,” Auman explains.
“This is good advice whether someone is using them recreationally or therapeutically.” Will COVID-19 Impede Psychedelic Research and Delay Public Access? The pandemic has impeded both psychedelic research efforts and access to currently available therapies. We’re essentially at a standstill until COVID is controlled. MAPS is among the few organizations, if not the only one, with FDA permission to carry on research, but at a reduced scale. When we first spoke with Burge for this story, MAPS was on its first session of Phase 3 MDMA clinical trials. More recently, however, the FDA allowed MAPS to end the first round of Phase 3 early with only 90 of the 100 planned participants enrolled. Burge confirmed MAPS is already preparing for its second and final Phase 3 clinical trial. He predicts the DEA could reschedule MDMA as early as 2022. Usona Institute temporarily paused all in-person activities related to its Phase 2 clinical trials looking at psilocybin for major depressive disorder, according to its April newsletter. Usona is still recruiting participants for clinical trials at five sites, however. Compass Pathways is not currently accepting any new patients in its clinical trials looking into the impact of psilocybin on treatment-resistant depression, according to a statement. The company continues to support already enrolled patients remotely, when possible within the protocol. Pre-screening of potential study participants continues where possible, too. Field Trip Health is a recently formed network of clinics offering ketamine-assisted psychotherapy. The facility opened its first clinic in Toronto in March. But, after seeing one patient, it promptly shut down due to the accelerating spread of COVID-19. The decision for Field Trip Health to close its clinic was relatively easy, according to Ronan Levy, the company’s executive chairman. They didn’t have large numbers of patients actively receiving treatment yet.
But, the pandemic has forced the organization to quickly adapt. “We launched a digital online therapy program, so patients can self-refer or have referrals to our psychotherapists, who are trained in psychedelic-assisted psychotherapy, with specific protocols and behavioral therapies,” says Verbora, Field Trip Health’s medical director. “Long term, as these clinics start to open up again, we’ll have dual streams. We’ll be able to sort patients in the clinic for ketamine-assisted psychotherapy, but some of their care may be able to be done from home.” While the COVID-19 pandemic has hampered research efforts in the short term, the movement around the healing properties of psychedelic medicine is still going strong. “The path to acceptance might be slowed down a little bit due to COVID,” Verbora says. “But the current path that’s being undertaken by a number of different groups and institutions is one that’s going to lead to profound changes in the way we approach mental health.” The timing couldn’t be more perfect. About the Author Jeff Kronenfeld is an independent journalist and fiction writer based out of Phoenix, Arizona. His articles have been published in Vice, Overture Global Magazine and other outlets. His fiction has been published by the Kurt Vonnegut Memorial Library, Four Chambers Press and other presses. For more info, go to www.jeff-k.com.
<urn:uuid:1362bbc8-2d67-430f-8a54-51b9764b757e>
CC-MAIN-2021-43
https://www.psychedelicstoday.com/2020/07/20/the-second-pandemic-is-psychedelic-assisted-therapy-the-answer-to-the-mental-health-crisis-caused-by-covid-19/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587770.37/warc/CC-MAIN-20211025220214-20211026010214-00630.warc.gz
en
0.94197
2,244
2.828125
3
Nearly 150 years ago, three Arab horsemen were galloping through the desert of Transjordan after completing a secret mission on behalf of a French diplomat stationed in Jerusalem. Local Bedouin had attacked them, injuring one of the riders, but they still got away with an invaluable prize: an archaeological treasure that would reshape the early history of the Bible, of ancient Israel, and, in a way, of God himself. That treasure was a nearly 3,000-year-old inscription in which the king of Moab boasts of his victories against the Kingdom of Israel and its god YHWH. Called the stele of Mesha, it contains the earliest known extra-biblical mention of the deity worshipped by Jews, Christians and Muslims and, since its discovery in 1868, it has fueled the argument over the historicity of the Bible. On one hand, the stele confirms some of the names and circumstances found in the biblical texts on the monarchic period, and may even mention King David himself; and it attests to the existence of a strong cult of Yahweh in ancient Israel. On the other hand, it suggests that the culture and religion of the ancient Israelites may have been radically different from Judaism today. The ancient Hebrews may have been much closer to their much-maligned Canaanite adversaries than the Bible lets on. “Scholars have had the Bible for millennia, and parts of it are considered plausible by historians, but when you find an inscription that comes from the distant past, from the very time when these things happened, it suddenly becomes real,” says Matthieu Richelle, a professor of Hebrew Bible and one of the researchers behind an exhibition at the College de France, in Paris, celebrating the 150th anniversary of the stele’s discovery.
Found and lost The history of the artifact’s recovery is no less sensational than its content. The inscription was first reported by an Alsatian missionary, who had seen it among the ruins of Dhiban, an ancient Moabite town east of the Dead Sea. At a time when amateur archaeologists and explorers were already scouring the Levant for evidence of the Bible’s historical accuracy, the news set off a race between colonial powers – mainly France, England and Germany – to take possession of the stele. It was Charles Clermont-Ganneau, an archaeologist and diplomat at the French consulate in Jerusalem, who had sent those horsemen to take an impression, also known as a “squeeze,” of the text. This was done by placing a wet paper sheet on the stone and pressing it into the indentations created by the letters. But while the paper was drying, Clermont-Ganneau’s envoys became involved in a brawl with a local Bedouin tribe. With their leader injured by a spear, they snatched the squeeze off the stone while it was still wet (tearing it into several pieces in the process) before escaping. This act would prove vital for the preservation of the text, because soon after, the Bedouin decided to destroy the stele, breaking it into dozens of fragments. Some historians claim they did so because they believed there might be a treasure inside, but Richelle says it was likely an act of defiance toward the Ottoman authorities, who were pressuring the Bedouin to hand over the stone to Germany. It took years for Clermont-Ganneau and other researchers to locate and acquire most of the fragments, but in the end the French scholar managed to piece together about two-thirds of the stele, reconstructing most of the missing parts thanks to that impression that had been so adventurously saved. The reconstructed stele is still today on display at the Louvre Museum in Paris.
The vessels of God In the text, King Mesha recounts how Israel had occupied the northern regions of his land and “oppressed Moab for a long time” under Omri and his son Ahab – the biblical monarchs who reigned from Samaria and made the Kingdom of Israel a powerful regional player in the first half of the 9th century B.C.E. But Mesha goes on to tell how he rebelled against the Israelites, and conquered their strongholds and towns in Transjordan, including Nebo (near the traditional burial place of Moses) from whence he “took the vessels of YHWH and dragged them in front of Chemosh,” the main Moabite god. The "Mesha" in the stele is clearly identifiable as the rebellious Moabite ruler by the same name who appears in 2 Kings 3. In the biblical story, the king of Israel, Jehoram son of Ahab, sets out to put down Mesha’s rebellion together with his allies, the king of Judah, Jehoshaphat, and the king of Edom. The Bible tells of miracles wrought by God, who makes water appear to quench the thirst of the Israelite army, which then goes on to righteously smite the Moabites in battle. But the account ends with an abrupt anticlimax. Just when the Moabite capital is about to fall, Mesha sacrifices his eldest son upon the walls, “and there was great indignation against Israel: and they departed from him, and returned to their own land.” (2 Kings 3:27) While the events narrated in the two texts appear quite different, one of the most surprising aspects of Mesha’s inscription is how much it reads like a biblical chapter in style and language, scholars say. Mesha explains that the Israelite king Omri succeeded in conquering Moab only because “Chemosh was angry with his land” – a trope that finds many parallels in the Bible, where the Israelites’ misfortunes are invariably attributed to the wrath of God. 
It is again Chemosh who decides to restore Moab to its people and speaks directly to Mesha, telling him “Go take Nebo from Israel,” just as God routinely speaks to Israelite prophets and leaders in the Bible. And in conquering Nebo, Mesha recounts how he massacred the entire population as an act of dedication (“cherem” in the original) to his gods – the exact same word and brutal practice used in the Bible to seal the fate of Israel’s bitterest enemies (for example the Amalekites in 1 Samuel 15:3). Although there are only a handful of Moabite inscriptions out there, scholars had no trouble translating the stele because the language is so similar to ancient Hebrew. “They are closer than French and Spanish are,” explains Andre Lemaire, a philologist and historian who teaches at the Ecole Pratique des Hautes Etudes in Paris. “We hesitate whether to call them two distinct languages or just dialects.” So, the first key lesson of the Mesha stele may be that while the Bible often describes the Moabites and other Canaanites as vile pagans who conduct human sacrifices, there were huge cultural and religious overlaps between the early Israelites and their neighbors. “When the stele was discovered and published for the first time there were a lot of people who claimed it was a fake, because they couldn’t imagine there would be a Moabite inscription displaying the same ideology as the Bible,” says Thomas Romer, an expert in the Hebrew Bible and professor at the College de France and the University of Lausanne. “Today we can see that, on the contrary, the biblical authors were participating in a common religious ideology.” One god, two gods, many gods Based on the stele, it appears that the Yahweh that 9th century B.C.E. Israelites worshipped had more in common with the Moabite deity Chemosh than with Judaism’s later concept of a single, universal deity.
The fact that Mesha found a temple of Yahweh to plunder in Nebo contradicts the Bible’s contention that the exclusive worship of a single God had already been established and centralized at the Temple of Jerusalem in the time of King Solomon. The biblical narrative is also sorely challenged by the findings at the site of Kuntillet Ajrud, in the Sinai desert, where archaeologists discovered inscriptions on rock dedicated to “Yahweh of Samaria” and “Yahweh of Teman” – showing that this god was worshipped in multiple incarnations at different sanctuaries. Dated to the early 8th century B.C.E. (just a few decades after the Mesha stele), these inscriptions at Kuntillet Ajrud also include a crude engraved drawing of a male deity and a female deity, and describe the latter as Yahweh’s “Asherah.” This has led many scholars to conclude that at that time, around 3,000 years ago, there was no prohibition on making images of God, and that Yahweh had a wife. This is another possible parallel with Mesha, who tells us that when he massacred the 7,000 inhabitants of Nebo, he dedicated them to “the Ashtar of Chemosh.” Just like Yahweh had his Asherah, it is possible that the Ashtar mentioned in the stele may have been Chemosh’s wife, notes Romer. Mesha also gives us a clue that perhaps there were even more gods that the Israelites embraced. Before taking Nebo, the Moabite king conquered another stronghold built by the king of Israel east of the Dead Sea, Atarot, where once again he wipes out the local population (as an offering to Chemosh himself this time), and drags “the hearth of the altar of his Well-Beloved in front of Chemosh.” Who was this Well-Beloved (DWDH in the original) who was worshipped at Atarot? Experts are divided on this point. Lemaire, the French epigraphist, suggests it was merely a different name for Yahweh. 
Romer and Richelle point out that since the conquest of Atarot is mentioned before that of Nebo, it would be strange for Mesha to use an alternative appellation first and only name Yahweh on second reference. They believe it is more likely that DWDH was a separate local deity worshipped by the Israelites of Atarot. Whatever the number of divine figures we are dealing with, scholars agree that the Mesha stele reflects a world in which both Israelites and Moabites were not monotheists, but practiced, at best, a form of monolatry, which is the worship of a principal god while maintaining the belief in the existence of many deities. “In this inscription, you see very clearly that by this time Yahweh was the god of Israel and Chemosh was the god of Moab,” says Lemaire. “It was not a universal god, each kingdom had more or less its own national, territorial god.” In this world, the gods of other peoples were not worshipped, and might even be reviled, but their existence was recognized. The idea of a universal, all-powerful God was adopted only much later by the Jews, probably as a way to explain the destruction of the Temple of Jerusalem and the Babylonian exile in the 6th century B.C.E., explains Israel Finkelstein, an archaeologist at Tel Aviv University. When human sacrifice works While the Bible, written and compiled from different documents over centuries, was edited to reflect this faith in a universal God, we can find echoes of the earlier belief system of multiple national deities between the lines of the sacred text. For example, Jephthah’s question in Judges 11:24 – “Wilt not thou possess that which Chemosh thy god giveth thee to possess?” implies that whoever wrote that verse believed that the Moabite deity really existed. The same goes for the abrupt conclusion of the Israelite siege of Mesha’s capital in 2 Kings 3. 
The Bible is very critical of human sacrifice, as in the story of Abraham and Isaac, so it is surprising to find a story in which an enemy of Israel is rewarded for such an abominable act and manages to repel the chosen people. The biblical text does not specify to whom Mesha sacrificed his son and whose “wrath” arose to defeat Israel – though Chemosh is the best candidate for the role. These verses are likely the remnant of an older story, perhaps a chronicle of the Kingdom of Israel, which would have reflected the belief in other gods, says Romer. “This is memory of a military conflict that didn’t end very positively for the Israelites,” he says. “Maybe originally the text spoke of the wrath of Chemosh against Israel and then the redactor would have probably dropped the name of the Moabite god.” But is it possible to reconcile the very different versions of events narrated in 2 Kings and in the Mesha stele? One possibility is that the two texts are somewhat out of sync, with the Bible relating the first part of the conflict, which Mesha barely survived, and the stele relating a subsequent, more successful expansion of Moab into Israel’s Transjordanian territories, suggests Lemaire. “We should look at both texts critically,” cautions Finkelstein. The biblical text has multiple layers and its original core was probably not compiled before the 7th century B.C.E., some two centuries after the events it narrates, he says. Although we cannot date it precisely, Mesha’s stele was written much closer to the facts, but may include elements of Moabite propaganda, Finkelstein says. The text likely reflects the realities of the Levant sometime after 841 B.C.E., when Hazael, the king of Aram-Damascus, conquered vast swaths of Israel and other neighboring kingdoms.
Though Mesha keeps all the glory for himself, it is very likely that the Moabites were allies or vassals of the Arameans and simply took advantage of Israel’s recent defeat to liberate what they saw as part of their ancestral lands, Finkelstein says. King David in the house Hazael’s own victory is recorded in the so-called Tel Dan stele, discovered by Israeli archaeologists in 1993. In the inscription, which is believed to be more or less contemporary to Mesha’s, Hazael boasts of killing the king of Israel and the king of "beitdavid", i.e., the House of David. Many researchers interpret "beitdavid" as a reference to the kingdom of Judah and its founding father, which would ostensibly make it the only extra-biblical mention of David. But in fact, Lemaire, the French epigraphist, has been insisting since the 1990s that Mesha’s stele also mentions “beitdavid” (in a section where the Moabite king talks about how, after beating Israel, he expanded his territory south by taking a place called Horonaim). This part of the text, at the bottom of the stele, is fragmentary and damaged. Only the letters B VD are clearly legible, and the other scholars interviewed for this article did not agree with Lemaire’s extrapolation of the missing letters. But if Lemaire’s reading is correct, this would be the second mention of King David outside the Bible, and would further strengthen the argument that, at least in the 9th century B.C.E., he was considered the founder of the dynasty reigning over Jerusalem. The fragile squeeze of Clermont-Ganneau is on rare public display at the College de France exhibition from Sunday through Oct. 19, Richelle notes, adding that it is possible that some observant researchers may yet unlock more secrets of the Mesha stele.
<urn:uuid:698f9268-afb0-49b1-b8e3-96709de3bddf>
CC-MAIN-2021-43
https://www.haaretz.com/archaeology/.premium.MAGAZINE-what-yahweh-s-first-appearance-in-history-tells-us-about-early-judaism-1.6469415
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00030.warc.gz
en
0.966471
3,448
3.03125
3
The Ancient Andalusian: the Horse of Kings By Melanie Huggett, in association with the Pacific Association of the Andalusian and Lusitano Horse To look at an Andalusian, it is not hard to see why they are nicknamed the “Horse of Kings.” An elegant appearance and impressive movement make the Andalusian difficult to ignore. Praised by man for millennia, this ancient breed has long been thought to be the ideal riding companion. The Andalusian dates back to 30,000 BC, and archaeological evidence in the Iberian Peninsula (modern-day Spain and Portugal) indicates that the origins of the Iberian Horse date back to at least 25,000 BC, when their ancestors, the Sorraia, walked the rugged Iberian Peninsula. Cave paintings as old as 20,000 BC have been discovered in this area, featuring a profile that looks remarkably like the Andalusian’s. Other artifacts suggest man may have been riding them as early as 4,000 BC. The Andalusian’s handsome qualities complement the discipline of dressage. Shown here is Mareo, ridden by Sergio Velez, an Andalusian stallion at Olympic Andalusians, in Duncan BC. Photo: Horse Source Ltd., Courtesy of Olympic Andalusians These prehistoric horses were the foundation for the modern Andalusian, which was sculpted over thousands of years by the various human groups who occupied the region. Groups such as the Iberians, Celts, and Phoenicians brought their own horses to the peninsula, which were then selectively bred with the native horses to create the Andalusian we know today. The Andalusian itself went on to develop into many other breeds. Though many believe the Barb horse of North Africa gave the Andalusian its convex-shaped head, the reverse is actually true: the Andalusian imbued the Barb horse with its profile. The Lipizzan, Alter Real, Kladruber, Quarter Horse, and many European Warmbloods all trace their parentage to the Andalusian. The Andalusian was long known as the perfect war horse.
The ancient Greeks and Romans used them for their cavalry mounts. Xenophon, the famous Greek cavalry officer, highly praised the “gifted Iberian horses” and their role in helping Sparta defeat the Athenians around 450 BC. For many centuries it had no equal. However, the breed suffered a slight decline when heavily armoured knights became the principal cavalry, and a heavier mount was needed to support their weight. The Andalusian regained its enormous popularity during the Renaissance, when it became well known for its great ability in High School Dressage. Classical riding academies emerged at royal courts across Europe. It was during this time that the horse was known as the “Royal Horse of Europe,” as it became the principal mount for nobility in countries such as France, Germany, Italy, and Austria. 80% of Andalusians are grey, just like PK Regalo shown here, who is a pure Andalusian stallion at Bello Escasso Farms in Delta BC. Photo: Courtesy of Kara Lingam In Spain, the Andalusian has long been used for bull-fighting. The same characteristics that make the horse excel at dressage also make it an exceptional mount when facing a bull or cow. Their courage, agility, intelligence, and ability to turn quickly on the haunches make them ideal for dealing with an angry bull. Though the sport is controversial, Andalusians are still used today in Spain for this purpose. On average the Andalusian stands 15.2 to 16.2 hands high. They have a slightly convex or straight profile, large, lively eyes, and small wide-placed ears. They seem to have a permanently raised eyebrow. Their necks are broad and well-arched, and they have thick, flowing manes which, when combined with their arched necks, give them an elegant appearance. The shoulder is sloped and muscular. They have a short to medium back, sturdy but fine legs, and a low set tail. Their tails are bountiful and long. It is considered a fault to cut, trim, or pull an Andalusian’s mane or tail.
Overall, they have substantial but graceful bodies. 80 percent of all Andalusians are grey, with another 15 percent bay. Although Spanish registries have denied registration of Andalusians who were not grey, bay, or black in the past, in 2003 they re-allowed chestnut and dilutions such as palomino and dun, all of which are present in the breed, though rare. “Andalusians are easy to ride with their flowing elastic movement,” says Bunny Caton, co-owner of Alberta Andalusians, in Eckville, Alberta. Their conformation makes them superb athletes and graceful movers, with strength, agility, impulsion, and natural balance. Their rounded crest and croup, coupled with a short back, give them easy collection, though they require conditioning of their hind muscles before they can achieve a good extended trot. With their docile nature, Andalusians make wonderful family horses. Here, Richelle Eger reads a bedtime story to an Andalusian foal. Photo: Courtesy of Bette-Lyn Eger Andalusians are also very willing, kind, and docile, and are said to learn very quickly. “Having a kind, giving temperament, they are extremely sociable with their human companions,” says Bette-Lyn Eger of the Pacific Association of the Andalusian and Lusitano Horse (PAALH). However, they are also sensitive, and harsh or disrespectful training methods can lead to a nervous, unmanageable horse. Their intelligence also leads some people to push them quickly through training before they are truly ready. Their “ease [of training] and ‘try’ may make people ask for more than the horse has to give,” says Caton, whose Andalusians have placed as champions at many shows in North America. An Andalusian performs best when given time to gain a solid foundation. The breed does not mature until approximately seven years of age. The Andalusian makes a superb mount for both children and adults. They bond with their human partners, and any other livestock kept with them. 
“Visiting an [Andalusian] farm, we noticed a mare following just behind our three-year-old son. Concerned, we went to fetch him until the owner interceded, stating the mare was following him to keep an eye out for his safety,” said Eger, who also co-owns Mystique Andalusians, a breeding farm in Roberts Creek, BC. “They look out for their family.” According to Eger, the Andalusian “is highly interactive with their owners, and will do their best to please when treated with respect.” They are a breed that does best when kept busy, with lots of attention from their owner. “This is not a breed to purchase and leave in a stall without interaction,” she warns. Andalusians are quick and courageous, and excel at cattle work. They are a foundation breed of the modern Quarter Horse. Photo: Courtesy of Bette-Lyn Eger With their versatility, however, it should not be hard for an owner to find work for their Andalusian companion. The Andalusian excels in a wide variety of equestrian sports, and is capable of both English and Western disciplines. Today, the Andalusian can be seen in the dressage ring, jumping over fences, on the ranch doing cattle work, or in harness. An excellent cow horse, the Andalusian has an uncanny ability to read the thoughts of its rider and of cattle. According to Caton, “working cattle requires quick thinking and athletic traits along with good bone and hooves,” qualities which the Andalusian has in spades. “Quick thinking and agility are natural and easy for an Andalusian,” says Caton. The Quarter Horse is said to get its “cow sense” from its Andalusian blood. Andalusians are still well known for their ability to do high school dressage maneuvers. With their natural ability at the piaffe and passage, they perform much the same as they did at the royal courts centuries ago. Zorro del Bosque, a 2006 pure Spanish Andalusian colt. Zorro is owned by Keilan Ranch in Quesnel BC.
Photo: Rhonda Doram The word Andalusian, derived from the Spanish province of Andalusia, is often used as a generic term for horses that originated on the Iberian Peninsula: the Andalusian, Lusitano, Purebred Spanish Horse (PRE), Purebred Portuguese Horse (PSL), and Spanish Portuguese Horse (PSP). Before 1960, all these types were considered the same breed and registered together in Europe (though the Spanish had created their own studbook for the PRE earlier, in 1912, the Portuguese did not create a studbook for their own horse, the PSL, until 1960). In order to be registered in the PRE or PSL studbooks, a horse must be of a particular conformation and breed type set by the registries, with both parents being revised. In Canada and the US, Andalusians are registered by the International Andalusian and Lusitano Horse Association (IALHA). This association is dedicated to the education, promotion, and preservation of the Andalusian breed, and does so through clinics, shows, and publications. Under the IALHA, all Andalusian-type horses can be registered, as well as partbreds, as long as the sire and dam are registered with the association. Horses are considered purebred Andalusian regardless of whether they are PRE, PSL, or a cross of the Andalusian and Lusitano, the PSP. Three-quarter Andalusian Graciano (Sham), proves just how versatile this breed can be! Sham placed seventh at the 2008 Mount Cheam Horse Trials, in Chilliwack BC, and is owned by Bello Escasso Farms of Delta. Photo: Courtesy of Kara Lingam PAALH, based in western Canada, recognizes and promotes the Andalusian, PRE, Lusitano, PSL, and their cross (PSP), along with numerous partbreds. They are the only Canadian association to do so. The PAALH was created to bring together people who share in the love of the Andalusian and Lusitano horse. Despite their versatility, the Andalusian is a very rare breed in Canada, with only 400 horses registered in 2005. 
Those that do exist, however, showcase the abilities and personality of the breed well. The Andalusian is a family horse, and a kind partner. They have been man’s companion throughout the centuries, praised for their intelligence, athleticism, and grace. “The temperament and willingness of the Andalusian creates a partnership second to none,” says Caton. “One begins to believe they can read your mind!”

2008 Canadian Andalusian Show & Fiesta

Each year, PAALH hosts the Canadian National Andalusian Show and Fiesta (CNASF). Top Andalusians from across Canada and the Pacific Northwest travel to the show to share their passion for the Andalusian horse. The Canadian National Andalusian Show & Fiesta is a celebration of the Andalusian breed, held at Chilliwack Heritage Park in Chilliwack BC. Its versatility, strength, agility, beauty, and history are all on display for visitors to appreciate. Photo: Susan Kerr A variety of nationally recognized competitions, youth competitions, and other classes are offered to showcase the versatility, beauty, and talent of the breed. In addition to modern style dressage, jumping, and western pleasure classes, CNASF also celebrates the heritage of the breed with classical dressage, traditional costume, and Doma Vaquera, a style of riding developed in Spain from working with cattle. The Saturday evening Fiesta of the Royal Horse features a variety of exhibitions, such as Mexican Charro, a form of cowboy dressage; musical freestyle, in which costumed riders perform maneuvers to music; Flamenco dancing; and many others. The Fiesta is complimentary and sure to be enjoyed by all. Main photo: Colleen Pedrotti - Camelia de la Corazon, 1993 pure Spanish Andalusian mare, with her 2005 palomino pure Spanish/Portuguese filly, enjoy some quiet time, grazing on Quesnel BC pasture at Kielen Ranch.
Taqiyya (تقية; alternative spellings taqiyeh, taqiya, taqiyah, tuqyah) is a form of religious dissimulation, or a legal dispensation whereby a believing individual can deny his Islamic faith or commit otherwise illegal or blasphemous acts while at risk of significant persecution. It is based on Qur'anic verses that instruct Muslims not to "take for friends or helpers Unbelievers rather than believers... except by way of precaution," and not to utter unbelief "except [while] under compulsion". This practice is emphasized in Shi'ite Islam, whereby adherents may conceal their religion when they are under threat, persecution, or compulsion. Taqiyya, as it is known today, was developed to protect Shi'ites, who were usually in the minority and under pressure from the Sunni majority. In the Shi'ite view, taqiyya is lawful in situations where there is overwhelming danger of loss of life or property and where no danger to religion would occur thereby. The term taqiyya is generally not used among Sunnis. However, it has been discussed in a positive and clear manner by several Sunni scholars such as Ibn Kathir and Al-Suyuti, and the concept itself does exist within Sunni jurisprudence (Fiqh). There are also a few historically documented cases of Sunnis practicing taqiyya where it was necessary. For example, during the miḥna (inquisition) under the Caliphate of al-Ma’mun, a number of Sunni scholars used taqiyya, attesting to the Qur’an as having been created despite believing the opposite.

The word "taqya" in the Quran

There are 10 versions (so-called "qira'aat") of the Arabic Quran, named after their "readers". The most famous are Hafs and Warsh. In verse 3:28, there is a word that was written in the Uthmani script as تقىة.
Most versions (like Hafs and Warsh) read it as تقاة (tuqaat); however, one version (by Ya'qub al-Yamani) uses the word taqiya (تقية): Let not believers take disbelievers as allies rather than believers. And whoever [of you] does that has nothing with Allah, except when taking precaution against them in prudence (تقية, according to qiraa'a by Ya'qub al-Yamani). And Allah warns you of Himself, and to Allah is the [final] destination.

Taqiyya or its meaning in the Quranic tafsir (interpretation)

3:28 - Tafsir al-Jalalayn

Let not the believers take the disbelievers as patrons, rather than, that is, instead of, the believers — for whoever does that, that is, [whoever] takes them as patrons, does not belong to, the religion of, God in anyway — unless you protect yourselves against them, as a safeguard (tuqātan, ‘as a safeguard’, is the verbal noun from taqiyyatan), that is to say, [unless] you fear something, in which case you may show patronage to them through words, but not in your hearts: this was before the hegemony of Islam and [the dispensation] applies to any individual residing in a land with no say in it. God warns you, He instills fear in you, of His Self, warning that He may be wrathful with you if you take them as patrons; and to God is the journey’s end, the return, and He will requite you.

Tafsir al-Jalalayn, trans. Feras Hamza, altafsir.com. Royal Aal al-Bayt Institute for Islamic Thought, Kingdom of Jordan, https://www.altafsir.com/Tafasir.asp?tMadhNo=1&tTafsirNo=74&tSoraNo=3&tAyahNo=28&tDisplay=yes&UserProfile=0&LanguageId=2.

3:28 - Tafsir Ibn 'Abbas

(Let not the believers take) the believers ought not to take [the hypocrites:] 'Abdullah Ibn Ubayy and his companions [and] (disbelievers) the Jews (for their friends) so as to become mighty and honourable (in preference to believers) who are sincere.
(Whoso doeth that) seeking might and honour [by taking the hypocrites and disbelievers as friends] (hath no connection with Allah) has no honour, mercy or protection from Allah (unless (it be) that ye but guard yourselves against them) save yourselves from them, (taking (as it were) security) saving yourselves from them by speaking in a friendly way towards them, while your hearts dislike this. (Allah bideth you beware (only) of Himself) regarding the shunning of unlawful killing, unlawful sex, unlawful property, consuming intoxicants, false testimony and associating partners with Allah. (Unto Allah is the journeying) the return after death.

Tafsir Ibn 'Abbas, trans. Mokrane Guezzou, altafsir.com. Royal Aal al-Bayt Institute for Islamic Thought, Kingdom of Jordan, https://www.altafsir.com/Tafasir.asp?tMadhNo=0&tTafsirNo=73&tSoraNo=3&tAyahNo=28&tDisplay=yes&UserProfile=0&LanguageId=2.

Misuse of the Word

Critics of Islam are often ridiculed when some of them conflate the doctrine of taqiyya with lying in general. When the subject comes up, most ex-Muslims attest that they had never heard of taqiyyah until they saw people being accused of it on the Internet. Lying in general, as well as in specific situations such as commercial transactions, is condemned in various hadith, and the Qur'an condemns various groups for (allegedly) lying about Allah and Muhammad. However, there are some situations in the hadith literature in which Muhammad endorses deception, such as deceiving the opponent in warfare, to facilitate the murder of one of his enemies, when it is better to break an oath than to keep it, or to bring reconciliation between parties (Lying and Deception). This is not termed "taqiyya", though, and is in fact a stratagem of war that has been employed across the ages and cultures of the world.
As with any other believer in an ideology they are personally invested in, when a Muslim seems to be lying about Islam they are generally either simply deluding themselves, or misleading people for much the same reasons as adherents of other religions, who sometimes lie to further or defend their faith, rather than deliberately following some widespread "secret" doctrine of "taqiyya".

- Momen, Moojan, "An Introduction to Shi'i Islam", Yale University Press, pp. 39, 183, ISBN 978-0-300-03531-5, 1985.
- Stewart, Devin, "Islam in Spain after the Reconquista", Teaching Materials, The Hagop Kevorkian Center for Near Eastern Studies at New York University, http://www.nyu.edu/gsas/program/neareast/test/andalusia/2_p8_text.html.
- "Let not the believers Take for friends or helpers Unbelievers rather than believers: if any do that, in nothing will there be help from Allah: except by way of precaution, that ye may Guard yourselves from them..." - Quran 3:28
- "Any one who, after accepting faith in Allah, utters Unbelief,- except under compulsion, his heart remaining firm in Faith - but such as open their breast to Unbelief, on them is Wrath from Allah..." - Quran 16:106
- "Taqiyah". Oxford Dictionary of Islam. John L. Esposito, Ed. Oxford University Press. 2003. Retrieved 25 May 2011.
- "(unless you indeed fear a danger from them) meaning, except those believers who in some areas or times fear for their safety from the disbelievers. In this case, such believers are allowed to show friendship to the disbelievers outwardly, but never inwardly. For instance, Al-Bukhari recorded that Abu Ad-Darda' said, "We smile in the face of some people although our hearts curse them." Al-Bukhari said that Al-Hasan said, "The Tuqyah is allowed until the Day of Resurrection."
- Tafsir Ibn Kathir, The Prohibition of Supporting the Disbelievers - "Let not the believers take the disbelievers as patrons, rather than, that is, instead of, the believers — for whoever does that, that is, [whoever] takes them as patrons, does not belong to, the religion of, God in anyway — unless you protect yourselves against them, as a safeguard (tuqātan, ‘as a safeguard’, is the verbal noun from taqiyyatan), that is to say, [unless] you fear something, in which case you may show patronage to them through words, but not in your hearts: this was before the hegemony of Islam and [the dispensation] applies to any individual residing in a land with no say in it." - Tafsir al-Jalalayn (Surah 3 Ayah 28), trans. Feras Hamza, 2012 Royal Aal al-Bayt Institute for Islamic Thought - "All scholars of the Muslim Ummah agree on the fact that at times when one is forced, one can denounce Islam." - Husain bin Masood al-Baghawi, Tafsir Ma'alim at-Tanzeel, published in Bombay, vol. 2, P. 214. - "It is permissible to swear at Rasulullah when one is under duress and to recite the Kalima of Kufr in the fear of losing property or of getting murdered provided that the heart is at comfort." - Nizam al-Din al-Shashi, Usul al-Shashi, Chapter "Al Dheema", p. 114. - Virani, Shafique N., "The Ismailis in the Middle Ages: A History of Survival, a Search for Salvation", New York: Oxford University Press, p. 48, ISBN 978-0-19-531173-0, 2009.
This topic will tell you about the early testing, diagnosis, and treatment of colorectal cancer. If you want to learn about colorectal cancer that has come back or has spread, see the topic Colorectal Cancer, Metastatic or Recurrent. If you want to learn about anal cancer, see the topic Anal Cancer. Colorectal cancer means that cells that aren't normal are growing in your colon or rectum. These cells grow together and form polyps. Over time, some polyps can turn into cancer. This cancer is also called colon cancer or rectal cancer, depending on where the cancer is. It occurs most often in people older than 50. The exact cause of colorectal cancer is not known. Most cases begin as small growths, or polyps, inside the colon or rectum. Colon polyps are very common. If they are found early, usually through routine screening tests, they can be removed before they turn into cancer. Colorectal cancer usually doesn't cause symptoms until after it has started to spread. See your doctor if you have any of these symptoms: If your doctor thinks that you may have this cancer, you will need a test, called a colonoscopy (say "koh-luh-NAW-skuh-pee"), that lets the doctor see the inside of your entire colon and rectum. During this test, your doctor will remove polyps or take tissue samples from any areas that don't look normal. The tissue will be looked at under a microscope to see if it contains cancer. Sometimes another test, such as a sigmoidoscopy (say "sig-moy-DAW-skuh-pee"), is used to diagnose colorectal cancer. Colorectal cancer is usually treated with surgery, chemotherapy, or radiation. Screening tests can find or prevent many cases of colon and rectal cancer. They look for a certain disease or condition before any symptoms appear. Experts say that most adults should start regular screening at age 50 and stop at age 74. Talk with your doctor about your risk and when to start and stop screening.
Your doctor may recommend getting tested more often or at a younger age if you have a higher risk. Screening tests include stool tests, such as FIT, that can be done at home and a procedure, such as a flexible sigmoidoscopy, that is done at your doctor's office or clinic. Most cases begin as polyps, which are small growths inside the colon or rectum. Colon polyps are very common. Some polyps can turn into cancer. Some people have a medical or family history that can increase their risk for colorectal cancer. Because of their history, they may be more likely to develop polyps that could turn into colorectal cancer. Colorectal cancer in its early stages usually doesn't cause any symptoms. Symptoms occur later, when the cancer may be harder to treat. The most common symptoms include: Cancer is the growth of abnormal cells in the body. These extra cells grow together and form masses, called tumours. In colorectal cancer, these growths usually start as polyps in the large intestine (colon or rectum). If colon polyps aren't found and removed, they may turn into cancer. Cancers in the colon or rectum usually grow very slowly. It takes most of them years to become large enough to cause symptoms. If the cancer is allowed to grow, over time it will invade and destroy nearby tissues and then spread farther. Colorectal cancer spreads first to nearby lymph nodes. From there it may spread to other parts of the body, usually the liver. It may also spread to the lungs, and less often, to other organs in the body. The long-term outcome, or prognosis, for colorectal cancer depends on how much the cancer has grown and spread. Experts talk about prognosis in terms of "5-year survival rates." This means the percentage of people who are still alive 5 years or longer after their cancer was found. It is important to remember that these are only averages. Everyone's case is different.
And these numbers don't necessarily show what will happen to you. The estimated 5-year survival rate for colorectal cancer is: footnote 1 These numbers are taken from reports that were done from 2007 to 2013, before newer treatments were available. So the actual chances of your survival are likely to be higher than these numbers. A risk factor for colorectal cancer is something that increases your chance of getting this cancer. Having one or more of these risk factors can make it more likely that you will get colorectal cancer. But it doesn't mean that you will definitely get it. And many people who get colorectal cancer don't have any of these risk factors. There are lifestyle actions you can take to lower some of the risk factors for colorectal cancer. These actions include:

Getting older is a risk factor for colorectal cancer.

Your race and ethnicity
Ashkenazi Jews (Jewish people whose ancestors came from Eastern Europe) who have inherited certain genes are at a higher risk for getting colorectal cancer.

Your family's medical history
You are more likely to get colorectal cancer if one of your parents, brothers, sisters, or children has had the disease. Your risk is higher if this family member had colorectal cancer younger than 50 years old, or if more than one family member had the disease. Some common gene changes increase the chance of colorectal cancer. These changes are familial adenomatous polyposis (FAP) and Lynch syndrome, also called hereditary non-polyposis colorectal cancer (HNPCC). Many people with these changed genes will get colorectal cancer if they aren't carefully watched. Genetic testing can tell you if you carry a changed, or mutated, gene that can cause FAP or HNPCC.

Your medical history
Your chances of getting colorectal cancer are higher if you have had: Call your doctor if you have any symptoms of colorectal cancer, such as: Because colorectal cancer often doesn't cause any symptoms, talk with your doctor about screening tests.
Screening helps doctors find a certain disease or condition before any symptoms appear. Your family doctor or general practitioner can check your symptoms of colorectal cancer. You may be referred to a specialist, such as a gastroenterologist. If your doctor thinks you may have colorectal cancer, he or she may advise you to see a general surgeon or a colorectal surgeon. Colorectal cancer is treated by surgeons, medical oncologists, and radiation oncologists. If your doctor thinks you may have colorectal cancer, he or she will ask you questions about your medical history and give you a physical examination. Other tests may include: For people who have an increased risk for colorectal cancer, regular colonoscopy is the recommended screening test. It allows your doctor to remove polyps (polypectomy) and take tissue samples at the same time. When you are diagnosed with colorectal cancer, your doctor may order other tests to find out if the cancer has spread. These tests include: Routine screening can reduce deaths from colorectal cancer. Your risk for colorectal cancer gets higher as you get older. Experts say that most adults should start regular screening at age 50 and stop at age 74. If you are at higher risk, your doctor may recommend you start screening before age 50 or continue after age 74. Talk with your doctor about your risk and when to start and stop screening. You and your doctor will work together to decide what your treatment should be. You will consider your own preferences and your general health. But the stage of your cancer is the most important tool for choosing your treatment. Staging is a way for your doctor to tell how far, if at all, your cancer has spread. Surgery is almost always used to remove colorectal cancer. Sometimes a simple operation can be done during a colonoscopy or sigmoidoscopy to remove small polyps and a small amount of tissue around them. 
But in most cases, a major operation is needed to remove the cancer and part of the colon or rectum around it. If cancer has spread to another part of your body, such as the liver, you may need more far-reaching surgery. Chemotherapy uses medicines to destroy cancer cells throughout the body. Several medicines are often used together. Radiation therapy uses X-rays to destroy cancer cells. This is used for some types of cancer in the rectum. Radiation therapy is often combined with surgery or chemotherapy. To learn more, see Other Treatment. Cancers that have not spread beyond the colon or rectum may need only surgery. If the cancer has spread, you may need radiation therapy, chemotherapy, or both. Surgery, chemotherapy, and radiation can have serious side effects. But your medical team will help you manage the side effects of your treatment. This may include medicines for pain after surgery or medicines to control nausea and vomiting if you have chemotherapy. Talk with your doctor and medical team about your side effects. Some side effects, such as pain or tingling in your hands or feet that gets worse (peripheral neuropathy), may be a sign that your medicines need to be changed. For tips on how to manage side effects at home, see Home Treatment. After you have had colorectal cancer, your chances of having it again go up. It's important to keep seeing your doctor and be tested regularly to help find any returning cancer or new polyps early. After your treatment, you will need regular checkups by a family doctor, general practitioner, medical oncologist, radiation oncologist, or surgeon, depending on your case. Colorectal cancer comes back in about half of people who have surgery to remove the cancer. footnote 2 The cancer may be more likely to come back after surgery if it was not found in an early stage. Cancer that has spread or comes back is harder to treat, but sometimes treatments are successful.
For more information, see the topic Colorectal Cancer, Metastatic and Recurrent. Finding out that you have cancer can change your life. You may feel like your world has turned upside down and you have lost all control. Talking with family, friends, or a counsellor can really help. Ask your doctor about support groups. Or visit the Canadian Cancer Society website at www.cancer.ca. To learn more about colon and rectal cancer, go to the website of the: Your risk for colorectal cancer gets higher as you get older. If you are not at high risk, experts recommend regular screening for adults ages 50 to 74. footnote 3 Talk with your doctor about your risk and when to start and stop screening. If you have a very strong family history of colon cancer, you may want to talk to your doctor or a genetic counsellor about having a blood test to look for changed genes. Genetic testing can tell you if you carry a changed, or mutated, gene that can cause colon cancer. Having certain genes greatly increases your risk of colon cancer. But most cases of colon cancer aren't caused by changed genes. During treatment for colorectal cancer, you can do things at home to help manage your side effects and symptoms. If your doctor has given you instructions or medicines to treat these problems, be sure to also use them. In general, healthy habits such as eating a balanced diet and getting enough sleep and exercise may help control your symptoms. You can try home treatments: Other problems that can be treated at home include: Having cancer can be very stressful. Finding new ways of coping with your stress may improve your overall quality of life. These ideas may help: Your feelings about your body may change after treatment. Dealing with your body image may involve talking openly with your partner about your worries and discussing your feelings with a doctor. Having cancer can change your life in many ways. For help with managing these changes, see the topic Getting Support When You Have Cancer. 
For more information about learning how to live with cancer: Chemotherapy is the use of medicines to control the cancer's growth or relieve symptoms. Often the medicines are given through a needle in your vein. Your blood vessels carry the medicines through your body. Sometimes the medicines are available as pills. And sometimes they are given as a shot, or injection. Several medicines are used to treat colorectal cancer. There are also several medicines available for treating side effects. A combination of drugs often works better than a single drug in treating colorectal cancer. The most commonly used drugs are: Hair loss can be a common side effect with some types of chemotherapy. But hair loss usually isn't a side effect of these drugs. Your doctor may prescribe medicines that can help relieve side effects of chemotherapy. These side effects can include mouth sores, diarrhea, nausea, and vomiting. Your doctor may prescribe medicines to control nausea and vomiting. There also are things you can do at home to manage side effects. See Home Treatment for more information. Chemotherapy and radiation may be combined to treat some types of colorectal cancer. Radiation or chemotherapy given before or after surgery can destroy microscopic areas of cancer to increase the chances of a cure. Surgery to remove cancer is almost always the main treatment for colorectal cancer. The type of surgery depends on the size and location of your cancer. Side effects are common after surgery. You may be able to reduce the severity of your side effects at home. See Home Treatment for more information. Your doctor may suggest radiation therapy or chemotherapy if he or she thinks the cancer may come back (recur). If the cancer has spread to nearby lymph nodes, you may need chemotherapy after your surgery. Or if your surgery shows that the cancer has spread outside your colon or rectum, you may need radiation therapy. 
Sometimes after a bowel resection, the two ends of the colon or rectum can't be sewn back together. When this happens, a colostomy is performed. But most people don't need a colostomy. Radiation therapy uses X-rays to destroy colorectal cancer cells and shrink tumours. It is often used to treat rectal cancer, usually combined with surgery. It is used less often to treat colon cancer. It may also be combined with chemotherapy. Radiation may be given: Compared to surgery alone, radiation given before surgery may reduce the risk that rectal cancer will return, and it may help you live longer. footnote 2 People sometimes use complementary therapies along with medical treatment to help relieve symptoms and side effects of cancer treatments. Some of the complementary therapies that may be helpful include: Mind-body treatments like the ones listed above may help you feel better. They can make it easier to cope with cancer treatments. They also may reduce chronic low back pain, joint pain, headaches, and pain from treatments. Before you try a complementary therapy, talk to your doctor about the possible value and side effects. Let your doctor know if you are already using any of these therapies. Complementary therapies are not meant to take the place of standard medical treatment. But they may improve your quality of life and help you deal with the stress and side effects of cancer treatment. You may be interested in taking part in research studies called clinical trials. Clinical trials are based on the most up-to-date information. They are designed to find better ways to treat people who have cancer. People who don't want standard treatments or aren't cured by standard treatments may want to take part in clinical trials.

Citations
1. National Cancer Institute (2017). SEER cancer stat facts: Colon and rectum cancer. National Cancer Institute. https://seer.cancer.gov/statfacts/html/colorect.html. Accessed November 10, 2017.
2. Lewis C (2007). Colorectal cancer screening, search date November 2006. Online version of BMJ Clinical Evidence: http://www.clinicalevidence.com.
3. Canadian Task Force on Preventive Health Care (2016). Recommendations on screening for colorectal cancer in primary care. Canadian Medical Association Journal, published online March 15, 2016. DOI: 10.1503/cmaj.151125. Accessed April 6, 2016.

Adaptation Date: 8/18/2021
Adapted By: Alberta Health Services
Adaptation Reviewed By: Alberta Health Services
To learn more about Healthwise, visit Healthwise.org. © 1995-2021 Healthwise, Incorporated. All rights reserved. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
Termites are not only destructive but can also be a nuisance when they are all over your property. In an attempt to bring termite colonies to an end, traps and bait stations/systems have been used as one of the ways of eliminating these insects. This article discusses how these control mechanisms work and the different types.

What are Termite Baits and how do they work?

What they are

Termite baits are stations structured in such a way that they use the termites' natural behavior to attract them and kill them. These baits are simply defined as feeding stations which are made of wood material and an additional toxic substance. Basically, to make a good termite bait, you require paper, cardboard (or an alternative termite food), and any substance which is lethal to termites but acts slowly. When making your termite bait, you must remember that you are in a competition to keep termites from going to other sources of food like structural wood, stumps, and tree roots. Termite baits in most cases target subterranean termites, since they are the ones which live underground. Termite baits are about 30 cm in length and resemble vented cylindrical tubing. Each termite trap has a lockable cap, with tiny ventilation grills on the top part. Termite baits are placed deep into the ground; however, the top of the trap is left level with the soil. There are three main reasons why the baits are placed this way. Firstly, it is to avoid obstructions when you go about your daily activities in the garden. Secondly, it is so that they are not exposed, since they are supposed to serve as a trap. Finally, they are placed this way since most termites are found underground or crawling on the soil. You can either have your bait station underground or you can simply set up the bait stations next to mud tubes of active termites.
Usually, you do not place treated wood in the baits set underground unless you have detected some termite activity in the station.

How they work/how are they used?

Termite baiting systems work effectively. This is because they take into account the three main steps followed in most processes which aim at killing termites. Below are the reasons why bait systems work:

They attract termites

The first thing the baiting stations do is attract termites. This is achieved by ensuring Tasmanian oak timber forms part of the bait; this is one of the timbers considered by termites to be their favorite food. Another attractant used in making the baiting stations is Focus, which has the ability to attract termites from up to three meters away.

Hexaflumuron is made part of the baiting stations

This is an insect growth regulator which inhibits the normal growth of termites. When exposed to it, they can no longer shed their exoskeleton; the exoskeleton of termites is destroyed as soon as they are exposed to hexaflumuron. The bait carrying this growth inhibitor also contains cellulose, which attracts termites to it. Hexaflumuron is chosen since it acts slowly and does not cause instant death of the termites. Remember, your intention is to ensure it is transferred to the rest of the colony. If you were to use a substance which causes instant death, the termites would lie dead on or next to the termite bait, signalling to the other termites that there is danger and they should stay far away from the bait. In summary, there are two main things that the bait stations achieve due to hexaflumuron:
- It makes it hard for the termites to chew timber by softening their mandibles.
- It makes their exoskeleton very weak and soft. This way, the termites cannot undergo the growth process.
This not only terminates the current termites but also puts an end to their future, since they cannot reproduce.
Baiting stations eliminate the entire colony. Even though not every termite is attracted to the baits, most are. When the termites that have fed at the station return to their colony, they spread the poison, leading to the eventual death of all members of the colony.

Termite Bait Traps - Do It Yourself (DIY)

You can make bait traps at home without involving a professional. The only thing to remember when using homemade bait stations is that they may not deliver 100% success in eliminating termites when used alone. Whatever bait station you make, it must be non-repellent, since the whole idea behind a termite trap is to attract termites. Below are some easy-to-make termite bait traps.

A cardboard station
This is the simplest baiting method and involves very small costs. All you need is to take a cardboard and place it in areas suspected to be infested with termites. The cardboard serves both as a monitoring tool and as a way of fighting termites. Check it regularly so you do not give the termites time to come and leave. When you see a high number of termites on the cardboard, burn it immediately so that all the termites on it die.

Borax termite bait station
There are plenty of ways to make borax bait stations. The simplest involves borax and a baseboard: after buying borax from your nearest store, place it on the baseboard and position it where you have noticed a termite infestation. Borax is poisonous, and termites exposed to it spread it to the rest of the colony members, causing the death of the entire colony. A slightly more complicated borax bait station uses borax, wood, a shovel and polystyrene foam. Wood treated with these ingredients is placed in the area where the termites have been spotted. Termites pick up the poisoned foam, and this is what kills them. They do not die immediately, however, which gives them time to carry the poison back to the rest of the colony. When you use this bait, keep children and pets away.

Boric acid bait station
To make boric acid bait stations, you will need boric acid, sugar, a spoon, a cellulose material such as a cardboard, and a bowl.
- Add the sugar and boric acid to the bowl in equal ratios.
- Stir well with the spoon to make a consistent mixture.
- Pour the mixture evenly over the cellulose material (cardboard).
- Place the cardboard in the location infested with termites.
As always, make sure children and pets stay away from the place where you put the cardboard.

Spectracide Terminate bait system
This termite bait system has sulfluramid as the active ingredient. Sulfluramid stops the termite's body from converting food into energy, and termites need energy for all their activities; the worker termites, for instance, need energy to fetch food for the colony. If they cannot do that, the whole colony starves to death. Like most termite baiting stations, the active ingredient in Spectracide is a slow-acting chemical, giving the poison time to spread to numerous termites.

Super way termite bait stations
This bait station is easy to install and you do not need to call a professional to assist you in any way. Once you purchase it from the nearest store, install it in the location where you have spotted the termites.

Orange oil bait
Spraying orange oil on a cardboard and then placing the cardboard on the area infested with termites is another easy way to trap and kill termites.

Termite Bait Stations/Systems Reviews

Nemesis termite monitor bait system
This bait system works by stopping the chitin-rebuilding function of the termite's body, which as a result causes the termite to die.
The system is made using special wooden cartridges which contain a poison (chlorfluazuron). The advantages of this baiting system over other systems are that it has low toxicity, making it safe for use around people; it is easy to service, since all you need to do is replace the wooden cartridges with new ones; and it effectively defends your house, because the termites focus on the baits. Its shortcomings are that it is slightly more expensive than the other baiting systems, it may require you to involve a professional, and it requires regular inspection.

Green termite bait systems
This bait system is also very effective and is usually placed underground. When termites go out searching for food, they come across the green termite bait and stop by. With this system you are in a position to see exactly what is going on underground, as it is structured to provide a viewing window. As soon as you see termites in the bait system, you add a small amount of termiticide, which the termites then carry to the rest of the colony. Based on information collected from thousands of homes, this baiting system kills termites with a success rate of more than 99%.

Sentricon bait system
Created by Dow AgroSciences, the Sentricon bait system is ranked among the most effective and most widely used termite bait systems. This is due to its excellence in performing the major roles of a baiting system:
- Bait delivery
- Continued monitoring
The Sentricon baiting system contains noviflumuron, a slow-acting chemical, which explains why termites exposed to this bait stop growing and ultimately die. Following its success in termite elimination, hundreds of thousands of baiting stations baited with Sentricon have been set up.

Hexaflumuron termite bait
This is a termite bait system made using hexaflumuron, which prevents the termite from shedding its exoskeleton and so brings its growth to an end. The most common termite bait containing hexaflumuron is Shatter termite bait. When termites feed on Shatter, it takes about four weeks before the entire colony is brought to an end. This bait cannot be reused, since the termites that have fed on it leave pheromones that deter other termites from feeding at the station.

Spectracide Terminate is a do-it-yourself baiting system which has been found to be very effective in killing termites. Hexaflumuron is its active ingredient and, as discussed earlier in the article, it hinders further growth of termites; the bait also contains cellulose laced with sulfluramid. This bait system has been offered on the market since the early 90s, and millions of people continue to buy it today thanks to its effectiveness. Subterranean termites are the ones mostly trapped with this baiting system. Once they have eaten most of the stake making up the system, a spring-loaded orange flag pops up as an indicator that the bait needs replacement.

Terro termite killer
This termite killer is made up of very effective insecticides. It consists of water, cypermethrin (a substance which affects the termites' nerve action), tetramethrin (which acts on the nerves of crawling termites), piperonyl butoxide (which potentiates the effect of tetramethrin) and some aroma additives. Several benefits come with using Terro termite killer to eliminate termites:
- It comes in different forms, such as sprays, gels, powders, traps, crayons, ointments and concentrates, so you can choose the form most applicable for you.
- It has a long-lasting effect.
Upon application, its effects can go on for several weeks.
- It kills termites over a wide area in just a few hours.
The major disadvantage of this method of termite elimination is that, if you choose the sprays, they may cause respiratory problems, because some of the insecticides used are harmful to human health.

Termite Bait Stations vs Liquid Treatment

Discussions on whether termite bait stations or liquid treatments are better suggest that liquid treatments win. Although the initial cost of setting up a termite baiting station is small, the amount of money you will spend repairing and maintaining the bait station in the long run may exceed your budget. A liquid barrier (such as Termidor), on the other hand, is applied once and kills the termites without requiring further inspection and maintenance: once you have applied the liquid treatment, you simply sit back and watch it kill all the termites. Looking at effectiveness, bait stations are not 100% effective and require the backup of other ways of eliminating termites, while liquid treatments come very close to 100% success. Although the initial cost of applying liquid treatments is high, you will find that they are more economical and efficient in the long run.
Giuseppe Maria Crespi (March 14, 1665 – July 16, 1747), nicknamed Lo Spagnuolo ("The Spaniard"), was an Italian late Baroque painter of the Bolognese School. His eclectic output includes religious paintings and portraits, but he is now most famous for his genre paintings.

Giuseppe Maria Crespi
Died: 16 July 1747 (aged 82)
Known for: painting, genre works

Giuseppe Crespi, together with Giambattista Pittoni, Giovan Battista Tiepolo, Giovan Battista Piazzetta, Canaletto and Francesco Guardi, is counted among the traditional great Old Master painters of that period. Crespi was born in Bologna to Girolamo Crespi and Isabella Cospi. His mother was a distant relation of the noble Cospi family, which had ties to the Florentine House of Medici. He was nicknamed "the Spanish One" (Lo Spagnuolo) because of his habit of wearing tight clothes characteristic of Spanish fashion of the time. By the age of 12 he was apprenticed to Angelo Michele Toni (1640–1708), and from the ages of 15 to 18 he worked under the Bolognese Domenico Maria Canuti. The Roman painter Carlo Maratti, on a visit to Bologna, is said to have invited Crespi to work in Rome, but Crespi declined. Maratti's friend, the Bolognese Carlo Cignani, invited Crespi in 1681–82 to join an Accademia del Nudo for the purpose of studying drawing, and he remained in that studio until 1686, when Cignani relocated to Forlì and his studio was taken over by Canuti's most prominent pupil, Giovanni Antonio Burrini. From this time on, Crespi worked independently of other artists. His main biographer, Giampietro Zanotti, said of Crespi: "(He) never again wanted for money, and he would make the stories and caprices that came into his imagination. Very often also he painted common things, representing the lowest occupations, and people who, born poor, must sustain themselves in serving the requirements of wealthy citizens". Thus it was for Crespi himself, as he began a career servicing wealthy patrons with artwork.
He is said to have had a camera optica in his house for painting. By the 1690s he had completed various altarpieces, including a Temptation of Saint Anthony commissioned by Count Carlo Cesare Malvasia, now in San Niccolò degli Albari. He journeyed to Venice but, surprisingly, never to Rome. Bearing his large religious canvas of the Massacre of the Innocents and a note from Count Vincenzo Rannuzi Cospi as an introduction, Crespi fled in the middle of the night to Florence in 1708, and gained the patronage of the Grand Duke Ferdinand III de' Medici. He had been forced to flee Bologna with the canvas, which, while intended for the Duke, had been fancied by a local priest, Don Carlo Silva, for himself. The events surrounding this episode became the source of much litigation, in which Crespi, at least for the next five years, found the Duke a firm protector. An eclectic artist, Crespi was a portrait painter and a brilliant caricaturist, and was also known for his etchings after Rembrandt and Salvator Rosa. He could be said to have painted a number of masterpieces in different styles. He painted few frescoes, in part because he refused to paint for quadraturists, though in all likelihood his style would not have matched the requirements of a medium then often used for grandiloquent scenography. He was not universally appreciated: Lanzi quotes Mengs as lamenting that the Bolognese school should close with the capricious Crespi. Lanzi himself describes Crespi as allowing his "turn for novelty at length to lead his fine genius astray". He found that Crespi included caricature in even scriptural or heroic subjects, cramped his figures, "fell in to mannerism", and painted with few colors and few brushstrokes, "employed indeed with judgement but too superficial and without strength of body".

The Seven Sacraments

One celebrated series of canvases, the Seven Sacraments, was painted around 1712 and now hangs in the Gemäldegalerie Alte Meister, Dresden.
It was originally completed for Cardinal Pietro Ottoboni in Rome, and upon his death passed to the Elector of Saxony. These imposing works are painted with a loose brushstroke, but still maintain a sober piety. Making no use of hieratic symbols such as saints and putti, they utilize commonplace folk to illustrate sacramental activity.

The Seven Sacraments

Crespi and the genre style

Crespi is best known today as one of the main proponents of baroque genre painting in Italy. Italians, until the 17th century, had paid little attention to such themes, concentrating mainly on grander images from religion, mythology and history, as well as portraiture of the mighty. In this they differed from Northern Europeans, specifically Dutch painters, who had a strong tradition in the depiction of everyday activities. There were exceptions: the Bolognese Baroque titan of fresco, Annibale Carracci, had painted pastoral landscapes and depictions of homely tradespeople such as butchers. Before him, Bartolomeo Passerotti and the Cremonese Vincenzo Campi had dallied in genre subjects. In this tradition, Crespi also followed the precedents set forth by the Bamboccianti, mainly Dutch genre painters active in Rome. Subsequently, this tradition would also be upheld by Piazzetta, Pietro Longhi, Giacomo Ceruti and Giandomenico Tiepolo, to name a few. He painted many kitchen scenes and other domestic subjects. The painting of The Flea (1709–10) depicts a young woman readying for sleep and supposedly grooming for a nagging pest on her person. The environs are squalid (nearby are a vase with a few flowers and a cheap bead necklace dangling on the wall), but she is sheltered in a tender womb of light. She is not a Botticellian beauty, but a mortal, her lapdog asleep on the bed-sheets. In another genre scene, Crespi captures the anger of a woman at a man publicly urinating on a wall, with a picaresque cat also objecting to the man's indiscretion.
Later works and pupils

True to his eclecticism is the naturalistic St John Nepomuk Confessing the Queen of Bohemia, made late in Crespi's life. In this painting, much is said by partially shielded faces. His Resurrection of Christ is a dramatic arrangement in dynamic perspectives, somewhat influenced by Annibale Carracci's altarpiece of the same subject. While many came to work in the studio Crespi established after Cignani's departure, few became notable. Antonio Gionima was moderately successful. Others included Giovanni Francesco Braccioli, Giacomo Pavia, Giovanni Morini, Pier Guariente, Felice Giusti and his brother Jacopo Giusti, and Cristoforo Terzi. He may also have influenced Giovanni Domenico Ferretti. While the Venetian Giovanni Battista Piazzetta claimed to have studied under Crespi, the documentation for this is nonexistent. Two of Crespi's sons, Antonio (1712–1781) and Luigi (1708–1779), became painters. According to their account, Crespi may have used a camera obscura to aid in the depiction of outdoor scenes in his later years. After his wife's death, he became reclusive, rarely leaving the house except to go to daily mass.

Partial anthology of works

Woman Tuning a Lute, about 1700–05 (MFA, Boston, 69.958)
Woman with Pandurina, Strasbourg Museum of Fine Arts
Count Fulvio Grati, 1700–1720, Thyssen-Bornemisza Museum
Cardinal Prospero Lambertini, 1740, Palazzo d'Accursio
Ecstasy of St Margaret of Cortona, 1701, Diocesan Museum (Cortona)
- The Marriage at Cana, Art Institute of Chicago
- Holy Family (1688), Parish Church of Bergantino
- Madonna del Carmine
- Temptation of St. Anthony (1690), San Niccolò degli Albari, Bologna
- Aeneas, the Sibyl and Charon, Kunsthistorisches Museum, Vienna
- Hecuba Blinding Polymestor, Musées Royaux des Beaux-Arts, Brussels
- Tarquin and Lucretia, National Gallery, Washington D.C.
- The Triumph of Hercules, The Four Seasons, The Three Fates, Neptune and Diana, frescoes of Palazzo Pepoli Campogrande, Bologna
- The Finding of Moses & David and Abigail, Museo di Palazzo Venezia, Rome
- Love Triumphant (L'Ingegno), Musée des Beaux-Arts de Strasbourg
- Chiron Teaches Achilles (1700s), Kunsthistorisches Museum, Vienna, Austria
- The Ecstasy of Saint Margaret of Cortona (1701), Duomo, Bologna
- Massacre of the Innocents (1706), Uffizi, Florence, Pinacoteca Nazionale, Bologna, and National Gallery, Dublin
- The Fair at Poggio a Caiano (1709), Uffizi
- The Nurture of Jupiter (1729), Kimbell Art Museum, Fort Worth
- Singer at Spinet with an Admirer (1730s), Uffizi
- Village Fair with Dentist (1715–20), Pinacoteca di Brera, Milan
- Series of The Seven Sacraments (1712), Gemäldegalerie, Dresden
- Meeting between James Stuart and the Prince Albani, Národní Galerie, Prague
- Annunciation with Saints (1722), Sarzana Cathedral
- The Crucifixion, Pinacoteca di Brera, Milan
- Self-portrait (1725–1730), Pinacoteca di Brera, Milan
- The Assumption of the Virgin (1730), Archivio Arcivescovile, Lucca
- Two altarpieces for the church of the Gesù, Ferrara (1728–1729)
- Four altarpieces for the church of the Benedictine Monastery of San Paolo d'Argon, province of Bergamo (1728–1729)
- Martyrdom of Saint John the Evangelist
- Joshua Stopping the Sun (1737), Colleoni Chapel, Bergamo
- Martyrdom of Saint Peter of Arbuès (1737), Collegio di Spagna, Bologna
- Self-portrait, Pinacoteca Nazionale, Bologna
- The Family of Zanobio Troni, Pinacoteca Nazionale, Bologna
- The Lute Player, Museum of Fine Arts, Boston
- The Hunter, Pinacoteca Nazionale, Bologna
- The Messenger, Staatliche Kunsthalle, Karlsruhe
- Courtyard Scene, Pinacoteca Nazionale, Bologna
- Searching for Fleas (Louvre); variants in the Uffizi, Museo Nazionale di San Matteo, Pisa, and Museo di Capodimonte, Naples
- The Woman Washing Dishes, Galleria degli Uffizi
- A Peasant Family with Boys Playing, London
-
Peasants Playing Musical Instruments, London
- Peasants with Donkeys, London
- Importunate Lovers, Hermitage
- Peasant Flirtation, London
- Menghina from the Garden meets Cacasenno
- Music Library, Pinacoteca Nazionale, Bologna
- Cupids at Play, El Paso Museum of Art
- St John Nepomuk Hears Confession from the Queen of Bohemia, Galleria Sabauda, Turin
- Man With Helmet, Nelson-Atkins Museum of Art, Kansas City, Missouri

References
- Lanzi, p. 162.
- "Artist Info". www.nga.gov. Retrieved 2017-08-18.
- Lanzi, pp. 162–163.
- "Giuseppe Maria Crespi | Italian painter". Encyclopædia Britannica. Retrieved 2017-08-18.
- Guida di Pistoia per gli amanti delle belle arti con notizie, by Francesco Tolomei, 1821, pp. 177–178.
- Hobbes, 1849, p. 68.
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Crespi, Giuseppe Maria". Encyclopædia Britannica. 9 (11th ed.). Cambridge University Press. p. 412.
- Hobbes, James R. (1849). Picture Collector's Manual Adapted to the Professional Man, and the Amateur. T&W Boone; digitized by Google Books. p. 68.
- Spike, John T. (1986). Giuseppe Maria Crespi and the Emergence of Genre Painting in Italy. Fort Worth: Kimbell Art Museum. pp. 14–35.
- Lanzi, Luigi (1847). Thomas Roscoe (ed.). The History of Painting in Italy; from the Period of the Revival of the Arts to the Eighteenth Century. Henry G. Bohn; digitized by Google Books from the Oxford University copy on January 31, 2007. pp. 162–165.
- Domenico Sedini, Giuseppe Maria Crespi, online catalogue Artgate by Fondazione Cariplo, 2010, CC BY-SA.

Media related to Paintings by Giuseppe Maria Crespi at Wikimedia Commons
Microcontroller Interfacing – Advanced
Bitahwa Bindu

Microcontrollers have become very useful in embedded design as they can easily communicate with other devices, such as sensors, switches, LCD displays, keypads, motors and even other microcontrollers. A microcontroller is basically used as the brain or intelligent processing unit to control other devices connected (interfaced) to it in an embedded system, just like a PLC in industrial automation. To interface a device to a microcontroller simply means to connect a device to a microcontroller. This article will make it easier for anybody with very limited experience in electronics to learn how to interface advanced components to a PIC microcontroller: graphical LCD, quad 7-segment display, SD card, DC motor, GSM modem, GPS module, real-time clock and so on. Many interface methods have been developed over the years to solve the complex problem of balancing circuit design criteria such as cost, size, weight, power consumption, reliability and availability.

1. Multiplexing of 7-Segment Displays

A 7-segment display is the earliest type of electronic display; it uses 7 LED bars arranged so that they can show the numbers 0–9 (actually 8 segments if you count the decimal point, but the generic name adopted is 7-segment display). These devices are commonly used in digital clocks, electronic meters, counters, signalling and other equipment for displaying numeric-only data.

Figure 2: Connecting a 2-digit 7-segment display to a PIC microcontroller

A 1-digit 7-segment display can only show numbers from 0 to 9, a 2-digit display can show numbers between 0 and 99, a 3-digit between 0 and 999, a 4-digit between 0 and 9999, and so on. Figure 2 shows a 2-digit 7-segment display connected to a PIC microcontroller and figure 3 shows 2-digit and 4-digit seven-segment displays.
Figure 3: 2-digit and 4-digit seven-segment displays

When more digits are required, we need a better technique for connecting several 7-segment displays to a microcontroller, because connecting each one like a 1-digit display would quickly run out of input/output pins: a 1-digit 7-segment display requires 7 output pins, a 2-digit display would require 14 and a 4-digit display would require 28. This is definitely not an efficient way of using a microcontroller.

The widely used technique is to multiplex the digits to save input/output pins. All the digits share the same microcontroller pins, plus a few more pins to connect the digits to ground or to positive power, depending on whether common-cathode or common-anode segments are used. With multiplexing, a 2-digit display requires only 9 pins, a 3-digit display 10 pins, a 4-digit display 11 pins, and so on. Another advantage of multiplexing 7-segment LEDs is a considerable reduction in power consumption. In multiplexed applications, all the digit segments are driven in parallel at the same time, but only the common pin (e.g. anode or cathode) of the required digit is enabled. The digits are enabled and disabled so fast that it gives the impression to the eye that both displays are ON at the same time, as the human eye cannot tell the difference at high speed. This technique is based on the principle of persistence of vision: if the frames change at a rate of 25 (or more) frames per second, the human eye cannot detect the change.

For example, say we want to display the number '67' on a 2-digit common-cathode display. The steps are given below:
- Send data to display '6' on both digits.
- Enable the left digit by grounding its cathode pin (send a high to the base of its transistor) and disable the right digit.
- Wait for a while (a short delay).
- Send data to display '7' on both digits.
- Enable the right digit by grounding its cathode pin (send a high to the base of its transistor) and disable the left digit.
- Wait for a while (a short delay).
- Go back to step 1.

By doing this rapidly, the eye won't notice any fluctuation. The common pins of each digit are usually controlled using transistor switches; almost any NPN transistor, such as the BC108 type, could be used for this purpose. A 1 kΩ resistor can be used to limit the base current to about 4 mA, enough to saturate the transistor. Figure 2 shows how a 2-digit display can be connected to a microcontroller using NPN transistors to control the digits' common lines. Notice that setting the base of a transistor to logic HIGH turns the transistor ON and hence enables the common-cathode pin connected to it. To learn more, read these articles:

2. Interfacing a Graphical LCD Display

Graphical LCD displays (GLCDs) are commonly used in applications where we want to display not only basic characters but graphical data as well, such as a bar chart or an x-y line graph, and shapes like rectangles, circles and so on. GLCDs are also used in many consumer applications, such as mobile phones and GPS systems, but also in industrial automation and control, where various plant characteristics can easily be monitored or changed, especially if a touch-screen facility is used. There are many types of graphical LCD screens and controllers. For small applications, the 128 x 64 pixel monochrome GLCD with the KS0108 controller is one of the most commonly used displays. For larger display requirements, the 240 x 128 pixel monochrome GLCD with the T6963 (or RA6963) controller could be selected. For colour graphical display applications, TFT displays currently seem to be the best choice. In this section we look at how the standard 128 x 64 GLCD can be interfaced with a PIC microcontroller. The display is connected to the PIC microcontroller through a 20-pin connector, as shown in figure 4 below.
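The digit-scanning scheme described above boils down to a segment lookup plus a digit selector. Below is a minimal, hardware-independent C sketch of that core. The segment bit order (segment A on bit 0 through segment G on bit 6, common-cathode) is an assumption to be adjusted to your wiring, and the timer interrupt, port writes and digit-transistor control are target-specific, so they are not shown.

```c
#include <stdint.h>

/* Segment patterns for a common-cathode display, bit order 0bGFEDCBA
   (bit 0 = segment A ... bit 6 = segment G). This mapping is an
   assumption -- edit the table to match your board's wiring. */
static const uint8_t SEG_PATTERN[10] = {
    0x3F, /* 0 */ 0x06, /* 1 */ 0x5B, /* 2 */ 0x4F, /* 3 */ 0x66, /* 4 */
    0x6D, /* 5 */ 0x7D, /* 6 */ 0x07, /* 7 */ 0x7F, /* 8 */ 0x6F  /* 9 */
};

/* Return the segment pattern for one digit position of 'value'.
   position 0 is the rightmost (ones) digit, position 1 the tens digit,
   and so on. A fast timer tick would call this with position cycling
   0, 1, 0, 1, ..., write the result to the segment port and enable the
   matching digit transistor. */
uint8_t mux_pattern(uint16_t value, uint8_t position)
{
    while (position--)
        value /= 10;              /* shift the wanted digit into the ones place */
    return SEG_PATTERN[value % 10];
}
```

For the '67' example in the text, position 1 yields the pattern for '6' and position 0 the pattern for '7'; blanking of leading zeros is left out to keep the sketch short.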
Figure 4: The 128 x 64 pixel monochrome GLCD with the KS0108 controller

3. Interfacing a GSM/GPRS Modem with a PIC Microcontroller

A GSM modem is a wireless modem that works with a GSM wireless network. GSM stands for Global System for Mobile communications; this architecture is used for mobile communication in most countries in the world. A wireless modem acts basically like the traditional dial-up modem; the main difference is that a dial-up modem sends and receives data through a fixed telephone line, while a wireless modem sends and receives data through radio waves. Besides the dial-up connection, GSM modems can also be used for sending and receiving SMS, and some support GPRS technology for data transmission.

It is very easy to interface a GSM modem to a PIC microcontroller, as most GSM modems have a serial interface. The USART serial pins RX and TX of the microcontroller are connected to the TXD and RXD pins of the GSM modem. Some GSM modems have PCMCIA Type II or USB interfaces. Figure 5 below shows a block diagram of a GSM module connected to the USART module of a PIC microcontroller.

Figure 5: GSM module connected to a PIC Microcontroller

Depending on the type of serial port on the microcontroller hardware, a level-translator circuit may be needed to make the system work. The microcontroller USART voltage level is 5 V in most cases, while most GSM/GPRS modems use a USART voltage level of about 2.8–3 V, so a voltage-level translator circuit is needed. A simple diode/resistor network can do the job, as shown in figure 6 below. Three diodes in series drop the voltage of the microcontroller's TX pin down to about 2.9 V (each diode drops 0.7 V), which is in the acceptable range for the RXD pin of the GSM module. Similarly, a diode, a resistor and a 5 V source are used to raise the voltage of the TXD pin of the GSM module to 5 V, which is a logic high for the RX pin of the PIC microcontroller. There are GSM boards on the market that one can use to quickly interface to a PIC.
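To give a flavour of driving such a modem over the USART, the sketch below composes the standard AT+CMGS command that starts an SMS in text mode (the modem is first put in text mode with AT+CMGF=1). The UART write and response-parsing routines are hardware-specific and omitted; the message body itself is sent after the modem answers with a '>' prompt and is terminated with Ctrl-Z (0x1A).

```c
#include <stdio.h>
#include <string.h>

/* Build the AT command that opens an SMS in text mode, e.g.
   AT+CMGS="+27831234567"<CR>. Returns the command length, or -1 if
   the buffer is too small. The command bytes would then be pushed out
   of the PIC's USART one at a time (routine not shown). */
int build_cmgs(char *buf, size_t len, const char *number)
{
    int n = snprintf(buf, len, "AT+CMGS=\"%s\"\r", number);
    return (n > 0 && (size_t)n < len) ? n : -1;
}
```

Keeping the command construction separate from the UART driver like this makes it easy to test on a PC before flashing the PIC.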
The SmartGM862 Board from MikroElektronika is one example of many such boards. The SmartGM862 is a full-featured development tool for the Telit GM862-QUAD GSM/GPRS module or the GM862-GPS version. It features a GM862 module connector, voltage regulator, antenna holders, speaker and microphone screw terminals and more. A DIP switch is provided for configuring the UART communication lines with the target microcontroller. It can be connected to development boards via an IDC10 connector.

Figure 7: Connecting the SmartGM862 Board to the EasyPIC v7 Development Board

To learn more, read these articles:

4. Interfacing the ENC28J60 Ethernet Controller with a PIC Microcontroller

Ethernet is the leading wired standard for networking, as it enables a very large number of computers, microcontrollers and other computer-based equipment to be connected to one another. With just a network switch, many different devices can easily communicate with one another over Ethernet, allowing different devices and equipment to be accessed remotely; this also provides a cost-effective and reliable means of remote control and monitoring. Most computers nowadays have an Ethernet port, as do many electronic devices. Some microcontrollers have a built-in Ethernet peripheral, like the PIC18F97J60, which has an integrated 10 Mbps Ethernet communications peripheral, but many other microcontrollers don't. When a microcontroller without an integrated Ethernet peripheral is used, Microchip offers a serial Ethernet chip that can easily be used by any microcontroller with an SPI interface to add Ethernet capability to the application. The ENC28J60 is a popular 28-pin stand-alone 10BASE-T Ethernet controller with an SPI serial interface, on-board MAC & PHY and 8 Kbytes of buffer RAM. With a small-footprint package, the ENC28J60 minimizes complexity, board space and cost.
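Per the ENC28J60 datasheet, every SPI transaction begins with a byte holding a 3-bit opcode in the upper bits and a 5-bit control-register address in the lower bits. The helper below sketches how that first command byte is composed; the actual SPI byte exchange depends on the PIC's SSP/MSSP module and is not shown.

```c
#include <stdint.h>

/* Opcodes from the ENC28J60 datasheet's SPI instruction set. RBM, WBM
   and SRC are complete fixed bytes; the others are combined with a
   5-bit register address. */
enum {
    ENC_RCR = 0x00,  /* read control register            */
    ENC_WCR = 0x40,  /* write control register           */
    ENC_BFS = 0x80,  /* bit field set                    */
    ENC_BFC = 0xA0,  /* bit field clear                  */
    ENC_RBM = 0x3A,  /* read buffer memory (fixed byte)  */
    ENC_WBM = 0x7A,  /* write buffer memory (fixed byte) */
    ENC_SRC = 0xFF   /* system reset command             */
};

/* Compose the first SPI byte: opcode in bits 7..5, register address
   in bits 4..0. */
uint8_t enc_cmd(uint8_t opcode, uint8_t reg_addr)
{
    return opcode | (reg_addr & 0x1F);
}
```

A driver would assert CS, clock out enc_cmd(ENC_RCR, addr), read back the register byte, then release CS; packaging the opcode logic separately keeps that transfer loop trivial.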
The interface between the microcontroller and the Ethernet chip is based on the SPI bus protocol: the SI, SO and SCK pins of the Ethernet chip are connected to the SPI pins (SDO, SDI and SCLK) of the microcontroller. The Ethernet controller chip operates at 3.3 V, so if the microcontroller is operated at 5 V, the chip's SO output pin cannot drive the microcontroller input pin without a voltage translator. Figure 8 below shows how the ENC28J60 Ethernet controller can be interfaced to a PIC microcontroller.

Figure 8: ENC28J60 Ethernet Controller Connections

To make the design of Ethernet applications easy, there are ready-made boards that include the ENC28J60 controller, a voltage translation chip and an RJ45 connector. Figure 9 below shows the MikroElektronika Serial Ethernet Board. This small board plugs directly into PORTC of the EasyPIC v7 development board via a 10-way IDC plug, simplifying the development of embedded Ethernet projects. The board is equipped with an ENC28J60 Ethernet controller chip, a 74HCT245 voltage translation chip, three LEDs, a 5 V to 3.3 V voltage regulator and an RJ45 connector with an integrated transformer.

Figure 9: Connecting the Serial Ethernet Board to the EasyPIC v7 development board

To learn more read:

5. Interfacing a DC Motor

A DC motor cannot be driven directly from a microcontroller's pin. DC motors normally require a higher current and voltage than a microcontroller can handle: microcontrollers usually operate from a +5 V or +3.3 V supply, and an I/O pin can provide only up to about 25 mA, which in most cases is not enough for a motor. A typical small DC motor requires a 12 V supply and about 300 mA, which is way beyond what a microcontroller can deliver; however, there are a couple of interfacing techniques that can be used. One solution is to use an H-bridge circuit constructed from four MOSFET transistors, as shown in figure 10 below.
Figure 10: H-Bridge DC Motor Driving circuit

To switch OFF/STOP the motor, a logic '0' should be applied to RB0, RB1, RB2 and RB3. To switch the motor ON clockwise, a logic '1' should be applied to RB0 and RB2 while leaving RB1 and RB3 at logic '0'. To reverse the motor (anticlockwise), RB1 and RB3 should be set high (1) while RB0 and RB2 are set low (0). Because a motor is an inductive load, the back EMF produced when the motor switches OFF could destroy the transistors; the four diodes are used to suppress this back EMF.

Figure 11 shows a motor control circuit using the L293D. One L293D can drive two DC motors; in this example only the first pair of drivers is used to drive one DC motor. The first pair of drivers is enabled by connecting EN1 to logic HIGH. IN1 and IN2 are connected to RB0 and RB1 of the PIC microcontroller respectively, which provide the control signals, and the DC motor is connected to OUT1 and OUT2 of the L293D.

Figure 11: L293D Motor Driving Chip Circuit

To learn more read:

6. Interfacing an SD Card

A memory card (also called a flash memory card) is a solid-state electronic data storage device used for storing digital information. Memory cards are commonly used in many electronic devices, including digital cameras, mobile phones, laptop computers and MP3 players, and also in many applications where a large amount of data has to be stored either once or continuously, as in data loggers. They are small, rewritable and able to retain data without power. The card has nine pins, as shown in figure 12 below, and a write-protect switch to enable/disable writing onto the card.

Figure 12: SD Card pins

A standard SD card can be operated in two modes: SD Bus mode and SPI Bus mode. In SD Bus mode all the pins of the card are used: data is transferred over four data pins (D0–D3), with a clock (CLK) pin and a command (CMD) line. SPI Bus mode uses a chip select (CS) line, the CLK line, and single data-in and data-out lines.
The following pins are used in SPI Bus mode:
- Chip select: Pin 1
- Data in: Pin 2
- Clock: Pin 5
- Data out: Pin 7
- Positive supply: Pin 4
- Ground: Pins 3 and 6

The card operates with a 3.3 V supply voltage, and these are its logic levels:
Maximum logic 0 output voltage, VOL = 0.4125 V
Minimum required logic 1 input voltage, VIH = 2.0625 V
Maximum logic 1 input voltage = 3.6 V
Maximum allowed logic 0 input voltage, VIL = 0.825 V

When the card is connected to a PIC microcontroller, the output voltage of the SD card (at least 2.475 V) is enough to drive the input circuit of the microcontroller, but the typical logic 1 output voltage of a PIC microcontroller pin is 4.3 V, which is too high to apply to the card, whose input voltage should not exceed 3.6 V. As a result, resistors must be used at the inputs of the SD card to lower the input voltage. Figure 13 below shows a typical SD card interface to a PIC microcontroller in SPI mode. In this figure, 2.2 K and 3.3 K resistors are used as a potential divider to lower the SD card input voltage to approximately 2.58 V, as shown below:

SD card input voltage = 4.3 V × 3.3 K / (2.2 K + 3.3 K) ≈ 2.58 V

Figure 13: SD card connected in SPI mode to Port C of PIC Microcontroller.

SD cards can consume up to 100–200 mA while reading or writing. This is a relatively high current, so an appropriate voltage regulator capable of supplying it must be used in the design. The card consumes approximately 150 μA in sleep (it enters sleep mode automatically if it receives no command for 5 ms).

Watch the video Tutorial:

To learn more read these articles:
History Revisionists: What Did the Nazis Do With Their Nation's Historic Statues and Monuments?

History is written by the victors. This is true. The ancient Egyptians were notorious for it. The Babylonians and Persians were too scared to record negative historical events about their rulers. The Romans always added a 'glory of Rome' slant and bias when recording some of their historical events. But in the more common era, how did the Nazis handle their nation's history, and how did they record their own events?

We must first understand HOW the Nazi party came to power. Let's look at the Nazi party of Germany before they began World War 2. But first, we must call them by their actual party name: the National Socialist German Workers Party. They were made official in 1921, and a then-unknown Hitler joined that same year. At their founding, they were actually just called the German Workers Party, but were socialist in principle. Hitler soon emerged as a charismatic public speaker and began attracting new members with speeches blaming Jews and Marxists for Germany's problems. The tactic of deflecting, blaming, and vilifying opponents and opposing opinions. His popularity grew, and by mid-1921 he was the leader of the officially renamed National Socialist German Workers Party. Hitler gave speech after speech in which he stated that unemployment, rampant inflation, hunger and economic stagnation in postwar Germany would continue until there was a total revolution in German life. Most problems could be solved, he explained, if communists and Jews were driven from the nation. His fiery speeches swelled the ranks of the Nazi Party, especially among young, economically disadvantaged Germans. In 1923, Hitler and his followers used armed groups (SA toughs) to provide security at their rallies and to stir up unrest elsewhere in the country. They were then used to stage the Beer Hall Putsch in Munich, a failed takeover of the government in Bavaria, a state in southern Germany.
Hitler had hoped that the "putsch," or coup d'etat, would spark a larger revolution against the national government [kinda like Antifa and CHAZ/CHOP of Seattle and Portland]. On 9 November 1923, Hitler led a demonstration through the streets of Munich, aiming to take control of the war ministry building. Armed police blocked their route, and violence broke out on both sides. Fourteen Nazis and four policemen were killed. Hitler was jailed for that.

Hitler's subsequent trial for treason and imprisonment made him a national figure. A judge sympathetic to the Nazis' nationalist message allowed Hitler and his followers to show open contempt for the Weimar Republic, which they referred to as a "Jew government." After his release from prison, he set about rebuilding the Nazi Party and attempting to gain power through the election process. Never again would he attempt an armed uprising. Instead, the Nazis would use the rights guaranteed by the Weimar Constitution—freedom of the press, the right to assemble, and freedom of speech—to win control of Germany. How much of this is similar to today? A lot.

In 1929, Germany entered a period of severe economic depression and widespread unemployment. The Nazis capitalized on the situation by criticizing the ruling government and began to win elections. They promised to restore Germany's standing in the world and Germans' pride in their nation, as well as end the depression, campaigning with slogans such as "Work, Freedom, and Bread!" In the July 1932 elections, they captured 230 out of 608 seats in the "Reichstag," or German parliament. Big-business circles had begun to finance the Nazi electoral campaigns, and swelling bands of SA toughs increasingly dominated the street fighting with opposition groups that accompanied such campaigns. Are we seeing this today? Yep. The German republic's president, Paul von Hindenburg, named Hitler chancellor of Germany on January 30, 1933.
Hitler used the powers of his office to solidify the Nazis' position in the government during the following months. Hermann Göring's role in particular was very important. He was a minister without portfolio who got to control the police force of Prussia, the largest part of Germany. For the Nazis, this was reason to celebrate their 'national revolution'. Then the arrests and intimidation increased. The government banned the Communist Party. By 15 March, 10,000 communists had been arrested. In order to house all these political prisoners, the first concentration camps were opened. College students and schools encouraged and assisted in the burning of non-German books. Mass censorship of books, writings, speeches, organizations, and anything that disagreed with the Nazi party was imposed [kinda like Twitter, Google, Facebook, YouTube, and Amazon today].

The Nazis tore down statues too. Fragments of a statue of the mounted Polish King Wladyslaw II Jagiello lay on the ground after it was destroyed by Nazi troops following the 1939 invasion of Poland. Nazi troops awaited the order to pull down a bust of Austrian Chancellor Engelbert Dollfuss after the annexation of Austria by Nazi Germany in 1938. The Nazis then ordered the destruction of French statues once they invaded and occupied France; they even forced the French to do it themselves.

Tearing down statues is not just a Nazi thing: the head was torn from a gigantic monument to Russian Tsar Alexander III during its dismantling in central Moscow, soon after the 1917 revolution that led to communist rule in Russia. Even the Taliban destroyed history: the landscape of Bamiyan shows the gap in the rock where Afghanistan's famous giant Buddha stood for centuries before being destroyed by the Taliban in 2001, a move the group said was "in accordance with Islamic law." And Islamic State militants pushed over statues inside a museum in Mosul, northern Iraq, in 2014 or 2015.
The Muslim extremists smashed several ancient treasures in the museum — which the militants deemed "idolatrous" — with sledgehammers and power tools. Sounding familiar today? Yep.

The Nazis started advocating clear messages tailored to a broad range of people and their problems. The propaganda aimed to exploit people's fear of uncertainty and instability. These messages varied from 'Bread and Work', aimed at the working class and the fear of unemployment, to a 'Mother and Child' poster portraying the Nazi ideals regarding women. Jews and Communists also featured heavily in Nazi propaganda as enemies of the German people. Goebbels used a combination of modern media, such as films and radio, and traditional campaigning tools such as posters and newspapers, to reach as many people as possible. It was through this technique that he began to build an image of Hitler as the strong, stable leader that Germany needed to become a great power again.

The elections of March 5, 1933—precipitated by the burning of the Reichstag building only days earlier—gave the Nazi Party 44 percent of the votes, and further unscrupulous tactics on Hitler's part turned the voting balance in the Reichstag in the Nazis' favour. On March 23, 1933, the Reichstag passed the Enabling Act, which "enabled" Hitler's government to issue decrees independently of the Reichstag and the presidency; Hitler in effect assumed dictatorial powers. On July 14, 1933, his government declared the Nazi Party to be the only political party in Germany. The idea that their party is most righteous and most justified, a party of absolute self-righteousness. An idea that is very present in far-left movements.

Hitler also reduced the authority and influence of the regional German police departments and instituted his own police force to impose his ideals on the people: the 'brown shirts', out of which the SS later emerged. Makes me wonder what the left will put in place if they defund the police?
Nazi Party membership became mandatory for all higher civil servants and bureaucrats, and the gauleiters became powerful figures in the state governments. This is sadly true in America: in some areas, the only way to get nominated and possibly elected is by joining the Democrat or Republican party... it is essentially mandatory, or you don't meet the rules or qualifications and don't have a chance of election.

Labor unions that did not adhere to Nazism were disbanded and dismantled. The loyal Nazi unions were then used to influence the workers and prevent any sort of strikes or opposition to the goals of the party. The Nazi party then organized and supported boycotts of Jewish products and businesses. Kinda like religious and gun-rights businesses and organizations today. Its vast and complex hierarchy was structured like a pyramid, with party-controlled mass organizations for youth, women, workers, unions, media, press, the arts, and other groups at the bottom, party members and officials in the middle, and Hitler and his closest associates at the top wielding undisputed authority. Very similar to socialist leftist organizations now.

Nazi ideology: the belief in race science and the superiority of the so-called Aryan race (or "German blood"). For the Nazis, so-called "German blood" determined whether one was considered a citizen. The Nazis believed that citizenship should not only bestow on a person certain rights (such as voting, running for office, or owning a newspaper); it also came with the guarantee of a job, food, and land on which to live. Those without "German blood" were not citizens and therefore should be deprived of these rights and benefits. Now, think for a second: how do we determine if someone is "black" or "white"? It is primarily assumed from their skin color. And "blackness" and "whiteness" are determined by some culturally superior academic who studies "race science," who then pushes that down onto the populace.
Then, certain 'rights' or special treatment is bestowed and granted on someone who satisfies that superior academic consensus on "blackness." This in turn has a negative effect on someone on whom that superior academic imposes "whiteness," and racial prejudice and "positive" discrimination are "justified." In Germany, it was positive prejudice for "German blood" and negative prejudice for Jews; in America, it is positive prejudice for "blackness" and negative prejudice for "whiteness." The logical similarities are worrisome. Why are Americans now acting like Nazis? Don't worry, the same can be said about communists. They were also masters of history revision, censorship, propaganda, and tearing down statues; but since a majority of young Americans have a decent to favorable view of communism, hopefully the similarities to Nazism will get them thinking.
The questions that arise between scientists and theologians should help them to grow together in their understanding

In discussions of science and faith, one often gets the impression that either science or Scripture can be believed—not both. In the secular world, science is by default considered the true source of knowledge. The Bible, if consulted at all, is seen as useful only as a source of spiritual insight—as long as it presents no conflict with current scientific consensus. Are the Bible and science truly in conflict? How can a believer who is also a scientist relate to this issue?

First, a definition of science. For the purposes of this article, the word science refers to a systematic process that attempts to explain phenomena in terms of the physical mechanisms that cause them. Other definitions are possible, but this definition will suffice for this discussion. In a similar vein, a miracle is an event that cannot be explained solely by naturalistic scientific means.

Experimental and Historical Sciences

In discussing science and faith, it is useful to distinguish between experimental (or empirical) science on the one hand and historical science on the other. Sciences that are mainly experimental (e.g., chemistry, physics, anatomy, ecology) involve the manipulation of physical conditions to isolate and identify causal factors that explain an event. Sciences that are mainly historical (e.g., archeology, paleontology) study the results of some past event and attempt to explain what occurred to produce the observed evidence. Most sciences include both empirical and historical aspects. Only the empirical aspects, however, are open for experimentation. The historical parts are not. Normally, there is no conflict between Scripture and experimental science. Difficulties arise when attempting to understand historical events for which the Bible provides a supernatural explanation, and a scientist attempts to arrive at a naturalistic explanation.
Different Types of Bible Passages

Before considering further the ways in which science and Scripture seem difficult to reconcile, note that there are many areas where there is no conflict. For example, although the Bible is not primarily a science text, it nevertheless describes many events of a scientific nature. Various Bible authors mention mammals, birds, and plants. Aspects of anatomy, physiology, and behavior—plant, animal, and human—are mentioned by Bible authors. The Bible describes the creation of life forms, implying that God designed and fabricated the living systems available for study today. Science today confirms the appearance of design at all levels of complexity, although considerable disagreement exists over the cause of the design.

Some passages in the Bible were written in symbolic terms or in figures of speech. Thus, one might mistakenly interpret an expression as literal when it is meant to be figurative. For example, Habakkuk 3:3 says that God came from Teman. Perhaps some people would conclude from that text that God lives in Teman, but most readers consider this to be a figure of speech. Here, God is represented as coming from the south, or Sinai, where the Ten Commandments were given. Other passages may be poetic, illustrative, or expressions of common understanding, not written to convey scientific explanations. On the other hand, many passages of Scripture are clearly intended as historical narrative. These include passages such as Genesis 1–11, the Gospel accounts of Jesus' miracles, and His virgin birth, death, and resurrection. The clearly expository prose does not support attempts to "spiritualize" them or otherwise categorize them as figurative, poetic, etc.

Natural and Supernatural Explanations

There are two possible explanations of phenomena (or events): natural or supernatural. The two explanatory systems may be in conflict or may complement each other.
As the Bible primarily describes God’s activities in the course of human history, it almost always proffers supernatural explanations. As mentioned above, explanations of past events are inherently not directly testable by scientific methods. For a given phenomenon that the Bible describes as supernatural, a materialistic (or naturalistic) scientist may give a naturalistic explanation. In some instances, both explanations may apply. In other words, God may well have used ordinary physical processes in a supernatural way to accomplish His will. Many of the great scientists of the past were believers and saw no conflict between the Bible and science. In the 17th century, scientists were divided into two camps in regard to religion and science (or philosophy, as it was then called). Francis Bacon and Galileo Galilei belonged to the “separatist” group who felt that the Book of Scripture and the Book of Nature were best kept separate, while recognizing that both had the same Author. In the past half-century, American scientist Stephen Gould has extended the idea of separation with his NOMA (Nonoverlapping Magisteria) proposal, which declared that science and religion occupy separate realms that do not interact.1 According to Gould, religion deals with spiritual and ethical ideas, while science deals with the real world. Accepting NOMA thus seems to necessitate rejection of Scripture as the inspired Word of God. The other group of 17th-century scientists, the Pansophists, viewed science and Scripture as being ultimately in harmony. Thus, both groups arrived at a “no conflict” answer—the separatists because they compartmentalized the fields of study, and the Pansophists because they saw science as reinforcing Scripture. Both groups saw God as author of Scripture and Creator of the world. Any apparent conflict lay in a disagreement between interpretations of the Bible and/or interpretations of science. 
We might take the same approach today, with the additional caveat that not all of our questions will be answered. Since we are in a sinful world and have only incomplete understanding of science and Scripture, we will not arrive at complete answers to all questions.

Areas of Conflict

Conflict is especially prominent in the study of origins, which is a historical question, not an experimental one. Those with a naturalistic worldview prefer evolutionary theory because it posits explanations in terms of purely physical mechanisms. Those with a worldview based on biblical revelation prefer creation theory because it accepts biblical accounts of supernatural activity in the creation and maintenance of the natural world. Both views appeal to evidence. Because that evidence is so incomplete and open to differing explanations, the scientist's worldview comes to play a major role in interpretation; conflict is very evident.

One of the best-known examples is found with regard to Galileo Galilei (1564-1642), considered by many to be the father of modern observational astronomy and modern physics, and ultimately the individual most responsible for the birth of modern science. In the late 16th century, leaders of the Roman Catholic Church endorsed the idea that the Earth was the center of the universe. While a pious believer, Galileo was nevertheless a scientist. He advocated Copernicus' idea that the Earth revolved around the Sun. Since the church considered itself the supreme authority, Galileo was deemed a heretic. In this example, it is important to note that Galileo's problem was not strictly a Bible/science conflict; it reflected a difference between religious leaders and some scientists over how to interpret the Bible and scientific data. In the eyes of most materialist scientists, conflict has always existed between secular scientists and those who hold a theistic worldview. Books have been written on the topic of the so-called war between science and religion.
Unfortunately, overzealous Christians share in the responsibility for this conflict. Serious thinkers were often alienated by superstition, suppression, and coercion (associated with the dominant church), and this led to distrust of the Bible itself.

The Bible chronicles the occurrence of numerous miracles, which are almost invariably interpreted differently by two groups. A person not persuaded of the Bible's divine inspiration—a "non-believer" for the sake of this article—concludes that the miracle did not in fact occur and that the biblical account is fallacious. The non-believer arrives at one of the following conclusions: (1) the writer thought it happened the way he wrote it but was wrong; (2) he knew it was wrong but was trying to fool his audience; or (3) he wanted to make a point and merely told an illustrative story to do so. In any of these cases, the biblical report is regarded as unreliable, or at the least, not to be taken literally. In contrast, the person who accepts the Bible as divinely inspired—a "believer" for the purpose of this article—accepts the miracle by faith. Because the occurrence was placed in the Bible, and the Bible is God's Word, the believer accepts that God used His power to cause the miracle.

Miracles With No Available Physical Evidence

But what about miracles for which there is no physical evidence? An example included by the Gospel writers is Jesus walking on the water (Matt. 14:25-32). Skeptics might suggest that Jesus may have known the location of rocks just under the surface so that He could walk from land to the boat, thus appearing to walk on water. Peter, not knowing the location of these rocks, lost his footing and had to be rescued. Believers may rightfully regard such explanations as strained, but since no direct physical evidence is available today, no tests may be conducted. Thus, the story is accepted or rejected based on personal presuppositions.
A second example is Jairus' daughter, a young girl who had died, whom Jesus brings back to life (Luke 8:49-56). The non-believer may observe that Jesus Himself declared that the girl was only asleep (Matt. 9:24), and that He merely woke her. Matthew and Luke's reports are thus discounted as wrong. There is no direct physical evidence to know for sure whether the girl was in fact dead or not. One's response to the account will depend on one's confidence in the reliability of Scripture.

Miracles With Observable Physical Effects

Miracles for which physical evidence does exist today seem to present more problematic issues. At times, it appears that scientific evidence strongly disagrees with the most careful interpretation of Scripture. These are issues that may be called "No conflict, but . . ." issues. The belief is that the Bible and science are not in conflict. Nevertheless, they do appear to be so. To resolve these issues, evidence must be very carefully evaluated, as it can be interpreted in many different ways.

According to a believer, the origin of life on Earth is an example of a miraculous event in which the Bible and science are not in conflict. For more than half a century, numerous experiments have been conducted in an attempt to produce life from non-living material via naturalistic means. Thus far, these experiments have failed to produce empirical evidence for the spontaneous origin of life. Therefore believers feel this is consistent with the biblical narration that life originated through supernatural activity. Non-believers would not be convinced—the absence of evidence is not considered good evidence. The fact that organic molecules have been made from inorganic gases is taken by secular scientists as evidence that spontaneous generation of a living cell could occur, and therefore there is conflict in their minds. The area where the “No conflict, but . .
.” questions are perhaps the most vexing is the amount of time required for accumulation of the fossil-bearing sediments in the Earth’s crust. There seems to be a conflict between the relatively short time implied in the Bible and the long time inferred by science. Ice cores offer another example. In places on the world’s surface like Greenland, a thick layer of ice has formed. When the ice is drilled into and a core is pulled out, there are layers like rings in a tree. Some ice cores may contain 160,000 layers,2 the lower ones of which have been identified by chemical means. Since the layers are presumably laid down one layer each year, this presents a conflict with the Bible’s timetable. Of course there are no dates in the Bible, but most conservative biblical scholars have used genealogies mentioned in the text to conclude that not much more than 10,000 years are represented by biblical history. Many other examples can be given of conventional dating techniques that suggest the Earth is much older than 10,000 years. Many Bible-believing scientists see no conflict in old dates for rocks. God certainly could have created the rocks of the Earth many millions of years ago and then organized the Earth’s crust during a more recent Creation week. However there are many examples of fossils found in rocks dated by standard techniques as much older than 10,000 years. Even considering these problems, there is evidence that the last chapter in age dating has not yet been written. In some cases, new scientific evidence may cast doubt on current conventional age dating. For example, soft tissue was recently discovered inside fossil dinosaur bones thought to be about 67 million years old.3 No one has an explanation for how soft tissue can survive that long. Another example is the discovery of the catastrophic nature of the Yellowstone fossil forests,4 once thought to represent long ages of ordinary processes. 
Other evidence for rapid deposition of sediments includes the rapid underwater deposition of turbidites (geological formations caused by a type of underwater avalanche) and the rates of erosion of the continents, which seem too rapid for the supposed great age of the Earth.5

Taking the Bible as Myth

Some people solve the conflict by concluding that the biblical miracles are myths—traditional stories that serve to express a worldview. For these individuals, no conflict exists since the event didn't happen the way it was described. For example, there really wasn't a man named Daniel who spent a night in a lions' den; this is merely a story told to show that God takes care of those who believe in Him. This approach, however, undermines the inspiration of Scripture. Some see the ages obtained by conventional dating as so strongly indicating an old Earth that they conclude a literal reading of the Bible to be absurd. Such individuals may accept the ideas of some biblical scholars who believe that parts of Genesis (Chapter 1, for example) were written after other sections. Taking this view of Scripture may lead one to deny Christ's life and ministry. The evidence against the bodily resurrection of Christ is comparable to that against a literal reading of Genesis 1. To be consistent in an understanding of the inspiration of Scripture, one must be ready to accept that miracles did occur and that, using conventional means, their literal occurrence cannot be proved. Thus the conflict remains.

For most believers, it is no surprise for there to be conflict between faith and secular science. Christian doctrines are based on faith and are supported by evidence that appeals to reason, including personal experience, documentary evidence, and eyewitness testimonies. Empirical evidence is also important, but it is not the only factor as it is in secular science. Interpreting Scripture must always be done in humility.
Are there other interpretations possible that do not destroy the original meaning? Alternate views may be acceptable if the passage allows for them without losing sight of the event’s miraculous nature. The same principle should apply to interpreting science—a humble attitude and consideration of alternative hypotheses. Maintaining this attitude can help keep conflicts between the Bible and science in perspective. To be consistent in understanding the inspiration of Scripture, one must be ready to accept that miraculous events did in fact occur and that, using conventional means, how they happened cannot be proved. Thus, the potential for conflict remains—as it will as long as the world continues in its present state. Perhaps God will someday reveal to a greater degree the laws within which He has chosen to operate. Only then will an understanding come that there was no conflict after all. For the present, the tension must be tolerated. There will always be some conflict between science and the Bible. Some apparent conflicts may be resolved as science makes new discoveries, but others will be resolved only in eternity. Conflict between the Bible and science arises for several reasons: (1) the differing philosophical understandings of the role of God in nature; (2) the difficulty of interpreting the history of the world scientifically; (3) the inability of science to explain in scientific terms what God did miraculously; and (4) the brevity and incompleteness of the biblical information about the history of nature. All these questions and conflicts should present opportunities for scientists and theologians to grow together in their understanding. The tragedy is that both often seem limited by and locked into their own perspective and fail to communicate in a common language. David Ekkens, Ph.D., is a retired Professor of Biology from Southern Adventist University, Collegedale, Tennessee. 1.
Stephen Jay Gould, “Nonoverlapping Magisteria,” Natural History 106 (1997), pp. 16–22. 2. http://www.chem.hope.edu/~polik/warming/IceCore/IceCore2.html. Accessed March 11, 2010. 3. M. H. Schweitzer, Z. Suo, R. Avci, J. M. Asara, M. A. Allen, F. T. Arce, and J. R. Horner, “Analyses of Soft Tissue From Tyrannosaurus rex Suggest the Presence of Protein,” Science 316 (2007):277–280. 4. Harold Coffin, “The Puzzle of the Petrified Trees,” Dialogue 4 (1992):11–13, 30, 31. 5. A. A. Roth, Origins: Linking Science and Scripture (Hagerstown, Md.: Review and Herald Publ. Assn., 1998).
On the Insert menu, click Advanced Symbol, and then click the Special Characters tab. MS Word 2013 paragraph marker stuck (Microsoft Community). Many different cultures and backgrounds have adopted this symbol, believing that the… This book is not for those who are experienced or advanced users of Word, as there are a number of bigger and better reference-style books available for Word. In this video I’d like to show you how to insert a symbol in Microsoft Word. Choose the symbol that you want from the drop-down list. Insert a bookmark into a Word document: bookmarks are placed at a specific point within the text. Word allows you to format bullets in a variety of ways. This will open a quick-access menu of 20 frequently used symbols to pick from. Formatting marks help you see characters that are not visible, such as tabs, spaces, paragraph marks, and hidden text. The paragraph marker in Word 2013 (located in Home, Paragraph, the little backwards-P button on the ribbon) can be clicked and unclicked; however, the marker in the text is always present. Symbols and Characters: fast symbol lookup (Microsoft AppSource). If you’re new to creating legal citations, you might wonder how to get the section symbol (§). Three ways to insert currency symbols in Microsoft Word. Find open book symbol stock images in HD and millions of other royalty-free stock photos, illustrations, and vectors in the Shutterstock collection. Entering symbols and unusual characters: what do you do when you need to enter a symbol or unusual character into a note that isn’t on the keyboard? You can enter as many bookmarks as you want in your document or Outlook message, and you can give each one a unique name so they’re easy to identify. Fortunately there is a simple way to view the formulas in your table so that you can confirm that they are working correctly, or in case you need to troubleshoot a formula that is not outputting the correct result. Author and talk show host Robert McMillen shows you how to insert symbols in Microsoft Word.
Excel 2016, Word 2016, Outlook 2016, Excel 2013, Word 2013. This video shows how to insert a variety of different symbols, images, and graphics into a Microsoft Word document. With the Microsoft Mathematics Add-in 2013 for Word and OneNote, you can perform mathematical calculations and plot graphs in your Word documents and OneNote notebooks. Setting formatting options: formatting documents in Word. Check out other articles in this blog series and find out how to insert tabs in Word 2013 or how to quickly create a chart in Excel. The major views available in Word are Print Layout, Full Screen Reading, Web Layout, Outline, and Draft. Thousands of new, high-quality pictures added every day. How to add fonts to Word 2013 (posted February 22, 2015 by Walker Rowe in Microsoft Word): calligraphy is a lost art for those who use computers and the Latin… Starting with a new or open document, move the insertion point to where you want to insert the character, then move to the Insert tab and click on the Symbol button at the far right. You can use symbols and different colors, or even upload a picture as a bullet. But for those who need a foundation in the features and how to use them, this is a great start to get comfortable with the various features of Word 2013. This page is intended to supply a list of some useful symbols separated by topic so they can be found quickly without the need to search the Unicode reference tables. Open a Word file, select Insert > Symbol, scroll down to the new font, choose one of the symbols, and click Insert. Indent the first line of a paragraph (called a first-line indent) as books do to distinguish paragraphs. How to insert special characters and symbols in Word 2013. Add or delete bookmarks in a Word document or Outlook message. Also, readers will find it more pleasant to look through a document with varied symbols. Where to find it in Word 2007 or 2010 keeps bothering new users who just upgraded from Word 2003. However, you can also access the full range of special characters.
Quickly indent lines of text to precise locations from the left or right margin with the horizontal ruler. A left tab is set by default, but you can change the tab to right, decimal, center, etc. How to insert symbols and special characters in Word. A bookmark in Word works like a bookmark you might place in a book. In addition to inserting things like images and shapes, you can insert symbols and special characters into your document. Quickly show or hide bookmarks in Word with Kutools for Word. Symbols do not print on Word document (Microsoft Community): I created a Word document that included check boxes that I inserted using the symbol browser. If you frequently insert the trademark symbol in your Word documents… Make sure that the Font drop-down is set to “normal text.” As a matter of fact, many people don’t even know that symbols and special characters can be added to Word 2013 documents. In Word Options, on the Display pane, you can set options to show or hide formatting marks. Word 2013 and Word 2010 offer similar special character options. Exploring graphics in Microsoft Word: this document provides instructions for working with various types of graphics in Microsoft Word. Format in Microsoft Word and convert to ebook in Calibre 4. If the symbol is not in the list, click More Symbols. In the Font box, choose the font you are using, click the symbol you want to insert, and select Insert. However, these steps can be modified for all currency symbols available through the font files installed on your computer. For simplicity, we’ll concentrate on the euro, pound, and cent signs in these examples. Word 2013 does not have a formula bar, which can make it difficult to check a formula that you have added to your table. But this is a special symbol font, which means it does not use standard Unicode encoding.
The add-in also provides an extensive collection of mathematical symbols and structures to display clearly formatted mathematical expressions. Download the Microsoft Mathematics Add-in for Word and OneNote. Notice the character code at the bottom right side of the screen. With the Microsoft Mathematics Add-in for Word and OneNote, you can perform mathematical calculations and plot graphs in your Word documents and OneNote notebooks. Assign a shortcut key to a symbol in Word 2013 (Dummies). Tabs can be set by clicking inside the ruler shown in the toolbar area. Click the Symbol button to open the Symbol dialog box. Most people do not know just how easy it is to insert symbols and special characters in Word 2013. In this case, the six-pointed black star; scroll if needed to find it. If I understand right what you mean, the answer is easy. If I open a Word document in safe mode it is not present. How to insert symbols in Microsoft Word 2013 (YouTube). And this can be pretty useful when you’re looking for… In MS Word, bulleted lists can help arrange Word documents so they are clearer. Text must be indented using your Tab key or by setting tabs in Word’s toolbar. Word provides different ways you can view your documents, depending on your particular needs. The Symbol window will be opened, where you can select the check box and then click on the Insert button. How to get special characters using Alt key codes or the… Click the Kutools Show/Hide button in the Bookmark group to show all bookmark symbols. Symbols and Characters: fast symbol and emoticon search with convenient… To add a bookmark, you first mark the bookmark location in your document. To see the symbol menu in Microsoft Word, go to Insert > Symbols on the ribbon and click the Symbol button, or Insert > Advanced Symbol > Symbols in the menu system in Word for Mac. An open book or scroll can fall into one of two categories. Microsoft Word includes two types of special characters. Insert desired math symbols into the document in a single tap.
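The currency and special symbols discussed here are ordinary Unicode characters, each with a fixed code point. As a quick illustration (the code points are standard Unicode; the small helper itself is just a sketch, not part of any Word API):

```python
# Unicode code points behind some of the symbols discussed above.
# The names and code points are standard Unicode; the describe() helper
# is purely illustrative.
SYMBOLS = {
    "euro sign": 0x20AC,       # €
    "pound sign": 0x00A3,      # £
    "cent sign": 0x00A2,       # ¢
    "trademark sign": 0x2122,  # ™
}

def describe(name: str) -> str:
    """Return 'name: character (U+XXXX)' for a symbol in the table."""
    cp = SYMBOLS[name]
    return f"{name}: {chr(cp)} (U+{cp:04X})"

for name in SYMBOLS:
    print(describe(name))
```

Any application that handles Unicode text, Word included, renders these the same way as long as the active font contains a glyph at that code point.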
I’m working on Microsoft Word 2013 and trying to make the symbol for a trademark. Download the Microsoft Mathematics Add-in 2013 for Word and OneNote. Translate an entire document, a selected word, or implement a quick translation. Taken within the context of a ritual or ceremony, however, an… Where is the Insert AutoText in Microsoft Word 2007 and 2010? This week, we’ll look at three ways to insert currency symbols in Microsoft Word. The advantage of using the Unicode version of the open book is that it will appear the same in any application which has an image at that code point, in the future when more fonts have it. Today’s post explains all of the special characters in Microsoft Word for Office 365, Word 2019, and Word 2016. To insert a symbol, go to the Insert tab and click Symbol. From the Font drop-down list, choose the font you like best, Wingdings for example. One of the benefits of using an application like Word is that you can add more than just words to your document; you aren’t constrained by what you can type. Unfortunately, the keyboard has only the most important symbols, and if you need to use other symbols not on the keyboard, you need to use the Symbols tool in MS Word 2013 while preparing your documents. How to change bullet characters in Word 2013 (Dummies). Unicode: list of useful symbols (Wikibooks, open books for an open world). The hamsa symbol, depicting an inverted right hand with an open eye in the middle, gets its name from the Arabic word for five. If you don’t have Classic Menu for Word 2007, 2010, 2013, 2016, 2019, and 365 installed… AutoText can be a handy feature for when you have lots of boilerplate text to use in a project. Here is how you can use this tool in MS Word 2013 with a few mouse clicks only. Instructions in this article apply to Microsoft Word for Office 365, Word 2019, Word 2016, and Word 2013. Microsoft Word 2013 symbols: typing the occasional nonstandard character. To type the occasional foreign character or symbol, it’s easiest to use Insert > Symbol.
A Word document with mathematics will definitely need the insertion of symbols, so this app allows the user to insert the symbol in just a single click or tap. Setting paragraph indents: formatting documents in Word. When you work within a document, you can set options to customize the way formatting appears in Word. You can insert the superscript TM symbol by applying the special characters command. While entering a very long document quickly, the user need not use any shortcuts or other sources to insert the symbol. How to get special characters using Alt key codes or the Word symbols library. Word displays a popup with a number of frequently used symbols. On the Insert tab of the ribbon, in the Symbols group, click Symbol > More Symbols. Microsoft Word 2013 contains a list of symbols, including the trademark symbol, not normally displayed on your keyboard. On the Insert menu, click Advanced Symbol, and then click the Symbols tab. Kutools for Word provides users two ways to show or hide bookmarks quickly. Kutools for Word, a handy add-in, includes groups of tools to ease your work and enhance your ability to process Word documents. Select More Symbols and choose one from the symbols library from the “normal text” font. You can also insert a check box from the Developer tab. This award recognizes tech experts who passionately share their knowledge with the community. Entering symbols and special characters: getting started. Insert symbols in documents in MS Office 2013: how to. These provide the facility for the user to check the check box in Word 2013. If all you’re doing is using it with Word, the Insert Symbol tool may still be working for you. Click the Symbol button in the Symbols section of the Insert tab and select More Symbols. In Word 2013 the Full Screen Reading view was renamed the Read Mode view.
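The “character code” shown in Word’s Symbol dialog is simply the character’s Unicode code point in hexadecimal; typing that code in a document and pressing Alt+X converts it to the symbol. A minimal sketch of that mapping in both directions (the two helper names are my own, not Word functions):

```python
# Word's Symbol dialog shows a hex "character code" for each symbol; typing
# the code in a document and pressing Alt+X converts it to the character.
# These helpers reproduce that mapping using plain Unicode.

def char_code(ch: str) -> str:
    """Hex character code as Word's Symbol dialog displays it."""
    return format(ord(ch), "04X")

def from_code(code: str) -> str:
    """Character you would get after typing the code and pressing Alt+X."""
    return chr(int(code, 16))

print(char_code("™"))     # 2122
print(from_code("2122"))  # ™
```

The same codes work with the Alt key method mentioned above, so the dialog and the keyboard shortcuts are two views of one underlying table.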
- By Chip McDaniel
- Power needs to be distributed to a machine's motors, drives, controllers, and other components.
- The machine's safety system must remove motion-causing energy when called upon, including both electrical and fluid power.
- It is a good practice to have multiple Ethernet and serial ports available to integrate to a variety of equipment, computers, HMIs, and business and enterprise systems.

Systems must integrate multiple power and control subsystems and components into a coherent whole

By Chip McDaniel

Faced with ever-increasing cost pressures and demands for improved performance, machine builders are actively seeking new automation solutions with improved cost/performance ratios. In response to these demands, vendors must often incorporate commercial off-the-shelf components and other technologies to deliver more performance at lower costs in smaller form factors. This article shows how machine builders and vendors can work together to deliver the automation systems demanded, and how to successfully integrate the multiple power and control subsystems and components.

Components and subsystems

A machine's automation system primarily consists of power and control components. For a smaller machine, these may be housed in one panel (figure 1), whereas larger machines may require multiple panels, often one for control and another for power. The main subsystems and components of a machine automation system are:
- power distribution
- motor control and drives
- safety system
- programmable controllers
- discrete and analog I/O
- communication systems
- human-machine interface (HMI)

The power distribution subsystem feeds power to components, such as motors, drives, and controllers. The control subsystem primarily consists of safety systems, programmable controllers, discrete and analog I/O, communication systems, and HMIs. Let's look at each of these areas in more detail.
The National Electric Code (NEC, also NFPA 70) has much to say about using electricity properly to safeguard persons and property. The code comes into play well before the power source connects to the machine control enclosure through a plug, disconnect, or terminal block. At the machine, the NFPA 79: Electrical Standard for Industrial Machinery is the benchmark for industrial machine safety related to fire and electrical hazards. Some of the major requirements in machine control power distribution discussed in these standards include using proper disconnect means, protecting personnel from contact with electrical hazards, and protecting equipment from overcurrent and overloads. The disconnect—whether a switch, circuit breaker, or cord with a plug—must be provided for any control enclosure fed with voltages of 50 VAC or more. It should be properly sized, positioned, wired, labeled, and, in some cases, interlocked to the enclosure door. Protecting personnel from contact with electrical hazards is always needed, both inside and outside a machine power or control panel. All conductors must be protected from contact by personnel. Most power distribution devices are designed to facilitate this level of protection, but live components, such as power buses, distribution blocks, and other power terminals, should be covered with a nonconductive, see-through cover. Protecting equipment from overcurrent is critical to reduce the chance of fire. Conductors and electrical components must be protected from overcurrent related to short circuits. Overcurrent protection devices, such as fuses and circuit breakers, must be sized based on conductor current-carrying capacity, device interrupt rating, maximum fault current, system voltage, load characteristics, and other factors. For power circuits, branch-circuit-rated devices must be used to meet current-limiting and ground fault protection requirements. 
Supplemental overcurrent protective devices are not suitable for use in these circuits but work well in downstream control circuits tapped from the load side of the branch circuit.

Motor control and drives

Motors have special needs in machine control. For every motor, a proper form of electrical control is required, from simple on/off to more complex variable speed applications. Motor control devices include manual motor starters, motor contactors and starters with overloads (figure 2), drives, and soft starters. A motor circuit must include both overcurrent (short circuit) and overload protection. This typically consists of branch-circuit protection, such as properly rated fuses, and a motor starter with overload protection devices, such as thermal overloads, but additional protection may be needed. Additional protection to consider for machine control components includes loss of cooling and abnormal temperatures. Ground fault protection is also needed, so a proper ground connection is important. Over-, under-, and loss-of-voltage conditions must also be considered. Protection from lightning, overspeed, and loss of a voltage phase in three-phase supplies are additional considerations for proper machine control. Some motor controllers, such as drives and combination controllers, are self-protected. If this is the case, the device's rating or manufacturer's instructions will clearly note it is suitable for output conductor protection.

A risk assessment drives the safety system design as needed to remove motion-causing energy, including electrical and fluid power, to safely stop the equipment for protection of both personnel and machines. Many safety standards come into play for proper machine control at both a mechanical and electrical level. Proper mechanical machine guarding and access points, as well as elimination of identified hazards, is a starting point.
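To make the overload-sizing idea concrete, the commonly cited NEC 430.32 rule sets the maximum continuous overload-relay setting at 125% of nameplate full-load amps for motors with a service factor of 1.15 or more (or a marked temperature rise of 40 °C or less), and 115% otherwise. The sketch below encodes only that rule of thumb; it is a teaching aid, not a substitute for the current code or a qualified engineer:

```python
# Illustrative sketch of the NEC 430.32 overload-relay sizing rule:
# 125% of nameplate full-load amps (FLA) for motors with service factor
# >= 1.15 or marked temperature rise <= 40 C, otherwise 115%.
# Verify any real sizing against the current NEC and device datasheets.

def max_overload_setting(fla: float, service_factor: float = 1.0,
                         temp_rise_c: float = 60.0) -> float:
    """Maximum continuous overload-relay setting in amps."""
    if service_factor >= 1.15 or temp_rise_c <= 40.0:
        return fla * 1.25
    return fla * 1.15

# A 10 A FLA motor with a 1.15 service factor:
print(max_overload_setting(10.0, service_factor=1.15))  # 12.5
```

Branch-circuit (short-circuit) protection is sized separately, under different NEC rules, which is why the article treats the two kinds of protection as distinct requirements.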
Safety relays or safety-rated controllers must be used to monitor safety switches, safety limit switches, light curtains, and safety mats and edges. In small machine control applications, a safety relay is probably the simplest way to integrate safety functionality for emergency stop, monitoring a guard door, or protecting an operator reaching through a light curtain. In more advanced machines, safety-rated controllers provide the same functions, but can simplify the integration of multiple safety devices. Safety-rated controllers reduce hardwired safety logic by providing a platform to program the safety functions needed for proper and safe machine control.

Programmable controllers and I/O

Available in form factors from small to large, the machine controller can be a programmable logic controller (PLC), a programmable automation controller (PAC), or a PC. The complexity of the machine control application, end-user specifications, and personal preference drive controller selection. Many vendors have families of controllers to cover a range of applications from simple to complex, allowing a machine builder to standardize to some extent. Often three or more physical configurations (small, medium, and large form factors) are available from the controller manufacturer. Using the same software platform to program a family of controllers is becoming the norm. This allows the designer to first program the system, and then select the right controller based on its capacity to handle the number of I/O points needed, as well as special functions such as proportional-integral-derivative (PID) control and data handling. Required capabilities like extensive communications and high-speed control should be carefully evaluated, as these are often the main factors driving controller selection. Discrete and analog inputs and outputs connect the controller to the machine sensors and actuators.
These signals can originate in the main control panel through a terminal strip with wiring to field devices, but a distributed I/O architecture is often a better solution. Distributed I/O reduces wiring by moving the input or output point closer to the field device, and by multiplexing multiple I/O signals over a single cable running from the remote I/O component to the control panel. For distributed I/O at a smaller scale, IO-Link is a point-to-point serial communication protocol where an IO-Link-enabled device connects to an IO-Link master module. This protocol communicates data from a sensor or actuator directly to a machine controller. It adds more context to the discrete or analog data by delivering diagnostics and detailed device status to the controller. Another important part of machine control now and for the future is extensive communication capability. It is a good practice to have multiple Ethernet and serial ports available to integrate to a variety of equipment, computers, HMIs, and business and enterprise systems (figure 3). Multiple high-speed Ethernet ports ensure responsive HMI communication, as well as peer-to-peer and business system networking. Support of industrial Ethernet protocols, including EtherNet/IP and Modbus TCP/IP, is also important for scanner/client and adapter/server connections. These Ethernet connections enable outgoing email, webserver, and remote access communication functions, all important options for machine control. Machine control often benefits from the availability of legacy communication methods, such as serial RS-232 and RS-485. Modern controllers often also include USB and MicroSD communication and storage options. A big part of machine control communication is cybersecurity. Consider a layered defense where protection includes remote functions that are only enabled as part of the hardware configuration.
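To give a feel for one of the industrial Ethernet protocols mentioned, Modbus TCP, here is a minimal sketch of the on-the-wire request frame for a "Read Holding Registers" (function 0x03) transaction, following the public Modbus application protocol specification. It only illustrates the framing; a real integration would use an established Modbus library with proper connection and error handling:

```python
import struct

# Modbus TCP request = MBAP header + PDU, per the Modbus spec:
#   MBAP: transaction id (2B), protocol id (2B, always 0),
#         length (2B, = unit id + PDU bytes), unit id (1B)
#   PDU (function 0x03): function code (1B), start address (2B), quantity (2B)

def read_holding_registers_request(transaction_id: int, unit_id: int,
                                   start_addr: int, count: int) -> bytes:
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 1, 0x0000, 2)
print(frame.hex())  # 000100000006010300000002
```

The fixed, compact framing is part of why Modbus TCP remains easy to bridge to serial RS-485 devices and to inspect on the wire when commissioning a machine network.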
For further protection, all tags should be protected from remote access unless the tag is individually enabled for that purpose. The HMI shows vital information about machine conditions using graphical and textual views. HMIs can be in the form of touch panels, text panels, message displays, or industrial monitors. They are used for monitoring, control, status reporting, and many other functions. The purpose of the HMI must be clearly defined. Some machines may only need a fault message display with few control functions. Other machines may demand a detailed view of machine status, access to system parameters, and recipe functionality. Clearly defining the need of the machine will help determine HMI size and capabilities, and this should be done early in the design process. HMIs can also act as data hubs by connecting to multiple networked devices. In some machine control applications, multiple protocols may be used, and often HMIs can be used for protocol conversion. This functionality can be used to exchange data, such as status and set points, among different controllers and other smart devices. Some HMIs can also send data to the cloud or enable remote access functionality through the Internet, given proper user name and password authentication. Machine automation systems consist of multiple subsystems and components to provide the required power distribution, safety, and real-time control. Each of these subsystems and components must work together, and many are often networked to each other via either hardwiring, or increasingly via digital communication links. Careful design, selection, integration, and testing will ensure the automation system performs as required, both initially and throughout the life cycle of the machine. We want to hear from you! Please send us your comments and questions about this topic to [email protected].
“A picture is worth a thousand words” is an English saying that is familiar to all of us. Wikipedia describes this saying as referring “to the notion that a complex idea can be conveyed with just a single still image or that an image of a subject conveys its meaning or essence more effectively than a description does.” Unfortunately, no one seems to know for sure where or when this phrase originated, but what is certain is that it undoubtedly rings true for all of us, especially in the world language classroom! In fact, research now backs up what we have long suspected… that teaching with images is very effective!

What can be done with a picture, you might ask? Well, I’m here to tell you that a LOT can be done. I recently stumbled onto an article published by a gentleman by the name of Harry Grover Tuttle in which he lists 50 ways to use images in the foreign language classroom. I was blown away by how many uses he came up with, and I’m sure you will be too! So sit back, grab a pencil and prepare to jot down a few ideas that you can easily incorporate into your class tomorrow!

1) One student describes a picture for one minute to his partner.
2) One student describes a picture orally to a partner who then repeats the description, using the picture as an aid for recall.
3) One student orally describes the picture to another student who does not see it; the second student then repeats the description to the first student.
4) Two students look at a picture; then one student looks away while the other student asks him questions about it.
5) Two students look at the picture and compete to see who can make up more questions about it.
6) Two students make up questions about a picture; one student uses question words, the other does not use question words. A continuation of this exercise would be to have the students answer each other’s questions orally or in writing.
7) One student orally describes a picture to a second student who then draws a copy of it.
8) One student orally describes a picture to another student who then is given a choice of pictures and must choose the one described.
9) Two students tell a story using a picture. One student tells what happened before the scene in the picture and the other tells what will happen afterward.
10) While one student orally describes a picture, the other student changes descriptive statements to questions.
11) While one student orally describes what is happening in a picture, the other student says the same thing in a different tense or in the negative.
12) While one student orally describes a picture, the other paraphrases what the first student is saying.
13) While one student orally describes a picture, the other repeats the same thing but changes all subjects to the plural or singular and makes all other necessary grammatical changes.
14) Two students look at a picture and one acts the angel conscience and the other the devil conscience to debate what the person in the picture should do in a certain situation.
15) Two students look at the same picture and one tells what will happen in an optimistic point of view while the other relates the future in a pessimistic point of view.
16) Two students look at the same picture and one tells all the good points about things in the picture and the second tells all the bad points.
17) Two students look at the same picture and as one describes the picture the other says the exact opposite, i.e., “the chair is big” will be changed to “the chair is small.”
18) Two students look at the same picture and supply the dialogue for the people represented. (If there are more than two characters in the picture, group students accordingly.)
19) Two students look at the same picture and act out what is happening in the picture as they are describing it.
20) Two students look at the same picture and each pretends to be an object in the picture. The two objects then talk to each other.
21) One student selects an object in the picture and tries to sell it to the other student.
22) One student tells the other student all the colors in the picture and the second student tells what objects have those colors.
23) One student tells the other student what he would do in the shown situation. The other student then tells what he would do. At a more advanced level the second student might use a different verb construction such as “should have.”
24) After selecting a picture, a student chooses a letter of the alphabet and then names as many objects as possible in the picture that begin with that letter. The student who names the most in one minute wins.
25) Two students look at the same picture; the first student names an object and describes it. The second student compares it to some other object in the picture. They do this for as many objects in the picture as possible (at least 5). For example: first student, “The bush is large;” second student, “The tree is larger than the bush.”
26) Two students look at the same picture; the first student names everything made of wood and then the second student names everything made of metal or plastic. See who can name the most.
27) Two students look at the same picture; the first tells how he would add to the picture to make it more attractive and the second tells what he would do to the picture to improve its…
28) Two students look at the same picture; the first names all the pretty things in the picture and the second student then names all the ugly things in the picture.
29) Two students look at the same picture; the first student tells what mood he feels is represented in the picture. The second student tells him whether he agrees with him and why.
30) Two students look at the same picture; the first student tells the other about a similar experience in his own life. The second student then tells in what way the first person’s experience is similar to the original picture.
31) One student is given two pictures by his partner. The first student describes all the similarities between the two pictures.
The second student then describes all the differences between them. (He should not mention any that the first student mentioned.)
32) One student is given two pictures by his partner. The first student makes up a story about the two pictures. The second student uses the pictures in a different order to tell a different story.
33) One student is given two pictures by his partner. The first student chooses an object in one picture to put in the second picture and tells how the new object would change the picture. The second student does the same thing with a different object.
34) A student is given a picture by another student. The first student tells the physical location, the season of the year, the weather, the time of day, the health of the people involved, and their activities. The second student then tells all other information about the physical conditions and health of the people in the picture.
35) A student writes out a description of a picture and then omits at least one word per sentence which he puts at the bottom of the page. The other student then replaces the omitted words in the paragraph.
36) The first student describes the home and the family of the person in the picture. The second student tells how the described home and family is similar or different from his own.
37) A student selects a picture and tells what the person’s favorite sports or hobbies are, where he does them, and how he does them.
38) A student writes a letter of about ten sentences telling a friend about the picture, pretending it is a tourist site, a vacation trip, a historical incident, or a news story.
39) The first student contrasts objects in the picture, i.e., “The chair is big but the book is small.” The second student compares the objects using equalities, i.e., “The chair is as heavy as the table.”
40) One student tells another student how he would make his picture into a TV program or movie. The second student tells what he thinks about this program.
41) One student makes up a mystery story about the picture. Another student tries to solve the mystery by creating a possible solution.
42) One student gives another student a picture and specifies a mood. The second student then writes at least five sentences about the picture reflecting that mood. The first student then makes as few changes as possible on the written description to change it to a different mood which the second student suggests.
43) One student looks at a picture and describes cultural differences between the country depicted in the picture and the United States. The second student describes cultural similarities depicted in the picture.
44) Each of the two students lists as many vocabulary words as possible from a given picture. The student who writes down the most words wins.
45) One student starts a story based on the picture. After three sentences, the second student continues the story for three more sentences. The first student then continues for an additional three sentences. The second student ends the story with three sentences.
46) Given a vowel or consonant sound, the students say all the words, objects, actions, etc., in the picture which contain that sound.
47) One student makes a statement about the picture. The second student repeats the statement and adds to it by using a conjunction such as but or since.
48) Two students see how many different ways they can rearrange three pictures to tell different stories.
49) One student looks at a picture and tells how it is similar to his house, community, etc. The second student tells how it differs.
50) In turn, each of the two students selects a picture and tells why the other should visit the place or do the activity illustrated in the picture. A third student will decide who wins and explain why.

T-H-A-N-K Y-O-U, Harry! I’m sure there are a few ideas listed here that perhaps you hadn’t ever thought of… I know that was true for me!
And once you start using images, you might even think of a few more ideas not listed here! There are endless ways that you can incorporate an image into your lessons, and there are endless images on the internet. Do a search for a related topic that you are studying (house, family, etc.) and simply archive all of those wonderful pictures into a digital file for later. But… have you ever considered having your students bring in pictures? This is a wonderful way to bring even more meaning to the language because the language gets personal when personal images are used! You could even have the students take pictures around campus with a digital camera and upload them to a class file to be used throughout the semester! And then there’s the option of you… yes, you… bringing in personal pictures. Students love to get sneak peeks into your private life (you and your dog at the park, your family at Disney World, etc.) There are so many options! *Of course, you should always use discretion when sharing personal images. So grab some images from Google and get going! Let me know in the comments below how your activity turned out or if you have another idea to add to the list!
Joint Air & Space Power Conference 2020
Leveraging Emerging Technologies in Support of NATO Air & Space Power
Conference Read Ahead

Implications of 5G to Air Power – A Cybersecurity Perspective
By Maj Fotios Kanellos, GRC Air Force, Joint Air Power Competence Centre

The next generation of wireless and mobile networks, called 5G [1], is expected to become the most important network of the 21st century and is predicted to have a decade-long impact. 5G’s deployment started in 2019, and since then a ‘race’ has been ongoing between governments, industries, and investors to be the first to build a functional network. 2020 is expected to be the year that 5G will be launched globally, and by 2025, 15% of global mobile connections will be based on it [2]. Worldwide 5G revenues in 2025 are anticipated to reach €225 billion [3]. 5G is gradually replacing the 4G/LTE [4] network, which was released in March 2009 [5] and introduced ground-breaking, for that period, fast connection speeds and mobile hotspots. 5G technology, based on the 3GPP 5G New Radio (NR) standard [6], is expected to boost data transmission and communication speeds more than threefold while simultaneously guaranteeing ultra-reliable and resilient connections. In the 3G and 4G world, speed and throughput were the most important characteristics to differentiate a network. The amount of data that a network could relay and the upload and download speeds were the main features for users or services. But in a future 5G world this is not enough; 5G technology is not simply a faster version of 4G but rather an entirely new network architecture [7]. The three main technical characteristics of 5G networks are:
- Data rates of between 1–20 Gbit/s per mobile base station, at least 10 times faster than before, allowing users on the same cell to quickly download a large volume of data.
- Latency of less than 1 ms, virtually eliminating any delays or lags when requesting data from the network.
- Increased capacity to connect not only a high number of individual users but also more objects per specific geographical area.

The three characteristics above, together with mobility (staying connected while travelling at high speeds), energy efficiency (switching inactive radio interfaces into low-energy mode), service deployment and reliability, constitute the key features of 5G networks that make them unique and, indeed, revolutionary, as they promise to expand our ways of communication and completely transform our way of living. 5G’s network infrastructure will no longer be based on the combination of specialised hardware and software elements. Instead, customization and functionality will take place only in the software. A new core network will support ‘network slicing’ features which will provide different service layers on the same physical network [8]. 5G, unlike previous technology, operates on three different spectrum bands (high, medium and low – see figure on next page), with each band having specific characteristics suitable for certain deployment scenarios. Finally, a more decentralized architecture than the traditional one in 4G will allow the network to steer traffic at the ‘edge of the network’ while still ensuring low response times. 5G technology enhanced by Artificial Intelligence (AI) is accelerating the development and implementation of technologies such as Connected Autonomous Vehicles (CAVs), ‘smart cities’, Virtual Reality (VR) and Augmented Reality (AR). Moreover, 5G networks contribute to a huge rise in the number of components in the Internet of Things (IoT), massively increasing the number and diversity of interconnected devices. It is predicted that around 75.44 billion devices worldwide will be ‘online’ by 2025 [9], virtually connecting ‘everything to everything’ (X2X).
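To make the headline figures above concrete, here is a quick back-of-envelope sketch (an illustration, not part of the original article) comparing ideal download times for a 10 GB file. The 100 Mbit/s 4G figure is an assumed representative value, and real-world throughput would be lower due to protocol overhead, congestion and cell sharing.

```python
# Illustrative download-time comparison for the data rates quoted above.
# Assumes ideal conditions: no protocol overhead, no congestion, no sharing.

def download_seconds(file_bytes: int, rate_bits_per_s: float) -> float:
    """Ideal time to transfer a file at a given link rate."""
    return (file_bytes * 8) / rate_bits_per_s

FILE_SIZE = 10 * 10**9  # a 10 GB file, in bytes

rates = {
    "4G/LTE (assumed 100 Mbit/s)": 100 * 10**6,
    "5G low end (1 Gbit/s)": 1 * 10**9,
    "5G high end (20 Gbit/s)": 20 * 10**9,
}

for label, rate in rates.items():
    print(f"{label}: {download_seconds(FILE_SIZE, rate):,.0f} s")
# 800 s at the assumed 4G rate shrinks to 4 s at the top of the 5G range.
```

The point of the sketch is simply the order-of-magnitude change: a transfer measured in minutes on 4G becomes a matter of seconds at 5G rates.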
Subsequently, 5G has the potential to transform the employment of military air operations and enhance their capabilities with components and functions that never existed before. However, from a cyberspace perspective, 5G technology also drastically increases the attack surface (in some ways previously non-existent) and the number of potential entry points for attackers. The increased speed of the connected devices could make them more vulnerable to Distributed Denial-of-Service (DDoS) attacks. In today’s era of 4G/LTE mobile Internet, a large botnet [10] formed simply by hacking a user’s home devices could be used to launch large-scale DDoS attacks against websites; in tomorrow’s 5G network era, a similar botnet could disrupt an entire network of autonomous cars in a city [11]. As a result, the wide range of services and applications, as well as the novel features in the architecture, will introduce a plethora of new security challenges. It was in September 2016 when hackers succeeded in scanning and exploiting hundreds of thousands of low-cost and low-powered IoT devices such as IP cameras, home routers, and digital video recorders, and turned them into remotely controlled bots by using the ‘Mirai’ malware to launch large-scale DDoS attacks. Not only 5G technology itself but also the communication between internet-connected devices can be the weakest link in 5G’s security. If the manufacturers of those low-cost interconnected devices do not embed cybersecurity standards in their products, the security risks will remain high. 5G networks and smart devices must adopt reliable and long-term security requirements beginning in the early stages of the design and manufacturing processes in order to fulfil their technological promises. By embracing a structured ‘cyber hygiene policy’, 5G technology can eventually be effectively implemented in Air Operations to improve communications and situational awareness.
Enhancing Air Power

NATO Allied Forces can gain great advantages by leveraging the novel features of 5G cellular technology. Communications and network operations in the air battlespace will be able to handle far more data at much faster speeds, supporting real-time video streaming and VR applications. The wide employment of Unmanned Aerial Vehicles (UAVs) for purposes ranging from Intelligence, Surveillance & Reconnaissance (ISR) to airstrikes is expected to evolve even further in terms of geographic coverage and efficiency. Even logistics and maintenance activities, such as tracking maintenance stocks and conducting technical inspections, could benefit from reliable and secure mobile connectivity. Modern logistics systems, such as the Autonomic Logistics Information System (ALIS) for the Joint Strike Fighter, are integrated with maintenance and operations procedures from across the world, identifying problems with the aircraft, installing software updates, and providing preventive actions. 5G technology can clearly enhance the productivity and safety of such complex, large-scale and interconnected military logistics operations, transforming them into ‘sophisticated weapon systems’ ready to use even on the battlefield. 5G networks have the ability to expand the range of cloud-based applications and exponentially increase the amount of data transmitted and exchanged during air combat operations. The challenge of infobesity (information overload) can still be countered using digital technologies that take advantage of the super-fast, high-bandwidth and low-latency communication environment that 5G provides. Consolidating the information extracted from internet-connected sensors and platforms, and immediately distributing the acquired knowledge to the Command and Control structure, is essential to facilitate ‘smart’ decision-making [12]. Therefore, securing (allied) military networks and maintaining their high level of interoperability will become even more critical.
According to an ‘EU coordinated risk assessment report’ [13], published on 9 October 2019, among the main threats and vulnerabilities of 5G networks are high dependencies on individual suppliers. The lack of diversity in equipment and infrastructure can lead to increased exposure to attacks by state-sponsored actors who interfere with the suppliers. Thus, the individual risk profile of suppliers will become particularly important, especially for those with a significant presence within networks. In order to develop a secure 5G mobile network strategy, the US Department of Defense (DoD) decided, about a year ago, to strengthen the requirements for the supply chain of innovative technology products, including subcontractors, by introducing higher cybersecurity standards that would ensure resilience to cyber-attacks. The established public-private partnership, known as the ‘Trusted Capital Marketplace’, connects defence technology start-ups with trusted sources of capital in order to secure the delivery of such critical emerging technologies [14]. In the future stand-alone 5G ecosystem, as described above, all network functionalities will be virtualised in software rather than hardware and will take place within a single cloud environment. 5G networks are going to be deployed in a complex global cybersecurity threat landscape. To ensure confidentiality (authorised access), integrity (accurate information) and availability (any-time access) with such a revolutionary technology, and to confront the challenges derived from it, NATO member and partner countries will have to follow a new security paradigm. Current cybersecurity models and policies must be reassessed and new security frameworks applied in order to mitigate risks and threats. Are Alliance members determined to invest the resources necessary for establishing a resilient and secure infrastructure for 5G technology?
Are we willing to adapt to this emerging technology at the speed of change and not lag behind other competing nations? Those questions have to be answered as clearly and decisively as possible in the very near future.

1. A relatively recent definition of 5G networks, provided by EU Commission Recommendation 2019/534 (26 Mar. 2019), is ‘all relevant network infrastructure elements for mobile and wireless communications technology used for connectivity and value-added services with advanced performance characteristics such as very high data rates and capacity, low latency communications, ultra-high reliability, or supporting a high number of connected devices. These may include legacy networks elements based on previous generations of mobile and wireless communications technology such as 4G or 3G. 5G networks should be understood to include all relevant parts of the network.’
2. Fragouli, N., ‘5G brings $2.2Tn to the economy over the next 15 years’, Hellenic Association of IT & Communications, 2019, http://www.sepe.gr/gr/research-studies/article/13004311/axia-22-tris-fernei-to-5g-stin-oikonomia-ta-epomena-15-hronia/, accessed 21 Feb. 2020.
3. NIS Cooperation Group, ‘Cybersecurity of 5G networks: EU Toolbox of risk mitigating measures’, CG Publication, 29 Jan. 2020.
4. 4G LTE stands for the 4th Generation of Cellular Network Long Term Evolution. LTE is considered an improvement of 4G.
5. The first commercial use of 4G was in Norway and Sweden.
6. Techopedia, ‘Fifth Generation Wireless (5G)’, https://www.techopedia.com/definition/28325/fifth-generation-wireless-5g, accessed 21 Feb. 2020.
7. CPO Magazine, ‘5G and the Future of Cybersecurity’, https://www.cpomagazine.com/cyber-security/5g-and-the-future-of-cybersecurity/, accessed 21 Feb. 2020.
8. Each ‘layer’ will perform in parallel varying functions across the network, processing different volumes of information and transporting data packets to and from other layers within it.
9. Statista Research Department, ‘Internet of Things (IoT) connected devices installed base worldwide from 2015 to 2025’, https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/, accessed 21 Feb. 2020.
10. ‘A botnet is a set of computers infected by bots. A bot is a piece of malicious software that gets orders from a master. (…). A computer becomes infected either when a worm or virus installs the bot, or when the user visits a malicious web site that exploits a vulnerability in the browser’. ENISA, https://www.enisa.europa.eu/topics/csirts-in-europe/glossary/botnets, accessed 21 Feb. 2020.
11. Ibid. 7.
12. Pappalardo, D., ‘The Role of the Human in Systems of Systems: Example of the French Future Combat Air System’, OTH Journal, 2020, https://othjournal.com/2020/01/27/the-role-of-the-human-in-systems-of-systems-example-of-the-french-future-combat-air-system/amp/, accessed 21 Feb. 2020.
13. European Commission, ‘Member States publish a report on EU coordinated risk assessment of 5G networks security’, Press Release, Oct. 2019, https://ec.europa.eu/commission/presscorner/detail/en/ip_19_6049, accessed 21 Feb. 2020.
14. Mitchell, B., ‘DoD to launch Trusted Capital Marketplace of startups, investors’, FedScoop, 2019, https://www.fedscoop.com/dod-trusted-capital-marketplace-ellen-lord/, accessed 21 Feb. 2020.
Point of sail

A point of sail is a sailing craft's direction of travel under sail in relation to the true wind direction over the surface. The principal points of sail roughly correspond to 45° segments of a circle, starting with 0° directly into the wind. For many sailing craft 45° on either side of the wind is a no-go zone, where a sail is unable to mobilize power from the wind. Sailing on a course as close to the wind as possible—approximately 45°—is termed beating, a point of sail when the sails are close-hauled. At 90° off the wind, a craft is on a beam reach. At 135° off the wind, a craft is on a broad reach. At 180° off the wind (sailing in the same direction as the wind), a craft is running downwind. The point of sail between beating and a beam reach is called a close reach. A given point of sail (beating, close reach, beam reach, broad reach, and running downwind) is defined in reference to the true wind—the wind felt by a stationary observer—and, depending on the speed of the sailing craft, determines an appropriate sail setting, as determined by the apparent wind. The apparent wind—the wind felt by an observer on a moving sailing craft—determines the motive power for sailing craft. In points of sail that range from close-hauled to a broad reach, sails act substantially like a wing, with lift predominantly causing the boat to heel and to a lesser degree propelling the craft because the apparent wind is flowing along the sail. In points of sail from a broad reach to down wind, sails act substantially like a parachute, with drag predominantly propelling the craft, due to the apparent wind flowing into the sail. For craft with little forward resistance, like ice boats and land yachts, this transition occurs further off the wind than for sailboats and sailing ships. In the no-go zone, sails are unable to generate motive power from the wind.
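The roughly 45° segments described above can be summarized in a small helper function. This is a sketch using only the nominal textbook boundaries (45°, 90°, 135°, 180°); real no-go zones vary by craft and conditions, so the cut-offs here are simplifying assumptions.

```python
def point_of_sail(angle_off_wind: float) -> str:
    """Name the point of sail for a course given in degrees off the
    true wind (0 = head to wind, 180 = dead downwind).

    Uses the nominal 45-degree segment boundaries from the overview;
    real no-go zones vary between craft and conditions."""
    a = abs(angle_off_wind) % 360
    if a > 180:
        a = 360 - a  # port and starboard tacks are symmetric
    if a < 45:
        return "no-go zone"
    if a == 45:
        return "close-hauled"
    if a < 90:
        return "close reach"
    if a == 90:
        return "beam reach"
    if a < 180:
        return "broad reach"
    return "running downwind"

print(point_of_sail(30))   # no-go zone
print(point_of_sail(110))  # broad reach
print(point_of_sail(180))  # running downwind
```

Folding all angles onto 0–180° reflects the fact that the named points of sail are the same on either tack.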
Sailing craft, such as sailboats and iceboats, cannot sail directly into the wind, nor on a course that is too close to the direction from which the wind is blowing. The range of directions into which a sailing craft cannot sail is called the "no-go" zone. In the no-go zone the craft's sails cease producing enough drive to maintain way or forward momentum; therefore, the sailing craft slows down towards a stop and steering becomes progressively less effective at controlling the direction of travel. The span of the no-go zone varies among sailing craft, depending on the design of the sailing craft, its rig, and its sails, as well as on the wind strength and, for boats, the sea state. Depending on the sailing craft and the conditions, the span of the no-go zone may be from 30 to 50 degrees either side of the wind, or equivalently a 60- to 100-degree area centered on the wind direction. A sailing craft is said to be "in irons" if it is stopped with its sails unable to generate power in the no-go zone. If the craft tacks too slowly, or otherwise loses forward motion while heading into the wind, the craft will coast to a stop. This is also known as being "taken aback," especially on a square-rigged vessel whose sails can be blown back against the masts, while tacking. A sailing craft is said to be sailing close-hauled (also called beating or working to windward) when its sails are trimmed in tightly, are acting substantially like a wing, and the craft's course is as close to the wind as allows the sail(s) to generate maximum lift. This point of sail lets the sailing craft travel diagonally to the wind direction, or "upwind". Sailing to windward close-hauled and tacking is called "beating". On the last tack it is possible to "fetch" to the windward or weather mark. A fetch is sailing close hauled upwind to a mark without needing to tack. 
The smaller the angle between the direction of the true wind and the course of the sailing craft, the higher the craft is said to point. A craft that can point higher (when it is as close-hauled as possible) is said to be more weatherly. When the wind is coming from the side of the sailing craft, this is called reaching. A "beam reach" is when the true wind is at a right angle to the sailing craft. A "close reach" is a course closer to the true wind than a beam reach but below close-hauled; i.e., any angle between a beam reach and close-hauled. The sails are trimmed in, but not as tight as for a close-hauled course. A "broad reach" is a course further away from the true wind than a beam reach, but above a run. In a broad reach, the wind is coming from behind the sailing craft at an angle. This represents a range of wind angles between beam reach and running downwind. On a sailboat (but not an iceboat) the sails are eased out away from the sailing craft, but not as much as on a run or dead run (downwind run). This is the furthest point of sail, until the sails cease acting substantially like a wing. On this point of sail (also called running before the wind), the true wind is coming from directly behind the sailing craft. In this mode, the sails act in a manner substantially like a parachute. When running, the mainsail of a fore-and-aft rigged vessel may be eased out as far as it will go. Whereupon, the jib will collapse because the mainsail blocks its wind, and must either be lowered and replaced by a spinnaker, or set instead on the windward side of the sailing craft. Running with the jib to windward is known as "gull wing", "goose wing", "butterflying", "wing on wing" or "wing and wing". A genoa gull-wings well, especially if stabilized by a whisker pole, which is similar to but lighter than a spinnaker pole. In light weather, certain square-rigged vessels may set studding sails, sails that extend outwards from the yardarms, to create a larger sail area. 
Sailing craft with lower resistance across the surface (multihulls, land yachts, ice boats) than most displacement monohulls have through the water can improve their velocity made good (VMG) downwind by sailing on a broad reach and jibing, as necessary to reach a destination.

Effect on sailing craft

True wind (VT) combines with the sailing craft's velocity (VB) to become the apparent wind velocity (VA): the air velocity experienced by instrumentation or crew on a moving sailing craft. Apparent wind velocity provides the motive power for the sails on any given point of sail. It varies from being the true wind velocity of a stopped craft in irons in the no-go zone, to being faster than the true wind speed as the sailing craft's velocity adds to the true wind speed on a reach, to diminishing towards zero as a sailing craft sails dead downwind.

[Figure: Effect of apparent wind on sailing craft at three points of sail. Sailing craft A is close-hauled; craft B is on a beam reach; craft C is on a broad reach. Boat velocity (in black) generates an equal and opposite apparent wind component (not shown), which adds to the true wind to become the apparent wind.]

[Figure: Apparent wind on an iceboat. As the iceboat sails further from the wind, the apparent wind increases slightly and the boat speed is highest on the broad reach. The sail is sheeted in for all three points of sail.]

The speed of sailboats through the water is limited by the resistance that results from hull drag in the water. Ice boats typically have the least resistance to forward motion of any sailing craft; consequently, a sailboat experiences a wider range of apparent wind angles than does an ice boat, whose speed is typically great enough to have the apparent wind coming from a few degrees to one side of its course, necessitating sailing with the sail sheeted in for most points of sail.
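The vector relationship between true wind, boat velocity, and apparent wind lends itself to a short numerical sketch. The function below is an illustration, not from the article: it assumes the boat travels straight along the x-axis and that wind angles are measured off the bow as the direction the wind comes from.

```python
import math

def apparent_wind(true_speed: float, true_angle_deg: float,
                  boat_speed: float) -> tuple:
    """Apparent wind (speed, angle off the bow) on a boat moving
    straight ahead, given the true wind speed and the angle the true
    wind comes FROM (0 = head to wind, 90 = abeam, 180 = astern)."""
    rad = math.radians(true_angle_deg)
    # Air velocity in the ground frame: wind *from* an angle blows the
    # opposite way, hence the minus signs.
    air_vx = -true_speed * math.cos(rad)
    air_vy = -true_speed * math.sin(rad)
    # Subtract the boat's velocity (along +x) to get airflow in the boat frame.
    rel_vx = air_vx - boat_speed
    rel_vy = air_vy
    speed = math.hypot(rel_vx, rel_vy)
    toward = math.degrees(math.atan2(rel_vy, rel_vx))
    angle_from = (toward + 180) % 360   # direction the flow comes from
    if angle_from > 180:
        angle_from = 360 - angle_from   # fold onto 0-180 off the bow
    return speed, angle_from

# Beam reach: 10 kn true wind at 90 degrees, boat making 5 kn.
speed, angle = apparent_wind(10, 90, 5)
print(f"{speed:.1f} kn from {angle:.0f} degrees off the bow")
# The apparent wind is stronger than the true wind and has moved
# forward of the beam, as the text describes.
```

Running the same function with a true wind angle of 0° gives 15 kn dead ahead (boat speed adds to the wind), and with 180° it gives 5 kn astern (boat speed subtracts), matching the behavior described in the text.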
On conventional sail boats, the sails are set to create lift for those points of sail where it's possible to align the leading edge of the sail with the apparent wind. For a sailboat, point of sail significantly affects the lateral force to which the boat is subjected. The higher the boat points into the wind, the stronger the lateral force, which results in both increased leeway and heeling. Leeway, the effect of the boat moving sideways through the water, can be counteracted by a keel or other underwater foils, including daggerboard, centerboard, skeg and rudder. Lateral force also induces heeling in a sailboat, which is resisted by the shape and configuration of the hull (or hulls, in the case of catamarans) and the weight of ballast, and can be further resisted by the weight of the crew. As the boat points off the wind, lateral force and the forces required to resist it become reduced. On ice boats and sand yachts, lateral forces are countered by the lateral resistance of the blades on ice or of the wheels on sand, and of their distance apart, which generally prevents heeling.
Whenever you’re watching a baseball game on TV or in the stands, it’s common to hear someone exclaim that a hitter had a “great knock” or a “base knock”. This may sound like a weird term to be used in baseball, but it turns out this term has been used for a long time.

What is a knock in baseball? A “knock”, sometimes called a “base knock”, is another way to describe a well-hit single in baseball. When a player hits a knock, there is no doubt they are safely making it to first base.

If you look for baseball’s definition of a knock on other websites, like Wikipedia or SportsLingo, they mention that a knock is simply another word for a hit. But in my experience, a knock in baseball is a way to describe a batter who hit the ball well and got a single. In addition to describing a base hit, the term “knock” has a couple of other variations that are used in the game of baseball to help describe what type of knock the batter hit. I’ll also cover those variations in this article.

What is a Base Knock in Baseball?

Baseball is full of lingo, and some of it may sound weird if you’re hearing it for the first time. When people hear the term “knock” they typically think of someone knocking on a door, but baseball uses a different meaning. In baseball, “base knock” refers to a player getting a base hit. A base knock is most commonly used to describe a single that was hit well, but it can also be paired with additional words to give more context to a play.
In my experience, there are four main ways someone can use the word “knock” to describe a play:
- When a player hits a ball well and there is no doubt the play will result in a single, the player has hit a “base knock”
- Whenever a player hits for extra bases, you can put the number of bases earned in front of the phrase “base knock” to more accurately describe the base hit
- You could put the number of runs scored before the phrase “base knock” to describe how many RBIs the batter received
- You could put the number of outs in the inning to describe how many outs there were when a player hit a base knock

Let’s cover each of those scenarios in detail.

“Base Knock” Refers to a Single

The most basic use of the phrase “base knock” is to describe a player hitting a single. When fans, coaches, and players use this phrase, they are not just referring to any type of base hit. They are referring to a batter who made solid contact with the pitch and there was no doubt the batter was getting a single. Some examples of the way players would use this phrase during a game would be “Nice knock!”, “Good base knock”, and “That’s a solid knock”. Below are two examples of a base knock in baseball. What makes these hits a base knock is how the batters squared up the ball and there was no doubt they were getting on first base.

Put the Number of Bases Earned Before the Phrase “Base Knock”

For more context on when a base knock refers to an extra-base hit, it’s common to put the number of bases earned before the phrase “base knock”.

“Two-Base Knock” Refers to a Double

The most common type of extra-base hit is a double. A majority of the time a player hits a double, they made solid contact with the ball. When a player has a great hit and reaches second base with no errors committed by the defense, it’s common to say that this player hit a “two-base knock”.
There are some other ways to say that a player got a double in baseball, but if you hear that a player hit a two-base knock, you know that means they hit a great-looking double. “Three-Base Knock” Refers to a Triple Similar to how there are multiple ways to say that a baseball player hit a double, there are also multiple ways to say that a player hit a triple. One of those ways is to say that a player hit a three-base knock. In baseball, a “three-base knock” refers to a player who hit the ball well and earned a triple. If the ball is hit well enough, the player may not need to slide into third base to earn a triple. But whether or not a player slides into third, they still earned themselves a nice three-base knock. The “Four-Base Knock” In my baseball experience, I’ve never actually heard the term “four-base knock”, but if we’re following a similar pattern with a double and a triple, it makes sense that a home run can be called a four-base knock. In baseball, a four-base knock refers to a player who hit a home run. To be called a four-base knock, the batter could either hit a fair ball over the fence or the batter could hit an in-the-park home run. In baseball, there are a lot of different ways to refer to a hit as a home run. In fact, if you look at this list of nicknames for home runs on Baseball Reference, you’ll also see that one of the nicknames includes “four-base knock”. So even though I can’t recall hearing this term to describe a home run, a majority of baseball players should understand that a four-base knock is another way of saying home run. Put the Number of Runs Scored Before the Phrase “Base Knock” When a batter gets a base hit, they may end up driving in some runs. Whenever a batter drives in runs with a well-hit single, players tend to say how many runs were scored while still referring to the hit as a base knock. At most, three RBIs will score on a base knock, but a three-run base knock almost never happens.
For a three-run base knock to occur, a player needs to hit a single with the bases loaded and drive in all of those runs. In these scenarios, a batter has hit the ball well enough to turn that play into a double so it wouldn’t be considered a base knock, but it would be considered a wonderful extra-base hit. For this article, we’ll just cover the scenarios of a one-run base knock and a two-run base knock. A One-Run Base Knock Scores One Run When a batter makes solid contact with the ball for a single and they drive in one run, it’s common for players, fans, and coaches to say this batter hit a “one-run base knock”. An example video is below of a one-run base knock. In this video, Adam Duvall made solid contact with the pitch that ended up being placed perfectly in the hole between the shortstop and third baseman. This base knock resulted in one run scoring so it’s referred to as a one-run base knock. A Two-Run Base Knock Scores Two Runs If a batter makes solid contact with the ball and two runs scored, it’s common to refer to that play as a “two-run base knock”. The video below is a great example of a two-run base knock. This video should start around the 0:48 second mark, which is right before the player hits a base knock to drive in the runners on second and third base. Put the Number of Outs Before the Phrase “Base Knock” Another way to give more context to a base knock is by referencing how many outs there were when the batter got a base hit. This can be used to describe the importance of the base knock or just to give someone more of an idea of what the situation was like for the hitter when they earned their base knock. One-Out Base Knock is a Single with One Out A great hit can happen at any time during a game, but sometimes it’s helpful to let someone know there was one out when the player was up to bat. In baseball, a one-out base knock is a way to describe a batter who got a well-hit single while there was one out in the inning. 
Two-Out Base Knock is a Single with Two Outs When people hear that there are two outs in an inning, they immediately understand that the batter has a little bit more pressure to get on base. So it helps provide some context to a play when someone describes it as a two-out base knock. In baseball, a two-out base knock refers to a batter who got a well-hit single while there were two outs in the inning. Below is a great example of a batter hitting a two-out base knock: Other Baseball Terminology That Includes “Knock” The term “knock” actually shows up in a few other areas of baseball. And even though a knock typically refers to a base hit, it can sometimes mean something else when it’s used in a different context. Let’s cover a couple of those phrases where “knock” is also used in baseball. In baseball, “knock in” refers to a base hit that resulted in an RBI. In baseball, “knock off” is another way to say that one team beat another team. A lot of times this term is used in a tournament where one team can “knock off” another team from the tournament bracket. Knocked the Cover Off of The Ball When a batter hits the ball with a lot of speed, the batter is said to have “knocked the cover off of the ball”. Knock it Down “Knock it down” in baseball refers to an infielder stopping a line drive or a hard-hit ground ball from leaving the infield. Sometimes, the infielder is even able to throw out the runner at first. Knocked it Out of the Park “Knocked it out of the park” is another phrase for a home run.
1. The following signals will be made with the sword when drawn, otherwise with the hand. Officers using signals should, as far as possible, face the same way as those to whom the signals are made.

Signal – To indicate:

i. Arm swung from rear to front below the shoulder, finishing with the sword pointing to the front. – "Advance" or "Forward".

ii. Sword hilt raised in line with the shoulder, elbow bent and close to the side, sword perpendicular. – "Walk" or "Quick time".

iii. Sword hilt moved up and down between thigh and shoulder, forearm pointing in such a direction that the movement can be seen by those for whom the signal is intended, sword perpendicular. – "Trot" or "Double".

iv. Sword swung from rear to front below the shoulder, motion repeated several times. – "Gallop".

v. Sword raised perpendicularly to the full extent above the head. – "Halt".

vi. Sword pointed in the required position (to be followed by the signal for "Forward"). – "Incline".

vii. Slow circular movement of the extended sword in line with the shoulder in the required direction. – "Shoulders". Note. – This signal is also used for "Head of column change direction".

viii. Sword held perpendicularly above the head, raised and lowered frequently. – "Line of troop columns"; or, where a troop is alone, "Troop column"; and for light machine-gun troops, "Column of sections".

ix. Sword waved from the head – to the right for a half-right wheel and to the left for a half-left wheel – to a position in line with the shoulder pointing in the direction required. – "Troops half-right" or "Troops half-left".

x. Above signal repeated twice. – "Troops right or left wheel".

xi. Sword circled several times at its full extent above the head, left to right for a right about wheel, and right to left for a left about wheel. – "Troops right or left about wheel".

xii. Sword waved horizontally from right to left and back again as though cutting, finishing with the delivery of a point to the front. – 1. "Form squadron column" (from line). 2. "Form line of squadron columns" (from line, mass, regimental column of troops, or echelon of squadron columns). 3. "Form line": (a) from line of squadron columns; (b) from line of troop columns; (c) from column of squadrons.

xiii. Sword held extended above the head as for "Halt", and point at once moved rapidly right and left. – "Rally" or "Mass" if in close order; "Close" if in extended order or dispersed. Note. – This signal denotes "close on the centre". If desired to close on a flank, finish the signal by pointing towards that flank.

xiv. Two or three slight movements of the open hand towards the ground (palm downwards), sword transferred to the bridle hand. – "Dismount" or "Lie down".

xv. Two or three slight movements of the open hand upwards (palm uppermost), sword transferred to the bridle hand. – "Mount".

xvi. Sword raised as for "Halt" and then pointed to the ground. – "For action dismount".

xvii. Sword at full extent over the head and waved a few times slowly from side to side, bringing the point down at each wave on a level with the shoulder. – "Extend". Note. – This signal denotes extensions to both flanks. If the extension is to be made to the right, finish the signal by pointing to the right; if to the left, by pointing to the left. Extensions are usually 5 or multiples of 5 yards. If an extension other than 5 yards is required, the interval will be given by word of mouth.

xviii. Sword waved horizontally from right to left and back again as though cutting, finishing with the delivery of a point vertically above the head. – "Artillery (or aircraft) formation".

xix. Sword swung from rear to front above the shoulder. – "Reinforce".

xx. Rifle held up at full extent of arm, muzzle uppermost. – "No enemy in sight".

xxi. Rifle held above the head, at full extent of the arm and parallel with the ground, muzzle pointing to the front. – "Enemy in sight in small numbers".

xxii. As for "Enemy in small numbers", but the weapon raised and lowered frequently. – "Enemy in sight in large numbers".

xxiii. Both arms held out horizontally in line with the shoulders. – "Bring up the led horses or transport vehicles".

xxiv. Both arms held above the head and hands waved. – "Enemy aircraft in sight".

Signals such as "Halt" or "Incline" should be maintained, and signals of movement, such as "Advance" or "Shoulders", should be repeated until it is clear that they are understood. Signals should not be acted upon until they have been completed.

2. The following signals are used by ground scouts:–

Impassable ground. – Halt and raise the arm (without weapon) perpendicularly, transferring the weapon, if any, to the bridle hand; then ride towards whatever spot appears practicable, pointing towards it with the hand. If the ground within view in front and on either side is quite impracticable, a scout will raise his right arm and ride to the squadron to report.

3. The whistle will be used:–

i. To draw attention to a signal or order about to be given – a short blast. On a short blast being blown on the whistle when cavalry are in action dismounted, men will stop firing momentarily, if necessary, and look towards their leader, and remain looking at him until he has completed the signal and dropped his hand, or until the order is understood. If men are on the move they will continue the movement, looking towards the leader.

ii. To denote "Cease firing" – one long blast.

iii. To denote "Rally" – a succession of short blasts.

iv. To denote "Alarm" or "Enemy aircraft in sight" – a succession of long and short blasts. Troops will turn out from camp or bivouac and fall in at, or occupy, previously arranged positions. When on the march, troops will either get ready to fire, open out or take cover, according to orders in force.

To denote "Attack ended" (aircraft) – two long blasts on the whistle, repeated at intervals of a second.
On this, all troops resume previous formations. Troops who have been firing will recharge their magazines before moving off.

Distances between mounted troops are measured from the tail of a horse to the head of the one behind it. Between dismounted troops they are measured from heel to heel.

Line (squadron) . . . Troop leaders to front rank, and front rank to rear rank . . . 1 horse-length (on foot 3 paces).
Line (open order, on parade for inspection purposes) . . . Between front and rear rank . . . 3 horse-lengths.
Column of sections or half-sections . . . Troop leaders to front rank, and front rank to rear rank . . . 1/2 horse-length (on foot 3 paces).
Squadron column and squadron half-column . . . Troop leaders to front rank, and front rank to rear rank . . . 1/2 horse-length (on foot 3 paces).
Echelon . . . Between units in echelon . . . Frontage of rear unit plus regulation interval in line, less the depth of the preceding unit.

N.B. – This allows of a correct echelon being formed to a flank by a wheel of 90 degrees.

Intervals between mounted troops are measured from knee to knee. Including intervals between files, a mounted man in the ranks occupies a frontage of slightly less than one yard. Intervals between dismounted men are measured from elbow to elbow. Each dismounted man is allotted a lateral space of 24 inches, but a 2-inch space, elbow to elbow, should be aimed at.

Line . . . Between men (mounted) . . . 6 inches.
Line . . . Between squadrons . . . 10 yards.
Line . . . Between regiments . . . 20 yards, exclusive of band and staff.
Line . . . Between brigades . . . 20 yards.
Line of squadron columns . . . Between squadrons . . . Frontage of all the rear troops of one squadron, plus 10 yards.
Mass . . . Between squadrons . . . 5 yards.
Any line of columns . . . Between regiments or brigades . . . Deploying interval plus 20 yards.
Ranking past by sections . . . Knee to knee . . . 1 horse-length.

1. The command or signal "March", unless preceded by some other command, means "Trot". The commands for the three paces are:– From the halt. – On the move.
2. The following table shows the regulation paces of drill:–

Pace        Distance covered    Distance covered    Time taken to
            in one hour         in one minute       cover 1/4 mile
Walk        4 miles             117 yds.            3′ 45″
Trot        8 miles             235 yds.            1′ 52″
Canter      10 miles            293 yds.            1′ 30″
Gallop      15 miles            440 yds.            1′ 0″

3. Correctness and evenness of pace are essential in order to preserve cohesion in the unit and to avoid exciting the horses. Sudden changes of pace in the endeavour to correct alignment or distances must be carefully avoided, and corrections should be carried out gradually and quietly. Each man should be on the alert for any slight change of pace. By careful riding he will be able not only to avoid increasing any irregularity of pace, but also to assist in rectifying it.

94. Details of march discipline.

1. Special parades for the purpose of instruction in marching should rarely be necessary except for transport, but advantage should be taken of every opportunity, such as is offered when troops march to and from the manoeuvre ground, of giving instruction in marching both by day and by night.

2. Whenever tactical considerations permit, the following system of marching is suitable:– mount after the halt and trot 15 minutes, then walk 10 minutes, then trot 15 minutes, then lead 10 minutes; halt 10 minutes. This system can be varied according to circumstances. When the horses are being led, they should be kept as close to the edge of the road as possible so as to avoid blocking the traffic. The tendency to string out must be avoided.

3. To enable men to look round their horses and saddles, a short halt should be made about a quarter or half an hour after starting, or as soon as day has broken. Subsequently, halts of 10 minutes' duration should be made before each clock hour. During a long march a halt should usually be made after four hours to water and feed the horses. A regiment should start and halt by squadrons, by whistle or signal, or by both.
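As a quick cross-check, the columns of the regulation paces table are mutually consistent, given 1 mile = 1,760 yards. The short sketch below (Python, used purely for illustration; it is not part of the manual) recomputes the yards-per-minute and quarter-mile columns from the miles-per-hour figures:

```python
# Cross-check of the regulation-paces table:
# yards per minute and time per 1/4 mile both follow from miles per hour.
MILE_YDS = 1760
paces_mph = {"Walk": 4, "Trot": 8, "Canter": 10, "Gallop": 15}

for pace, mph in paces_mph.items():
    yds_per_min = mph * MILE_YDS / 60              # distance covered in one minute
    secs_quarter_mile = (MILE_YDS / 4) / yds_per_min * 60  # time to cover 1/4 mile
    mins, secs = divmod(secs_quarter_mile, 60)
    print(f"{pace}: {yds_per_min:.0f} yds/min, {int(mins)}' {secs:.0f}\" per 1/4 mile")
```

Running this reproduces the table's 117/235/293/440 yds per minute and the 3′ 45″ to 1′ 0″ quarter-mile times.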
The regiment as a whole should be warned by whistle one minute before each halt or start. Troops will march at attention when the warning signal to halt is given. They will wait for orders from troop commanders before falling out after a halt is signalled. When troops halt, the commander should give out at once the duration of the halt, so that the men may know exactly at what time they must be ready to march again. On roads, horses' heads should be turned towards the centre of the road, and the horses backed into the side of the road. During long halts the horses should be off-saddled and their backs hand-rubbed. During short halts girths should be loosened and saddles eased without further orders.

4. All commands to "Walk", "Trot", and "Halt" must be passed either verbally or by signal down the whole length of the column. Sufficient time must always be allowed for a command to be passed down a column before the change is made. Opening out, which frequently occurs after a halt owing to units in rear not mounting and moving off at the same time as the head of the column, should be prevented by units decreasing their distances before dismounting, and by each squadron posting a special sentry to report when the units in front are mounting. It is advisable to walk for a considerable distance after mounting and before giving the order to trot.

5. To avoid tiring the horses, the men should at all times sit square and steady in the saddle. No man should quit his stirrups, and when trotting the men should rise in them, changing the diagonal every half mile or so. The correct places in the ranks should always be maintained. When dismounted, the rifle must never be left on the saddle.

6. Units moving on a road will march well into the side of the road in order not to impede traffic, the side of the road depending upon the custom of the country they are in.
The directing flank will be in accordance with the rule of the road and, during halts, men will fall out on the same side of the road as they are marching.
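The hourly marching cycle described under march discipline (trot 15 minutes, walk 10, trot 15, lead 10, halt 10) implies a rate of march that is easy to work out from the paces table. The sketch below assumes horses are led at the 4-mph walking pace — an assumption, since the manual does not state a leading pace:

```python
# One hour of the suggested marching cycle, as (minutes, speed in mph).
# Assumption: leading is done at the 4-mph walking pace (not stated in the manual).
WALK_MPH, TROT_MPH = 4, 8

cycle = [
    (15, TROT_MPH),
    (10, WALK_MPH),
    (15, TROT_MPH),
    (10, WALK_MPH),  # leading, assumed at walking pace
    (10, 0),         # halt
]

total_minutes = sum(m for m, _ in cycle)
miles = sum(m / 60 * mph for m, mph in cycle)
print(f"{total_minutes} minutes, about {miles:.1f} miles covered")  # → 60 minutes, about 5.3 miles covered
```

So the system yields roughly five and a third miles per hour, including the halt — a sustainable rate compared with the 8-mph trot held continuously.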
Many characters not in the repertoire of standard ASCII will be useful--even necessary--for Wiki pages, especially the international pages. This page contains my recommendations for which characters are safe to use and how to use them.

There are three ways to enter a non-ASCII character into a Wiki page:
- Enter the character directly from a foreign keyboard, or by cut and paste from a "character map" type application, or by some special means provided by the operating system or text editing application. The web server should then be configured to announce which 8-bit character set is being used.
- Use an HTML named character entity reference like &agrave;. This is the most reliable method, and is unambiguous even when the server does not announce the use of any special character set, and even when the character does not display properly on some browsers.
- Use an HTML numeric character entity reference like &#161;. This is not recommended, because many browsers incorrectly interpret these as references to the native character set. It is, however, the only way to enter Unicode values for which there is no named entity, such as the /Turkish letters. Note that because the code points 128 to 159 are unused in both ISO-8859-1 and Unicode, character references in that range such as &#131; are illegal and ambiguous, though they are commonly used by many web sites.

Generally speaking, Western European languages such as Spanish, French, and German pose few problems. For specific details about other languages, see: /Turkish (more will be added to this list as contributors in other languages appear).

The following extended ASCII characters are safe for use in all Wiki pages. The table below shows the character itself, lists the code for each character in hexadecimal and decimal, shows the HTML entity name, and gives the common name of the character.

| ||00A0||0160||&nbsp;||no-break space|
|¤||00A4||0164||&curren;||intl. currency sign|
|«||00AB||0171||&laquo;||left double-angle quote|
|®||00AE||0174||&reg;||registered trademark sign|
|¶||00B6||0182||&para;||pilcrow (paragraph) sign|
|·||00B7||0183||&middot;||middle dot (Georgian comma)|
|»||00BB||0187||&raquo;||right double-angle quote|
|ß||00DF||0223||&szlig;||sharp s (ess-zed)|

These characters are a subset of the most common extended ASCII character set in use on the Internet, ISO 8859-1. Wikipedia pages are identified by the server as containing ISO-8859-1 text. The characters above are a subset selected to improve compatibility with other machines. For example, the Apple Macintosh is in common use on the Internet, is not limited to any specific language, and its native character set (which is not ISO-8859-1) contains many of the common international characters. Many Macintosh browsers will correctly translate ISO text into the native character set, as long as the characters used are available. So the table above is the subset of ISO-8859-1 characters that are also available in the native Macintosh character set.

Microsoft Windows' standard code page 1252 is a superset of ISO-8859-1, so these characters will be readable as is on Windows machines. The most common Latin character sets other than ISO-8859-1 are MS-DOS (pre-Windows) code page 437, Macintosh Roman, and other ISO sets such as ISO-8859-2. The number of pre-Windows MS-DOS machines with web browsers is small, and they are often dedicated-purpose machines that wouldn't be using Wikipedia anyway, so it is reasonably safe to sacrifice compatibility with them for the sake of needed foreign characters. Other ISO sets are generally intended to be read by other browsers using those same sets in the same country, and so those pages should use a language-specific set.

These characters can be entered either as HTML named character entity references such as &agrave;, directly from foreign keyboards, or with whatever facilities are available to the Wiki author for entering these characters.
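The named-reference mechanism described above can be demonstrated with any HTML-aware tool. Here is a minimal sketch using Python's standard html module (Python is used purely for illustration and is not something this page otherwise assumes); the names are drawn from the safe table and text above:

```python
import html

# A named character entity reference is unambiguous: the name itself
# identifies the Unicode character, regardless of the page's declared
# character set or how any particular browser renders it.
for ref in ("&agrave;", "&curren;", "&laquo;", "&szlig;"):
    ch = html.unescape(ref)
    print(f"{ref} -> {ch} (U+{ord(ch):04X})")
```

This prints the à, ¤, «, and ß characters with their code points U+00E0, U+00A4, U+00AB, and U+00DF, matching the hexadecimal column of the table.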
For example, Wiki authors using Windows machines can enter these by holding down the Alt key while typing the 4-digit decimal code of the character on the numeric pad of the keyboard. It is important that all 4 digits (including the leading 0) be typed; typing a 3-digit code will enter characters from the obsolete code page 437. Wiki authors using Macintosh machines should take care either to use special facilities to enter these in ISO-8859-1 form rather than in the native character set, or else to use HTML named character entity references.

Note that some Windows users may have trouble with versions of Microsoft Internet Explorer that use "Alt-Left-Arrow" and "Alt-Right-Arrow" for page movement. These will interfere with entering codes that contain the digits 4 and 6. Use HTML named character entity references in this case.

The characters from the table above can be used directly as 8-bit characters in all Wiki pages, and are sufficient for all pages primarily in English, Spanish, French, German, and languages that require no more special characters than those (such as Catalan). Despite their general safety, at this time it is not possible to use these characters in Wiki page titles in the English Wikipedia, though some of the International Wikipedias are configured to allow them.

Note especially what is missing here from the full ISO-8859-1 set: the broken bar (0166 = ¦), soft hyphen (0173), superscript digits (0178 = ², 0179 = ³), vulgar fractions (0188 = ¼, 0189 = ½, 0190 = ¾), Old English eth and thorn (0208 = Ð, 0240 = ð, 0222 = Þ, 0254 = þ), and multiply sign (0215 = ×). These should be considered unsafe (and adequate substitutes are available for most of them).

Special care should be taken with characters that do exist in the native character set of popular machines but not in the above set. These are not safe, even though they may display correctly to you when you use them.
Characters from Windows code page 1252 not in ISO-8859-1 include the euro sign (€), dagger and double dagger (†, ‡), bullet (•), trade mark sign (™), typeset-style punctuation (see below), per mille sign (‰), some Eastern European caron-accented letters, and the oe ligatures. Characters from the Macintosh Roman set not in ISO-8859-1 include dagger and double dagger, bullet, trade mark sign, a few math symbols such as infinity (∞) and not equal (≠), a few commonly-used Greek letters such as pi (π), ligatures like oe and fl, typeset-style punctuation, per mille sign, and lone accents such as the breve, ogonek, and caron.

HTML 4.0 defines named character entities for some Latin characters not in ISO-8859-1 that are used by popular languages, such as the OE ligature (&OElig;, &oelig;), uppercase Y with diaeresis (&Yuml;), and some Eastern European accented characters like &scaron;. These are also unsafe, though if they are entered as HTML named character entity references, they may display on some machines.

In short, don't assume that it is safe to use a special character just because it looks correct on your machine. Use the ones from the table above, and read and understand how to use the others shown below.

Possibly usable non-ISO characters

Some characters not listed as safe above may still be usable when entered as named HTML character entity references, because web browsers will recognize them and render them correctly, perhaps by switching to alternate fonts as needed. All of these should be considered less safe to use than those above, but only in the sense that they may not display properly; in the form of HTML character entity references they are unambiguous, and preserve data integrity. For many of these, adequate substitutes and workarounds are available, and they should be used when the value of making the text available to users of older computers and software exceeds the value of good presentation to those with newer software (in the judgment of the author or editor).
Absent from the ISO-8859-1 character set, but commonly used and present in both the Macintosh Roman and Windows code page 1252 character sets, are proper English quotation marks and dashes. These can be entered as character entity references, and should appear correctly on most machines running recent software. Even on ISO-based machines such as Unix/X, browsers should be able to interpret these references and make appropriate substitutions using plain ASCII straight quotes and hyphens (Mozilla does this correctly, for example). These references were not present in older versions of HTML, so they may not be recognized by older software. Since using these characters maintains data integrity even on those machines that may not display them correctly, it should be considered safe to use them unless proper display on old software is critical. German "low-9" quotation marks are a similar case, but are less commonly translated by browsing software, and so are not quite as safe. The table below shows these characters next to a capital letter "O" for better visibility:

‘O  &lsquo;  left single quote       —O  &mdash;  em dash
’O  &rsquo;  right single quote      –O  &ndash;  en dash
“O  &ldquo;  left double quote       ‚O  &sbquo;  single low-9 quote
”O  &rdquo;  right double quote      „O  &bdquo;  double low-9 quote

Many web sites targeted at a Windows-using audience use code page 1252 references for these characters: for example, using &#151; for the em dash. This is not a recommended practice. To ensure future data integrity and maximum compatibility, recode these as named references such as &mdash;.

Greek letters and math symbols

Web standards for writing about mathematics are very recent (in fact MathML 2.0 was just released in February of 2001), so many browsers made before these standards were in place try to compensate by at least allowing characters commonly used in mathematics, including most of the Greek alphabet. These are necessarily entered as character entity references. Browsers often render these by switching to a "Symbol" font or something similar.
Upper- and lowercase Greek letters simply use their full names for character entities. These should, of course, only be used for occasional Greek letters in primarily-Latin text. Actual Greek-language text should be written using a Greek character set to avoid bloated files and slow response. Here are a few samples:

α  &alpha;    Γ  &Gamma;
β  &beta;     Λ  &Lambda;
γ  &gamma;    Σ  &Sigma;
π  &pi;       Π  &Pi;
σ  &sigma;    Ω  &Omega;
ς  &sigmaf;   ("final" sigma, lowercase only)

Other common math symbols:

≠  &ne;       ′  &prime;
≤  &le;       ″  &Prime;
≥  &ge;       ∂  &part;
≡  &equiv;    ∫  &int;
≈  &asymp;    ∑  &sum;
∞  &infin;    ∏  &prod;
√  &radic;

Many of the symbols in the Windows "Symbol" font commonly used for rendering mathematics (such as the expandable bracket parts) are not present on most other machines, and not even present in Unicode 3.1 or as HTML named entities (though they are planned for Unicode 3.2). These are used by products such as TtH to render equations. You should be aware that if you use these symbols, you are restricting your audience to Windows users (whether or not that's acceptable is a judgment you will have to make as an author).

Other common symbols

Some characters such as the bullet, euro currency sign, and trade mark sign are special cases. They are likely to be understood and rendered in some way by many browsers. Because they are important for international trade, many computers specifically add them to fonts at some non-standard location and render them when requested, or else render them in special ways that don't require them to be present in a font.
See below for how your browser renders these:

•  &bull;   bullet
€  &euro;   euro currency sign
™  &trade;  trade mark sign

Other somewhat less commonly used symbols include these:

†  &dagger;  dagger            ♠  &spades;  black spade suit
‡  &Dagger;  double dagger     ♣  &clubs;   black club suit
◊  &loz;     lozenge           ♥  &hearts;  black heart suit
‰  &permil;  per mille sign    ♦  &diams;   black diamond suit
←  &larr;    leftward arrow    ‹  &lsaquo;  single left-pointing angle quote
↑  &uarr;    upward arrow      ›  &rsaquo;  single right-pointing angle quote
→  &rarr;    rightward arrow
↓  &darr;    downward arrow

These should be considered unsafe to use except perhaps on pages intended for a specific audience likely to have very up-to-date software on popular machines.

The Unicode character encoding UCS-4 is the official character encoding of HTML 4.0. Many browsers, though, are only capable of displaying a small subset of the full UCS-4 repertoire. For example, the codes &#1049; &#1511; &#1605; display on your browser as Й, ק, and م, which ideally look like the Cyrillic letter "Short I", the Hebrew letter "Qof", and the Arabic letter "Meem", respectively. It is unlikely that your computer has all of those fonts and will display them all correctly, though it may display a subset of them. Because they are encoded according to the standard, though, they will display correctly on any system that is compliant and does have the characters available. Numeric character entity references are the only way to enter these characters into a Wiki page at present. Note that encoding them using decimal rather than hexadecimal (e.g. &#1049; instead of &#x419;) will increase the number of browsers on which they will work.

See also Unicode and HTML for character entity tables.
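The decimal-versus-hexadecimal distinction is purely notational: both forms select the same Unicode code point. A small sketch (again using Python's standard html module only for illustration) shows both notations for the three example letters mentioned above:

```python
import html

# &#1049; (decimal) and &#x419; (hexadecimal) denote the same code point,
# U+0419, CYRILLIC CAPITAL LETTER SHORT I.
assert html.unescape("&#1049;") == html.unescape("&#x419;") == chr(0x0419)

# Producing both notations for Short I, Qof, and Meem:
for cp in (0x0419, 0x05E7, 0x0645):
    print(f"U+{cp:04X} {chr(cp)}  decimal: &#{cp};  hex: &#x{cp:X};")
```

Note that 0x0419 is decimal 1049, 0x05E7 is 1511, and 0x0645 is 1605 — the same values as the references in the paragraph above.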
Imagine looking down the length of a 100-foot garden hose. Now imagine gathering up that hose and fitting it inside your horse's belly. One end of the tube is at his mouth, and the other is at his tail, with the majority balled up in his abdominal cavity. You've just pictured a rough image of your horse's digestive system. One hundred feet of tube through which everything you feed him travels, with digestion and absorption processes all along the path. That's a lot of tube. And, when things go right, the system is very efficient. However, so many things happen in those 100 feet, it's not too surprising that there are quite a few potential problems. There are also many rules in feeding horses: feed small meals often; feed only high-quality hay; make any feeding changes gradually; never feed cattle feed to horses, etc. Why does feeding your horse seem so complicated, and why so many rules? The answer lies in the architecture of the horse's gut: how his unique digestive system is designed. I've always thought understanding how your horse's digestive system works is more important than trying to memorize all those rules. If you understand the architecture of the gut and how digestion and absorption of nutrients work in horses, you don't need to memorize anything; it all just makes logical sense. Then, when you're faced with a new situation, you don't have to try to remember the appropriate rule, you can just think of what makes sense. That will help you make the best choices in what and how to feed your horse. The horse's gut is fairly unique compared to other livestock species. The horse is classified as a nonruminant herbivore: an animal that eats plants and is not a ruminant. Several livestock species are ruminant herbivores, including cattle, sheep and goats. Ruminants have stomachs that are divided into compartments, whereas horses have simple stomachs with only one compartment.
Animals with simple stomachs are classified as monogastrics, including horses, pigs, dogs, cats and humans. With those basic differences defined, let's look at the horse's gut. We're going to start at the beginning, follow it through to the back end and examine what goes on in each section.

The Upper Gut

The gut starts at the mouth, which the horse uses to take in feedstuffs and chew. In horses, a unique aspect of the mouth is that the physical act of chewing stimulates the production of saliva, which is not necessarily the case in other species. To understand the importance of this, think of saliva as lubrication. If your horse doesn't chew adequately, there will be larger chunks of feed and less lubrication (saliva) to help the feed flow smoothly through the digestive tract. Providing regular dental care is the first step horse owners can take to help ensure adequate chewing. This decreases the risk of digestive tract problems, such as choke, and helps ensure optimal digestion and absorption of nutrients.

The next part of the gut is the esophagus, or throat. The horse's esophagus is unique in how it attaches to the stomach. The attachment is at such an angle and the muscles are so firm that once the digesta passes that point, it's not coming back; it's a one-way trip. The horse normally cannot belch or regurgitate. In fact, if something makes it into the horse's stomach that should not be there, such as a toxic substance, his stomach would rupture before he could ever regurgitate. This is different than in cattle. Cows can belch and "chew their cud" (or ruminate) when partially degraded food moves back up the esophagus from the stomach and is then chewed and swallowed again. This allows them to break down less digestible foods so nutrients are more available farther down the tract, which is one of the reasons cattle are better able than horses to utilize poor quality hay. Now we enter the horse's stomach.
As I mentioned before, the horse has a monogastric stomach, meaning a single compartment or a simple stomach. This single compartment contains primarily digestive enzymes and hydrochloric acid, so feed is degraded by enzymatic digestion. This is also quite different from cattle, as a cow's stomach comprises four compartments, with the largest compartment being the rumen. The rumen is a very large bag, large enough to fill a typical wheelbarrow. It contains billions of microorganisms: bacteria and protozoa. When feed enters the cow's rumen, it is digested (fermented) by the microbes. This accounts for one of the reasons you should feed your horse only products designed specifically for horses and not cattle, because microbes are able to digest and utilize some feed components (and some potentially toxic substances) that digestive enzymes cannot. (For more information, see "Why Cattle Feeds Don't Work," below.)

Another function of microbial fermentation is the digestion of fiber carbohydrates in the diet. Fibers are made of sugars linked together by a bond that requires a microbial enzyme to break. In ruminants, microbes in the rumen break down fibers into volatile fatty acids (VFAs). The VFAs are then absorbed from the small intestine and are an important energy source for the animal. In the horse, these fibers pass through the stomach and small intestine with very little breakdown. This is another reason to feed high-quality hay to your horse. The more fibrous the hay, the less digested it will be in the upper gut (stomach and small intestine) and the fewer nutrients your horse will get out of the hay. Cattle are quite efficient at retrieving nutrients even from fairly poor-quality roughages due to the microbial fermentation in the rumen.

One more interesting difference between the equine and bovine stomach is the rate of passage. In cattle, it can easily take 24 to 36 hours for feedstuffs to pass through the entire stomach.
In horses, digesta usually passes through the stomach within two hours, though it can be as short as 15-20 minutes. The faster digesta moves, the less efficient digestion processes may be.

Moving on, the next part of the horse's gut is the small intestine. This is a tube that is about 3 inches in diameter and 60-70 feet long. As digesta moves through the small intestine, more digestive enzymes are produced, and nutrients are degraded into components that can be absorbed into the bloodstream. In fact, the small intestine is the major site of nutrient absorption: Most if not all of the fat in the diet is digested and absorbed here, soluble carbohydrates (sugars and starch) are primarily digested and absorbed in the small intestine, and it is the only appreciable area of absorption of amino acids from dietary protein. The majority of vitamins and several minerals are also absorbed in the small intestine.

Here again, the rate of passage of digesta through the small intestine is fast, as short as 45 minutes, with a maximum rate of about eight hours. In 10 hours, feed has passed all the way through the stomach and small intestine in the horse. Anything that we can do as horse owners to slow down the rate of passage in the stomach and small intestine can help increase the efficiency of digestion and nutrient absorption. About the only way to do that is to slow down your horse's rate of intake. Feeding management practices such as placing large, round stones in the feed tub can accomplish that goal: your horse has to pick around the stones, slowing down intake.

Why Cattle Feeds Don't Work

Feeding cattle feeds to horses is never a good idea for several reasons. First, horses have different nutritional requirements than cattle, so any feed that is designed for cattle will not specifically meet your horse's needs. Further, the differences in the animals' digestive systems set the scene for ingredient variations that can cause problems for your horse.
Remember, the horse's simple stomach contains primarily digestive enzymes and hydrochloric acid, so feed is degraded by enzymatic digestion rather than the microbial fermentation found in a cow's rumen. This means that cattle can utilize poor quality or highly fibrous feedstuffs much more efficiently than horses. Therefore, cattle feeds often contain ingredients that are good for cattle but provide few nutritional benefits to your horse due to poor digestibility.

Further, cattle feeds sometimes include ingredients that can be detrimental to horses, such as ionophores. Ionophores are antibiotics that have been shown to increase feed efficiency and growth rate in cattle. However, ingested ionophores can be toxic to horses, resulting in damage to the heart, skeletal muscle, kidneys and liver, possibly resulting in death. In fact, even feeding cattle feed that is not supposed to contain ionophores can be risky, because there is no guarantee that a feed labeled for cattle is completely free of ionophores.

Cattle feeds also often contain urea, a source of nonprotein nitrogen. In cattle, the rumen's microbes can take that nitrogen and use it to synthesize protein. The microbial protein is then available as an additional protein source to meet the amino acid requirements of the animal. In horses, there is no appreciable microbial population in the stomach, so the urea is not utilized to form protein. It is converted to ammonia and absorbed in the small intestine. The amount of urea commonly found in sheep or cattle feed is not usually toxic to the horse, but it doesn't serve any function, and the horse must excrete the resulting ammonia through the urinary system. However, if large amounts of urea are ingested by a horse, the high levels of ammonia that are absorbed can be toxic, ultimately resulting in death.

The Horse's Unique Hindgut

At this point, you understand how the horse's upper gut functions and why horses are fed differently than cattle (and other ruminants).
Now let's compare horses to other monogastrics, such as people. Our stomachs and small intestines are similar to those in horses, but do we eat the same way? How much quality time did you spend grazing in the pasture today? I'm guessing none. (I certainly didn't!) So why is grazing not normal for people? Why is it that many horses can stay fat on good-quality hay or pasture alone, and we can't eat enough roughages, such as lettuce and celery, to maintain body weight? Well, we've only discussed half of the gut so far: the upper gut. The answers to these questions lie in the unique structure of the horse's hindgut when it is compared to almost all other monogastric digestive systems.

The horse's hindgut includes the cecum and the large intestine, or colon. The hindgut comprises more than 65 percent of the digestive tract's total capacity. The cecum is a large bag located at the junction of the small and large intestines. It can hold seven to eight gallons, and is full of microorganisms (bacteria and protozoa). When digesta passes into the cecum, it is subject to microbial digestion, or fermentation. Sound familiar? The horse's cecum functions much like the rumen in cattle: a large fermentation vat. This is why the horse is able to derive a great deal of energy from pasture and hay; the microbes in the cecum and colon break down the fiber, and the resulting VFAs are absorbed from the hindgut. Humans and most other monogastrics don't have a functional cecum, and without a significant source of fermentation, little digestion of fiber can occur. In fact, we primarily eat fiber sources to help maintain our digestive tracts; the fiber mostly passes on through and helps keep us "regular."

But now, why again are horses different from cattle, if the cecum functions much like the rumen? Remember, the rumen is part of the stomach and falls before the small intestine, and the cecum lies at the junction of the small and large intestine.
Now, where is the major site of nutrient absorption? The small intestine. Although the fermentation in the cecum is highly efficient, many of the nutrients can't be absorbed there. For instance, the microbes may liberate more nutrients such as protein and amino acids from hay that passed undigested through the upper gut along with the fiber. However, because there is little to no absorption of amino acids from the hindgut, that protein will not be used to help meet the horse's amino acid requirements. Again, feeding high-quality hay and feeds will help maximize digestion in your horse's upper gut as well as help ensure he'll receive adequate nutrients to meet requirements.

Although the microbial fermentation in the horse's hindgut does not yield the same nutritional benefits as in the cow's rumen, it does serve several important functions: VFAs from fermentation of fiber and other carbohydrates are absorbed and are an important source of energy for maintenance or low activity levels. The hindgut is also the major site of water absorption. Some minerals are absorbed from the hindgut, including phosphorus and some electrolytes. The microbes also synthesize several B vitamins, and those vitamins are absorbed from the hindgut.

The hindgut can also be a source of problems for horses, especially when not managed properly. The microbial populations in the cecum and colon are fairly sensitive to pH, and changes in the acidity of the hindgut can have devastating results in the horse, such as colic. This explains why sudden changes in feed can result in colic in horses. For example, when a horse gets into the feed room and eats a large quantity of grain, there will be a sudden influx of undigested sugars and starch from that grain into the hindgut. Under normal conditions of small meals of grain, most of the sugars and starch are digested and absorbed in the upper gut.
But if a horse is allowed to overeat grain or other feedstuffs high in soluble carbohydrates, the sugars and starch can overflow from the upper gut into the hindgut. This causes the microbial population in the hindgut to shift from mostly fiber-fermenting microbes to more starch-fermenting microbes. The starch-fermenting microbes produce excess gas and lactic acid, resulting in a decrease in pH, which overall may lead to colic and possibly laminitis.

Another problem in the hindgut is simply due to the architecture of the tube. At one point, the pelvic flexure, the diameter of the colon drastically narrows, and, at the same time, the tube makes a hairpin turn. This area is at high risk for impaction of digesta, and many impaction colics originate at the pelvic flexure. Finally, unlike many other species, the horse's intestine is not held in place by membranes, so it can move about and actually twist around itself and possibly other organs, further increasing the risk of colic.

When horses are in their natural situation, wandering on thousands of acres, grazing throughout the day and moving freely, their digestive systems work fairly well with small amounts of forage moving through pretty much all the time. But with the demands and constraints placed on horses by people, good feeding management is required to keep our horses healthy and comfortable. And the farther we take them from their natural environment, the more management-intensive we have to be to keep them healthy. Now that you understand how the gut is designed to work, the feeding management rules in the box below should make sense. There are many more feeding management practices and rules for horses than those listed, but again, now that you understand the fascinating equine gut, you will hopefully never have to memorize a rule again.

1. Feed small meals often. This helps your horse's digestive tract work most efficiently, as well as reduces the risk of digestive disturbances, such as colic.

2.
Feed no more than about 0.5 percent of your horse's body weight in grain per meal (5 pounds for a 1,000-pound horse). This helps reduce the risk of soluble carbohydrate overload to the hindgut. When using feeds lower in sugars and starch than grain, you can increase the amount fed in a meal.

3. Feed at least 1 percent of your horse's body weight per day (dry matter) in roughage (10 pounds of hay for a 1,000-pound horse). Adequate fiber is necessary to keep the microbial population healthy and maintain proper hindgut function.

4. Make feeding changes gradually. Any sudden change in feed and hay can cause a pH change and/or shift in microbial population in the hindgut, resulting in digestive disturbances. Minor changes can be made over three to four days, and major changes may need to be spread over a few weeks.

5. Only use feeds designed and labeled for horses. Feeds designed for other species will not meet horses' specific nutrient requirements and may contain substances that are toxic to horses. (See "Why Cattle Feeds Don't Work," above.)

6. Never feed moldy feed or hay to horses. Horses are more sensitive to many substances than most other species due to their inability to regurgitate. It is also important to maintain stability of the microbial population in the hindgut.

J. Kathleen "Katie" Young, Ph.D., is a consulting equine nutritionist working with Land O'Lakes Purina Feed. Prior to starting her consulting business, Sunrise Equine Services, in Lenexa, Kansas, Dr. Young worked at Farmland Industries, first as equine nutritionist and horse feed program manager, and later as a business consultant and professional development trainer for Farmland's local member cooperatives. Dr. Young earned her bachelor's degree from Missouri State University and her doctorate in equine nutrition and exercise physiology from Texas A&M University. During her stay in Texas, Dr.
Young also served as a faculty member in the Equine Science Section of the Animal Science Department, teaching courses in equitation, training and horse management. She also was supervisor and coach of the school's equestrian teams and a board member of the Intercollegiate Horse Show Association. Dr. Young has more than 35 years of experience in the horse industry. She started riding as a child in southwest Missouri, first as a barrel racer and later moving into hunters and jumpers. After moving to Texas, Young continued participating in hunter/jumper shows, as well as dressage and eventing competitions, and she has played competitive polocrosse. Dr. Young has worked as a trainer and riding instructor for more than 30 years, and continues to do so in the Kansas City area.

This article originally appeared in the October 2009 issue of Practical Horseman magazine.
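The weight-based feeding guidelines in the article reduce to simple percentage arithmetic. Here is a minimal sketch of that arithmetic; the function names are illustrative (not from the article), and it assumes the parenthetical examples of 5 pounds of grain and 10 pounds of hay for a 1,000-pound horse:

```python
def grain_limit_lb(body_weight_lb, pct=0.5):
    """Maximum grain per meal: about 0.5 percent of body weight."""
    return body_weight_lb * pct / 100


def roughage_minimum_lb(body_weight_lb, pct=1.0):
    """Minimum daily roughage (dry matter): about 1 percent of body weight,
    matching the 10-pound example for a 1,000-pound horse."""
    return body_weight_lb * pct / 100


# For a 1,000-pound horse:
print(grain_limit_lb(1000))       # 5.0 pounds of grain per meal, at most
print(roughage_minimum_lb(1000))  # 10.0 pounds of hay per day, at least
```

The same percentages scale to any body weight: an 800-pound pony, for instance, works out to 4 pounds of grain per meal and 8 pounds of hay per day.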
- Endotracheal intubation is the gold standard for any anesthetized patient, regardless of species. Intubation provides a patent and protected airway, allows undiluted administration of oxygen and inhalant anesthetic agents to the patient, and reduces environmental pollution with volatile anesthetics. Endotracheal intubation also permits effective positive pressure ventilation.
- Endotracheal intubation allows accurate end-tidal carbon dioxide monitoring. Capnography can be used for guidance during the intubation process as well as to provide respiratory and cardiovascular feedback during the anesthetic period.
- Rabbit intubation can be accomplished using either an orotracheal or nasotracheal technique. Both intubation methods can be challenging in rabbit patients and require patience and practice.
- Nasotracheal intubation may be the preferred approach in situations where maximum access and maneuverability are required in the oral cavity. Nasotracheal intubation is also preferred where an extended recovery is expected.
- Nasotracheal intubation should be avoided in patients with known or suspected upper respiratory infections or situations in which one or both sides of the nasal passage have preexisting edema or narrowing.

Rabbits as pets have been increasing in popularity since the 1960s. The most recent USDA census of rabbit populations in the United States was performed in 2002. This census estimated 5 million pet rabbits and 2 million rabbits raised for the meat industry (USDA 2002). It is safe to assume the numbers have increased in the past 15 years. As pet rabbit ownership increases, so too does the demand for veterinary care. Numerous rabbit patients will present for services that require anesthesia such as castration, ovariohysterectomy, mass removal, and dental care. Anesthetic care provided to rabbit patients should be on par with the standard of care offered to more traditional canine and feline patients.
This includes intubation, analgesia, intravenous catheter placement, and anesthetic monitoring. Historically, rabbit intubation was considered “too challenging” to attempt and anesthesia was often maintained with a mask, which does not allow for airway control, adequate positive pressure ventilation, or reliable end-tidal carbon dioxide (ETCO2) monitoring. Administering inhalational anesthesia via an anesthetic mask can dramatically increase waste gas exposure for personnel but may also cause dilution of the administered anesthetic if room air is drawn into the mask. Ideally, any patient undergoing anesthesia should be intubated.

Nasotracheal intubation may be indicated in cases where the maxilla, mandible, or oral cavity is the primary area of interest. Nasotracheal intubation may be preferred in cases with oral infection, oral abscess, or dental overgrowth that prevents passage of an orotracheal tube (Fig 1, Fig 2). Nasotracheal intubation may also be helpful in situations where an animal is required to be repositioned repeatedly, as it can be easier to secure the nasotracheal tube.

Complications are primarily associated with traumatic nasotracheal intubation. Repeated attempts can result in damage to the nasal turbinates and soft tissue, leading to edema and potential blockage of the nasal passage. As rabbits are obligate nasal breathers, obstruction of one or both sides of the nasal canal can lead to respiratory distress. There is some concern when placing a nasotracheal tube in rabbits with an upper respiratory infection or “snuffles.” Placement of the nasotracheal tube may introduce bacterial contaminants further along the respiratory tract. While there is little evidence to confirm infection rates, it may still be prudent to avoid nasotracheal intubation in patients with a known nasal infection.

Equipment needed for nasotracheal intubation in rabbits is nearly identical to equipment required for intubation in most species.
- 0.1-0.2 ml of 2% lidocaine (keeping under a 2 mg/kg dose range)
- 1 ml needleless syringe
- Water-soluble lubricant or lubricant containing lidocaine
- Endotracheal tubes, size 2.0 or 2.5 mm, uncuffed
- Gauze tie or rubber tie to secure nasotracheal tube after placement
- Capnograph (optional)

Create an appropriate anesthetic plan for the individual patient that addresses anticipated pain level and duration of the procedure. The use of preanesthetic medication contributes to a smooth anesthetic induction and better conditions for intubation in rabbits. After induction of anesthesia, when sufficient muscle relaxation is achieved, intubation should be attempted. If using an injectable induction protocol, provide oxygen via an anesthetic facemask until the patient attains an adequate plane of anesthesia (Lennox 2008).

First, administer 2% lidocaine (0.1-0.2 ml) into the nasal passage with a syringe (Fig 3). Continue to provide oxygen support for 30-60 seconds after administration to allow the local anesthetic agent to take effect. Positioning is key for correct nasotracheal intubation. Place the patient in sternal recumbency with the head and neck hyperextended (Fig 4). Hyperextension aligns the nasopharynx with the trachea and makes passage of the endotracheal tube into the trachea possible. The normal rabbit nasal passage is narrow, and even in the largest of rabbits, plan to use a 2.0-2.5 mm endotracheal tube (Lichtenberger and Ko 2007). Sterile lubricant should be used on the endotracheal tube to facilitate a smoother and less traumatic placement of the tube. Use caution with the amount of lubricant: overly enthusiastic application can obstruct the lumen of the endotracheal tube. Once the patient is properly relaxed and positioned, the lidocaine instilled, and the endotracheal tube lubricated, insert the bevel of the endotracheal tube into the ventral nasal canal (Fig 5). Direct the tube in a ventromedial direction.
Another way to describe the insertion direction is “in and down”: “in” through the nostril opening and “down” toward the nasal passage and trachea. As the nasal canal is very narrow, a minor amount of resistance or “drag” is to be expected as the endotracheal tube moves through the nasal passage; however, the resistance should remain slight. If there is a significant amount of resistance or a “crunching” sensation, either the endotracheal tube diameter is too large or the tube has veered into the nasal turbinates, where it can cause tissue damage, bleeding, and edema. If incorrect positioning of the tube is suspected, remove and redirect the endotracheal tube.

Rabbits have a large epiglottis that is often entrapped and can make traditional orotracheal intubation more challenging. Repeated attempts at orotracheal intubation can lead to hemorrhage and edema in the oropharynx (Fig 6). One benefit of nasotracheal intubation is that epiglottal entrapment tends not to hinder passage of the nasotracheal tube (De Valle 2009).

Pass the endotracheal tube through the nasal canal until fogging is visible within the tube (Fig 7). The condensation is visible with an expiratory breath and disappears upon inspiration. The tracheal opening is at its widest on inspiration, and the endotracheal tube should be advanced with inspiration or when condensation clears in the tube. In addition to watching for fogging in the tube, a capnograph can be attached to the endotracheal tube and used to monitor correct placement. Detection of end-tidal carbon dioxide and display of a waveform confirms correct endotracheal placement. A word of caution: if using a side-stream capnograph, there will be a one- to two-breath delay in the actual readings.
Correct placement of the nasotracheal tube can be confirmed by continued fogging in the endotracheal tube, auscultation of bilateral breath sounds during manual ventilation, and continued capnograph readings (Fig 8) (Krüger et al 1994). Secure the endotracheal tube using a rubber tie or gauze tied behind the ears (Fig 9). As previously mentioned, rabbits are obligate nasal breathers, so I recommend attempting nasotracheal intubation in only one side of the nasal passage. If placement of the nasotracheal tube is not successful, consider other intubation techniques such as blind oral intubation or endoscopic intubation (De Valle 2009). When clinically indicated, the nasotracheal tube can be left in place until the patient is fully recovered, as there is no danger the patient can bite through the endotracheal tube. Supplemental oxygen can be provided for as long as required. Keep in mind, however, that rabbits are also prone to laryngospasm, which can occur with prolonged intubation.

Every anesthetic event in the rabbit should include endotracheal intubation as part of the protocol, and nasotracheal intubation may be the preferred technique in select situations. As with any skill, nasotracheal intubation can be mastered with practice and patience, and is ultimately in the best interest of the anesthetized rabbit.

De Valle J. Successful management of rabbit anesthesia through the use of nasotracheal intubation. J Am Assoc Lab Anim Sci 48(3):166-170, 2009.
Krüger J, Zeller W, Schottmann E. A simplified procedure for endotracheal intubation in rabbits. Lab Anim 28(2):176-177, 1994.
Lennox AM. Clinical Techniques: Small Exotic Companion Mammal Dentistry—Anesthetic Considerations. J Exotic Pet Med 17(2):102-106, 2008.
Lichtenberger M, Ko J. Anesthesia and analgesia for small mammals and birds. Vet Clin North Am Exot Anim Pract 10(2):293-315, 2007.
United States Department of Agriculture APHIS Veterinary Services. US Rabbit Industry Profile.
USDA Animal and Plant Health Inspection Service web site. Available at https://www.aphis.usda.gov/animal_health/emergingissues/downloads/RabbitReport1.pdf. Accessed December 6, 2016.
Lafferty K. A guide to nasotracheal intubation in the rabbit. LafeberVet web site. Available at https://lafeber.com/vet/a-guide-to-nasotracheal-intubation-in-rabbits/
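The lidocaine volume in the equipment list can be sanity-checked against the 2 mg/kg cap with a quick calculation. Below is a minimal sketch; the function name is illustrative rather than from the article, and it relies on the standard conversion that an X% solution contains X × 10 mg per mL (so 2% lidocaine is 20 mg/mL):

```python
def max_lidocaine_volume_ml(weight_kg, max_dose_mg_per_kg=2.0, concentration_pct=2.0):
    """Largest volume (mL) of lidocaine that stays under the weight-based dose cap.

    An X% solution contains X grams per 100 mL, i.e. X * 10 mg/mL,
    so 2% lidocaine = 20 mg/mL.
    """
    mg_per_ml = concentration_pct * 10
    max_total_mg = weight_kg * max_dose_mg_per_kg
    return max_total_mg / mg_per_ml


# For a 2 kg rabbit at a 2 mg/kg cap: 4 mg total, i.e. 0.2 mL of 2% lidocaine,
# consistent with the 0.1-0.2 mL range given in the equipment list.
print(max_lidocaine_volume_ml(2.0))  # 0.2
```

For smaller rabbits the cap tightens accordingly: a 1 kg rabbit, for example, should receive no more than 0.1 mL of the 2% solution under the same 2 mg/kg limit.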
Understanding Social Phobia

We all get a little nervous or a bit self-conscious on occasions - for example, when we need to give a speech or when we have a job interview or perhaps an important presentation at work. But when it is more pervasive, or more than just a bit of nerves or shyness, it might be described as social phobia or social anxiety. When it is more of a problem, and possibly described as a disorder (like social anxiety disorder), the fear of being embarrassed or having ‘bad’ attention placed on us can become so intense that we can find ourselves avoiding, or staying away from, those situations that might bring on the feelings of anxiety. Be assured, though, that we can all learn to be more comfortable and confident in social and group settings. It doesn’t matter if you are a more reserved or shy person naturally; we can all build confidence and skills.

WHAT IS SOCIAL PHOBIA?

Social phobia involves a really strong fear of certain, or many, types of situations that involve people. It can be especially seen in those situations that take us out of our comfort zone, are unfamiliar to us, or where we feel we are being evaluated by other people. For some people, these types of situations can have such a profound effect that even thinking about them or talking about them can bring on some feelings of anxiety, and we may even avoid talking about it! This is actually all about our fear of being embarrassed, judged or scrutinised by other people. Some say it is about being afraid of what people will think of us, or that we don’t seem to measure up or that we aren’t as good at something. Some people know that what they are thinking is not necessarily logical or correct, or even based on facts, but they still can’t help feeling anxious. For some it is like they have gotten out of the habit of listening to their own reason. As mentioned earlier, there is something that can be done about this fear of social settings.
It involves some honesty, hard work and putting ourselves in the situations that might bring on some fears. The first step is to analyse and understand the problem.

Some of the typical triggers that bring on anxiety

Although it may feel like you’re the only one with this problem, social anxiety or social phobia is actually pretty common. A lot of people live with these fears and struggle in this area. The things that bring on or trigger the fear can be a little different, though, for different people. For some people the anxiety is quite general and comes about in most situations where they need to perform or when they need to be social. For other people their anxiety is tied to more specific situations; for example speaking in public, doing something where there are people watching, talking to strangers, meeting people for the first time, going to parties, joining a group or being the leader in a group situation. The most commonly reported specific social phobia involves some sort of public speaking or presentation. There are some typical situations or triggers that bring on the phobia:
• Going to parties
• Joining a group
• Being the centre of attention in a group setting
• Being watched whilst doing your job
• Public speaking
• Eating in public
• Meeting people for the first time
• Starting a conversation
• Making small-talk
• Having a conversation with people we think are more important or have a position of power
• Being assessed or evaluated
• Being criticised

SOME COMMON SIGNS YOU ARE EXPERIENCING SOCIAL PHOBIA

Social anxiety is more than just being shy or self-conscious. The anxiety and phobia actually get in the way of normal and everyday functioning. They also lead us to feeling quite distressed. It is one thing to be nervous and get the ‘butterflies’ before giving a speech, but for some people it is more far-reaching than that.
For some people who have social anxiety disorder, the prospect of talking in public can cause worry for weeks ahead of the event, and may prompt a real effort to avoid it.

FIRST STEP IN GETTING ON TOP OF IT - Analyse and identify the negative thoughts

People who experience social anxiety have negative thoughts and beliefs that reinforce and contribute to their anxiety. The following examples of thoughts might resonate:
• ‘People are going to think I am stupid.’
• ‘I don't have anything interesting to say; I am boring.’
• ‘People will see that I can’t do my job well.’
• ‘My voice will start to tremble and people will notice.’
• ‘People will think I am not worthy of standing up here and talking.’

The first step is to understand and identify where and when these negative thoughts are happening, then challenge them and tell yourself a different internal story from the one that can so easily hold you back or keep you feeling anxious. This can be done in a counselling setting or you can do it on your own. Starting to do this is the first step toward being in control and bringing down the symptoms of social phobia.

Begin by identifying the automatic or negative thought(s) that underlie an instance of social phobia. An example is being afraid of going to a party and meeting a new group of people. The underlying thought might be: ‘I am going to stand around without anyone to talk to and nothing to say, I will feel embarrassed and people will think I am a fool.’ Once you have identified the negative thoughts, challenge them. A good start is to ask yourself a question about them: ‘Am I sure I will stand there on my own? Will others see and judge me?’ or ‘Even if I am on my own for a few minutes, others are aware of that and they will help me out.’ By checking and countering your negative thoughts you can replace them with more balanced and positive ways of thinking. It is a bit like upgrading software in a computer system.
Thinking styles that are not helping if you are experiencing social anxiety

There is a range of ways that people think about things when they are experiencing social phobia. Below are some of the types of thoughts that may stop us from getting on top of it.

Catastrophising – Thinking that the worst possible thing may come true and blowing things out of proportion to the situation you are in. For example, if my voice trembles a bit, then I will come across as nervous and that will be ‘awful’. Or I might get left on my own at the party and that would be a ‘disaster’.

Mind reading – The assumption that we know what other people are thinking about us. And guess what: because we are thinking negatively about ourselves, we assume they are thinking negatively about us too.

Predicting the future – Assuming a bad outcome and that things are going to go horribly.

Taking things very personally – Assuming that people are being negative in their thinking about you.

Stop thinking that people are looking at you

As an alternative to being overly focused on yourself and too self-conscious, make a deliberate effort to pay attention to what is happening around you. This will draw your attention away from any effects of anxiety in your body and help you connect with the situation you are in. Take a careful look at all aspects of your surroundings and the people in them. Tune the conversation in and tune your thoughts down. This is about being mindful of what people are saying, reflecting back their comments to show you are listening, and paying less attention to your own, possibly critical, thoughts. You are not the only one carrying the conversation. A bit of silence is OK and others will chip in as well.

SECOND STEP IN GETTING ON TOP OF IT – Relaxing & breathing well

Anxiety affects our body and causes real physical changes.
For example, many people say that they feel butterflies in their stomach, tension in all parts of the body, and that their breathing changes. This change in breathing can show up as shortness of breath or over-breathing, which in turn can put the balance of oxygen and carbon dioxide in the body ‘out of kilter’. This may lead to other physical signs of anxiety, such as feeling dizzy, feeling suffocated, muscle tension and a rapid heart rate. Being aware of this and learning to slow your breathing down can help bring the physical symptoms of anxiety back under control. A really good idea is to research different relaxation exercises and find one that works for you. Then practise it, so that when you are in a social situation that can trigger anxiety, you can use it. Below is a simple one that you might try out.

A Relaxation and Breathing Exercise

Stand or sit comfortably with a straight back and start by relaxing your shoulders. Put one hand on your chest and the other on your stomach. Take a slow and deep breath through the nose, taking about 4 seconds. Notice the stomach hand moving a lot more than the chest hand. Hold your breath for a few seconds. Breathe out slowly through your mouth for 6 seconds, once again noticing the stomach hand moving a lot more than the chest hand. Keep doing this for a little while, in through the nose and out through the mouth. Some people call this ‘belly-breathing’.

THIRD STEP IN GETTING ON TOP OF IT - Confronting your fears and exposing yourself to them

So you have identified the things that might trigger anxiety, you have tracked some of the negative thoughts that go along with it, and importantly you have some relaxation and breathing techniques that can bring down some of the anxiety responses in your body that might be making things worse. Along with this, it is important to face some of the situations that make you anxious in order to get on top of it.
Avoidance can actually keep the cycle of anxiety going.

Avoiding things can lead to more and bigger issues

Staying away from situations that make you uncomfortable may help in the short term, but it can also prevent you from learning that things are actually OK and from developing techniques to cope. Many people relate that the more they avoided the very situations that made them anxious, the more frightening those situations became and, sometimes, the more irrational their thinking got. Further to this, avoiding situations will eventually stop you from doing the things that you like and living a fulfilled life. One example is John, who was fearful of public speaking and talking in groups, and for several years he avoided both. Once he faced up to it and tackled the problem, he found that he was sharing much more of his ideas at work, feeling more valued and fulfilled, and meeting many more people.

Best done one step at a time and in a managed fashion

Often, confronting these situations seems overwhelming; however, when it is tackled one step at a time it seems much more feasible. The key is gradually building your way up to more confronting scenarios, and as you progress you will notice your confidence and capability rising with you. For example, if public speaking makes you feel really nervous, try a more informal and relaxed situation first. Try presenting in a small group meeting, taking the lead in a discussion at book club, or leading trivia at the pub. Once you feel more comfortable, you can take small steps towards longer presentations or larger audiences. Throwing yourself in the deep end is not often helpful, whereas taking small steps up the ladder builds your confidence gradually and helps dispel some of the fears.

The Steps to Building your Confidence and Comfort Levels

Don’t throw yourself in the deep end – It’s better to take things slow, get familiar and comfortable with the feelings, and grow your confidence gradually in these situations.
Take your time – This is often a process that takes plenty of patience and time. Don’t expect to feel better instantly. This is about becoming familiar with the feelings and gradually building up your confidence.

Use your skills – The recognition, reshaping and relaxation techniques will help you control your anxiety and stay calm when a situation might otherwise overwhelm you.

FOURTH STEP IN GETTING ON TOP OF IT – Working on your relationships

Being surrounded by a supportive network is a great way to help you overcome social phobia. By building new networks and establishing new relationships, you may meet people who share the same discomfort or fears. They may also provide you with a supportive environment, somewhere you feel more comfortable, and in turn build your confidence. Here are some ways you can start interacting with others and build your relationships:

Community Classes – Most local councils or adult education centres offer classes in your neighbourhood. This is a great way to meet people with similar interests or passions, and it may also mean you make some new friends along the way.

Help out Others – Getting outside of yourself and focusing on the needs of others is a fantastic way to get the best out of yourself. Volunteer at your local council, garden club, or office social club. Engaging with others while working on a rewarding activity will give you focus and fulfilment.

Practise your Communication – Meeting new people or working in new scenarios is a great opportunity to work on your communication skills. Practise using open questions and giving open answers, letting people find out more about you, rather than simple ‘yes’ or ‘no’ responses. It is amazing how much more enjoyable a conversation can be when you let yourself be open to new information and open the door to a longer chat. The more you practise your communication skills, the easier building relationships will be.
Some practical things to check on

There are some small practical lifestyle changes that can also help with your social anxiety and may contribute to an improvement in your mood and general well-being. Try incorporating some of these tips into your daily life:

Get plenty of sleep: Being rested and clear-headed is essential to feeling calm and in control. Check your body patterns and consider whether you need an earlier night, or try using some relaxation techniques at night to help you ‘wind down’ from a difficult day.

Reduce your caffeine and alcohol intake: Coffee, tea, soft drinks and energy drinks are stimulants that can get your heart racing, make you dehydrated (dry mouth) and increase anxiety symptoms. Try to reduce your consumption of alcohol as well, as drinking to ease your symptoms will most often make them worse.

For help in addressing and managing your social phobia, or to improve your presentation skills, contact us today on (07) 3852 2441.
Language Legacies Grant Recipients - 2010

Namgay Thinley and Gwendolyn Hyslop - Dzongkha Development Commission / University of Oregon
An Orthography and Grammatical Sketch of ’Olekha

This project is the winner of the Bright Award for 2010. ’Olekha is an extremely endangered language of Bhutan with possibly just one elderly speaker left. From the little we know of this language, it seems quite different from other related languages. It seems likely that ’Olekha may retain archaic features which have been replaced everywhere else by the influence of Bodish languages, but further documentation is necessary to determine if this is true. Because members of the ’Olekha community are concerned about the endangerment of their language, the priority is the phonological analysis and working orthography along with a brief grammatical sketch of the language. Elicitation, combined with the knowledge of the phonologies of other East Bodish languages, will serve as the primary methodology for phonological analysis. Another goal of the research is to collect as many local stories, legends, oral histories and storytelling/verbal arts in general as possible. These recordings will be used as a springboard for grammatical analysis, supplementing the data with elicitation. This study of the language will produce the first phonological sketch and grammatical outline of the language. As an unusual variety of a relatively unstudied sub-group, a description of the language will be a contribution to historical and comparative Tibeto-Burman studies.

Tye Swallow - Saanich Adult Education Center
SENĆOŦEN Language Revitalization and Sustainability Plan - Learning from Homeland Curriculum Development Project

The mission of the SENĆOŦEN language department of the Saanich Indian School Board (British Columbia) is to begin immersion programming from preschool to grade three. SENĆOŦEN is a highly endangered language with only 20 fluent speakers.
The ELF award will be used towards funding a “Learning from Place Language Immersion” curriculum that will directly feed into the current development of a university-level language immersion course for a new Language Revitalization degree program being developed by the University of Victoria.

Lalnunthangi Chhangte - Converge Worldwide
Documentation of the Ralte Language

Ralte is a Tibeto-Burman language, classified as belonging to the Kuki-Chin languages and closely related to Mizo (Lushai). However, Ralte shares more similarities with the Paihte language, spoken further northeast in the state of Manipur in northeastern India. The Raltes once lived as a separate community, speaking their own language, but they now consider themselves to belong to the Lushai language and culture group. Thus, most people are surprised to find that Ralte was once spoken widely in the Lushai-inhabited areas of present-day Mizoram. The remaining 80 fluent speakers have made efforts on their own to record the language. The goal of this project is to gather all the data that has been collected so far and to organize and preserve the information so that future generations will know what the language sounded like.

Chad Thompson and Dani Tippmann - The Three Rivers Language Center / Whitley County Historical Museum
The Miami Language and Cultural Camp

The Miami language, an endangered Native American language from the lower Great Lakes region, has been classified as “extinct,” but the 15th edition of Ethnologue notes that “There are some who know a few words and phrases. A revitalization is in progress” (Gordon 2005). An increasing number of Miami people are currently speaking their language, and the latest Ethnologue no longer categorizes the language as extinct (Lewis 2009).
The language is still at least highly endangered, and this project will draw on the expertise, personnel and materials of other successful programs to support a day camp for Miami children between the ages of 10 and 15. During the week, children will be immersed in the Miami language as much as possible and encouraged to speak it while they make culturally-related crafts with the help of elders/cultural teachers and camp counselors.

Digna Lipa-od Adonis - Benguet Network
Eg Tayo Kari Dibkhanan, Let Us Not Forget: A Documentation of the Ibaloy Indigenous Language in Benguet, Philippines

There are approximately 55,000 native speakers of Ibaloy, an endangered language belonging to the Malayo-Polynesian branch of the Austronesian languages, spoken in northern Luzon in the southern part of the Province of Benguet in the Philippines. Ibaloy has been giving way to Ilocano, with Tagalog and English as second languages. This project aims to support two ongoing Ibaloy language and culture preservation initiatives through interviews with native speakers to compile an extensive list of Ibaloy conversational phrases, dialogue text material on cultural subjects, and word lists that make up selected semantic domains. Audio recordings will be made of the listed words, phrases and themes, and endangered cultural activities will be documented by video recording.

Adam Baker - Academy of Sciences of Afghanistan
Ishkashimi Language Documentation & Development

Ishkashimi is spoken by about 1,500 people in Afghanistan and another 1,000 in Tajikistan. A recent shift to Dari, the language of wider communication, has led many Ishkashimis to believe that their children will speak only Dari in the future. At the same time, the Ishkashimi people value their language and wish to see it developed, responding very positively to the ideas of producing an orthographic system for Ishkashimi, producing Ishkashimi books, and holding literacy classes in Ishkashimi.
The first goal is to help the language community become literate in Ishkashimi, including developing books. The second goal is to produce an annotated corpus of Ishkashimi language data, to be made available to linguists in the form of a language data archive.

Olga Lovick - First Nations University of Canada
Transcription, Translation, and Annotation of Upper Tanana Athabascan

Upper Tanana is an Athabascan language spoken in several communities in eastern Alaska and across the Canadian border in the Yukon Territory. With fewer than 100 speakers, the youngest being in their 40s, Upper Tanana is a highly endangered language. Most of the speakers are elders who generally do not use the language when non-speakers are present, and it is no longer used in church or other ritual contexts. Recently, several untranscribed recordings of Upper Tanana speakers made by linguist James Kari in the 1990s were found in the Yukon Native Language Center. These will be transcribed and made available with the help of the remaining native speakers.

Jonathan David Bobaljik, David Koester, and Tatiana Degai - University of Connecticut / National Museum of Ethnology, Osaka, Japan / Itelmen Community
Itelmen Language Audio Recordings

The Itelmen language is the sole member of the Kamchatkan branch of Chukotko-Kamchatkan and is quite distinct from the Chukotkan languages in many ways. In the Itelmen population of 3,000, only 15 to 20 are fluent native speakers, scattered among various villages and the main city, Petropavlovsk-Kamchatskij, Russia. Although there is an official orthography, no native speakers are literate in Itelmen. Because there is a particular lack of multimedia material, the ELF award will be used to allow Tatiana Degai, a young member of the Itelmen community, to travel to two villages to collect recordings based on targeted elicitation lists constructed by Bobaljik, as well as additional recordings, such as narratives of various sorts.
The recordings will be used for language revitalization, archiving and pedagogy purposes.

Preservation of Sakuye Indigenous Language

Northern Kenya is home to the Sakuye community and four pastoralist communities of Somali origin. Sakuye culture is based on hunting and gathering, which helps sustain the language; the pastoralist communities in northern Kenya speak Somali. A young generation had to move into neighboring Ethiopia and then return after a peace agreement; they returned, however, with a different lifestyle, culture and language. This project will collect poems, oral narratives, songs, folklore, and other practices that surround the marriage ceremony, ceremonies for sacred areas, cultural festivals, and some important Sakuye sayings and words. The collections will be preserved and circulated to the indigenous Sakuye community through media that reach all four districts of northern Kenya.

Deborah Sanchez - Barbareño Chumash Council
Chumash Family Singers Recording Project

The Barbareño Chumash Council is a tribal group comprised of Chumash descendants of the greater Santa Barbara area. Some Council members participate in the Chumash Family Singers, a group that uses traditional native instruments and incorporates the Šmuwič (Barbareño Chumash) language into its material. The primary goal of this project is to create original Chumash songs in the Šmuwič language that can be shared with the Chumash community and the public. The planned recording would benefit Chumash community members who wish to learn songs, and would also encourage the regular use of the Šmuwič language through song.

Ana Carolina Hecht - University of Buenos Aires
Documentation of Language Socialization Practices in Intercultural School Contexts of Language Shift of Toba

The Toba language (Guaycurú family) is spoken by an estimated 33,000 speakers in the Chaco, Formosa and Santa Fe Provinces as well as in Gran Buenos Aires.
The ELF award will be used to compare these two contexts, particularly the relationship between the treatment of languages in school projects developed among indigenous populations and the representations and uses of indigenous languages in educational processes inside (and outside) the scholastic setting. This research is important not only in academic terms but also for the possibilities of designing educational public policies sensitive to the rights and identity of indigenous peoples.

Angoua Jean-Jacques Tano
Documentation and Description of Ivorian Sign Language

As in several countries in West Africa, at least two sign languages are used in the Ivory Coast. American Sign Language (ASL) is used in Deaf education and by educated Deaf adults; however, deaf individuals with no formal schooling use various forms of Ivorian Sign Language, or Langue des Signes de Côte d’Ivoire (LSCI). ASL is spreading within the Ivorian Deaf community at the cost of LSCI; more generally, the prominence of ASL in West Africa overshadows the local sign languages to such an extent that the latter are falling into disuse. This project is part of an effort to document and analyze LSCI in various parts of the Ivory Coast.

Annahita Farudi and Maziar Toosarvandani - University of Massachusetts / UC Berkeley
A Community-based Oral History Project for Zoroastrian Dari

Zoroastrian Dari (also called Zartoshti, Behdinâni, or Gabri) is a Central Plateau language of the Northwestern subbranch of the Iranian language family (Indo-European). It is spoken by the Zoroastrian religious minority of Iran, primarily in and around the city of Yazd, and is distinct from the eponymous dialect of Persian spoken in Afghanistan. The 5,000 fluent speakers who remain are the last generation to have grown up when the language was still the community’s primary mode of communication.
This project will record the oral histories of elderly Zoroastrian residents of Yazd, documenting Dari as it was spoken before modernization, when the Zoroastrians of Yazd still lived in isolated agrarian communities.

Syngen Kanassatega - Mille Lacs Band Government Center
Ojibwe Cultural Activity Preservation

The Mille Lacs Band of Ojibwe (in Minnesota) estimates that only about 90 fluent speakers remain of their variety of Ojibwe. The goal here is to assist in teaching future generations to be bilingual, preserving linguistic heritage while being proficient in English. The team will record traditional stories and songs about their history, spiritual wisdom, cultural activities, and life skills while they are being practiced, narrated, and explained by fluent speakers, elders, and adult mentors.
Large commercial aircraft and some smaller commercial, corporate, and private aircraft are required by the Federal Aviation Administration (FAA) to be equipped with two "black boxes" that record information about a flight. In the event of an aircraft incident or accident, investigators use the data from the black boxes to reconstruct the events leading up to it. One of the black boxes, the Cockpit Voice Recorder (CVR), records radio transmissions and sounds in the cockpit, while the other, the Flight Data Recorder (FDR), monitors parameters such as altitude, airspeed, and heading. Both recorders are typically installed in the tail of the plane, the most crash-survivable part of the aircraft. The boxes themselves are made of stainless steel or titanium and built to withstand a crash impact of 3,400 Gs and temperatures up to 2,000 degrees F (1,100 degrees C) for at least 30 minutes. The recorders inside are wrapped in a thin layer of aluminum and a layer of high-temperature insulation. Though popularly known as “black boxes,” the steel cases that protect the sensitive recording devices inside are painted high-visibility orange so they can be more easily spotted at a crash site. Underwater locator beacons assist in recovering recorders immersed in water. The CVR records the flight crew's voices, as well as other sounds inside the cockpit, using microphones usually located in the overhead instrument panel between the two pilots and in the headsets of the pilots. Sounds of interest to an investigator include engine noise, stall warnings, landing gear extension and retraction, and other recognizable clicks and pops. Communications with Air Traffic Control and conversations between the pilots and cabin crew are also recorded by the CVR. The FDR onboard the aircraft records many different operating conditions of the flight, such as altitude, airspeed, heading, fuel usage, autopilot status, and aircraft attitude.
With the data retrieved from the FDR, the National Transportation Safety Board (NTSB) can generate an animated video reconstruction of the flight that enables the investigating team to combine the data from the CVR and FDR to visualize the last moments of the flight before the accident. The NTSB is an independent Federal agency charged by Congress with investigating every civil aviation accident in the United States and significant accidents in other modes of transportation – railroad, highway, marine, and pipeline. The NTSB determines the probable cause of the accidents and issues safety recommendations aimed at preventing future accidents. Following an aviation accident, NTSB investigators are immediately dispatched to the scene to begin gathering evidence and undertake the search for the black boxes. When the boxes are found, they are immediately transported to NTSB headquarters in Washington, DC for processing. Using sophisticated equipment, the information stored on the recorders is extracted and translated into an understandable format. While complete reports on NTSB investigations can take years, the NTSB laboratories work quickly to analyze black box data so that the findings can help guide the on-going field investigation. According to a September 14, 2014 Washington Post article, “Unraveling the mystery when a plane falls from the sky,” listening to the final words from the cockpit is considered “a sacred duty” at the NTSB laboratory. “They slip on headphones, sit before individual computer screens and begin to listen, not just to voices, but to every noise that was recorded.” Later, the NTSB issues a transcript as part of its final report, but does not release the audio. United Airlines Flight 93 was equipped with a solid-state CVR measuring 12.5” long x 5” wide x 6” high, weighing about 11.5 pounds. 
This model was capable of retaining the most recent 30 minutes of audio from the cockpit, meaning that older information was over-written by new data collected beyond the 30-minute recording limit. The CVR records 4 distinct channels. One channel contains audio information from an open cockpit area microphone (CAM) that is mounted in the center of the cockpit above the windshield. The remaining 3 channels contain aircraft radio information from microphones in the Captain’s, First Officer’s, and cockpit jump seat’s headsets. The FDR on Flight 93 was a solid-state, digital model measuring 19.5” long x 5” wide x 6” high. It was capable of recording data from the entire flight, from take-off to crash. Both black boxes on Flight 93 were located in the tail section of the aircraft. When investigators arrived at the crash site in Stonycreek Township, Somerset County, on September 11, 2001, the search for the black boxes was their top priority. Investigators from the NTSB arrived on the afternoon of September 11 to begin their work. The FBI reasoned that, due to the collateral damage to the buildings at the World Trade Center and the Pentagon, the Flight 93 crash site would be the most likely place to recover critical evidence, including the black boxes. Beginning September 12, teams of investigators began to simultaneously excavate the crater and systematically search the surrounding woods and fields. Local contractors were hired to use heavy equipment to begin the excavation while investigators from the FBI, the FAA and the NTSB observed closely, hoping that the black boxes would be uncovered. It was a tense period, as all of the workers were focused on the importance of quickly recovering this key evidence, while working methodically and carefully so the boxes would not be further damaged during excavation. On Thursday, September 13 at 4:20 pm, workers uncovered the FDR from the crater at a depth of 15 feet.
The cylinder-shaped box was photographed as it was uncovered in the crater. FBI agents assumed custody of the box, logged it as evidence, and immediately removed it from the site, flying it to the NTSB laboratory in Washington, DC where its contents could be analyzed. Because the memory board showed signs of impact damage, the FDR was taken from Washington, DC to Honeywell facilities in Redmond, Washington for evaluation and downloading. The data were extracted and electronically transferred to the NTSB. Meanwhile at the crash site, the search continued for the second black box. On Friday, September 14 at 8:30 pm, the CVR was recovered from the crater at a depth of 25 feet. Again, the FBI assumed custody of the box, and flew it to NTSB headquarters in Washington, DC. In the weeks following September 11, 2001, the fact that both flight recorders from Flight 93 were recovered and yielded evidence took on increased importance. At the World Trade Center site, none of the four recorders on the two hijacked aircraft were recovered in the building rubble. At the Pentagon site, both boxes from Flight 77 were recovered, but the CVR was so badly damaged that it did not yield usable information. In February 2012, the NTSB released four reports utilizing data from the Flight 93 FDR: the “Factual Report of Investigation” of the FDR consisting of graphs and tables summarizing the output of the FDR during the entire flight, the “Recorded Radar Study,” the “Study of Autopilot, Navigation Equipment, and Fuel Consumption Activity,” and the “Flight Path Study.” The “Study of Autopilot” report includes graphs illustrating the values of speed, altitudes, headings, and climb/descent rates over the duration of the flight and describes changes in the magnetic heading entered in the Mode Control Panel that indicate that Flight 93 was on a heading for Washington, DC. 
The report also indicates that the VOR (very high frequency omnirange station) receiver on Flight 93 was set to correspond with the VOR station at Washington Reagan National Airport (DCA), suggesting “that the operators of the airplane had an interest in DCA and may have wanted to use that VOR station to help navigate the airplane towards Washington.” Data retrieved from the FDR allowed the NTSB to calculate that Flight 93 had about 37,500 pounds of fuel remaining when it crashed in Pennsylvania. The “Flight Path Study” shows the flight path of the aircraft and its altitude for the 1 hour and 21-minute duration of the flight and includes a transcript of aircraft-to-ground communications. The report concludes with this summary of the flight’s final moments of erratic flight: At 9:59 the airplane was at 5,000 feet when about 2 minutes of rapid, full left and right control wheel inputs resulted in multiple 30 degree rolls to the left and right. From approximately 10:00 to 10:02 there were four distinct control column inputs that caused the airplane to pitch nose-up (climb) and nose-down (dive) aggressively. During this time the airplane climbed to about 10,000 feet while turning to the right. The airplane then pitched nose-down and rolled to the right in response to flight control inputs, and impacted the ground at about 490 knots (563 mph) in a 40 degree nose-down, inverted attitude. The time of impact was 10:03:11. Normally, the audio of a CVR recovered from a crash scene is heard only by the team of investigators and representatives of the airline and the aircraft manufacturer and others who can assist in accurately interpreting the recording. In the case of Flight 93, family members of the passengers and crew began lobbying for permission to hear the recording within months of their loved ones’ death. 
Eventually, permission was granted, and in April 2002, the FBI invited representatives of each family to a secure, private location to listen to the audio while viewing the transcript. They were asked not to speak with the media or others about what they heard, pending use of the recording in criminal proceedings against terrorists associated with the hijacking. The transcript of the Flight 93 CVR was issued publicly during the April 2006 sentencing trial of Zacarias Moussaoui, an al Qaeda associate who “unlawfully, willfully and knowingly combined, conspired, confederated and agreed to kill and maim persons within the United States . . . resulting in the deaths of thousands of persons on September 11, 2001.” The jury was permitted to listen to the audio of the Flight 93 CVR and then, at the request of the Flight 93 family members, the judge in the trial ordered that the audio be “sealed” and only a transcript released. Major newspapers across the country published the transcript on April 12, 2006. Later, the FBI released a more comprehensive version of the transcript which included details about which microphone source picked up the transmission; descriptive words and phrases such as “sound of seat belt,” “sound of loud click,” and “the start of series of very loud crashes”; and details about the gender and language of the speaker, such as “a very loud shout, by a native English speaking male.” The words in bold in the transcript are translated Arabic text and those in italic font are English text. (The hijackers are known to have spoken in both English and Arabic.) Words in upper case are shouted. The transcript also includes transmissions recorded from Cleveland Air Traffic Control Center as air traffic controllers attempted to contact the flight crew. This more-detailed transcript is reproduced here. The recording from Flight 93’s CVR begins at 9:31:57 am and continues until the time of the crash at 10:03:11.
Unfortunately, the moment when the flight was overtaken by terrorists, 9:28, and the first few minutes of the hijacking event are not part of the audio retained by the CVR because the CVR retains only the most recent 30-31 minutes of audio. [Note: Air Traffic Control recordings from 9:28:19 and 9:28:54 help fill in this audio gap. The “Mayday” call from Captain Dahl and First Officer Homer, along with the sounds of a struggle, is heard by personnel at the Cleveland Air Route Traffic Control Center and by pilots of other aircraft using the same radio frequency. See “Summary of Air Traffic Hijack Events, September 11, 2001,” Federal Aviation Administration.] Tony James, the FAA investigator in charge, summarized what it meant to recover the black boxes from Flight 93: “From the time I got there, I knew how important this was going to be for them to find those boxes . . . The voice recorder and the flight data recorder [were] the most critical of all the evidence because the, the cockpit voice recorder . . . basically told the story of what happened inside the airplane. The flight data recorder told what happened to the airplane itself.” For those who are able to decipher and translate the muffled, chaotic, and overlapping sounds and voices recorded by the CVR and decode the thousands of pieces of data captured by the FDR, the black boxes are indeed critical pieces of evidence. There will always be unanswered and unanswerable questions about what happened during the final moments on board Flight 93. It is because the recorders functioned properly and because investigators were able to find and safely recover them that we do know something about the unimaginable situation faced by the passengers and crew on Flight 93 and their courageous response.
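The 30-31 minute limit described above reflects how a tape-era CVR works: it continuously records over its own oldest audio, so only the most recent window survives. In programming terms this is a ring (circular) buffer. The sketch below is purely an illustrative model of that overwrite-the-oldest behavior, not the actual avionics implementation:

```python
class RingRecorder:
    """Toy model of a recorder that keeps only the most recent
    `capacity` samples, overwriting the oldest ones, the way a CVR
    retains only the last ~30 minutes of audio."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = [None] * capacity
        self.count = 0  # total samples ever written

    def record(self, sample):
        # Write position wraps around, clobbering the oldest sample.
        self.buffer[self.count % self.capacity] = sample
        self.count += 1

    def playback(self):
        """Return the retained samples, oldest first."""
        if self.count <= self.capacity:
            return self.buffer[:self.count]
        start = self.count % self.capacity
        return self.buffer[start:] + self.buffer[:start]

# A 30-"minute" recorder fed 93 minutes of samples keeps only minutes 63..92,
# which is why the start of a longer event falls outside the recording.
rec = RingRecorder(30)
for minute in range(93):
    rec.record(minute)
```

This is why Air Traffic Control recordings were needed to fill in the earlier minutes: anything written more than one buffer-length before the recorder stopped has already been overwritten.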
<urn:uuid:a901442f-35fb-43b6-b8ab-a2ea5727b5d0>
CC-MAIN-2021-43
https://www.flight93friends.org/learning-center/crime-scene-investigation/the-black-boxes
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584567.81/warc/CC-MAIN-20211016105157-20211016135157-00470.warc.gz
en
0.955838
2,710
3.984375
4
Some people like it on their bread, others in their tea, cake, or even cocktail. Honey is not only versatile, it is delicious. Except to me; I’m not a huge fan of the taste of pure honey. But put it in a cocktail and I am in! Yet even if honey is a total non-starter for you, there is something here for you too. Because honey is one of those innocuous everyday items which hide an utterly insane array of scientific curiosities. Just like the non-Newtonian fluid made from cornstarch (called Oobleck) or the thixotropic properties of ketchup (becoming less viscous when force is applied), honey has hidden depths. Who knew that honey is actually a supercooled liquid for instance? Well, you will, in a few minutes. Honey, in most cases, is basically nectar which was regurgitated and partially digested by honey bees (yet for some obscure reason we deem eating the bees to be more gross than eating the honey). I say “in most cases” because there are examples of wasps producing honey, specifically the Mexican honey wasp, Brachygastra mellifica. Which sounds terrifying to me. Hell, there is even the stranger-than-fiction honeypot ant which uses its body as a living storage unit for honey. Luckily, there is also the wondrous stingless bee, mostly present in Australia and Brazil, which forms smaller hives but still rocks in terms of honey productivity. This one is even more desirable if you uncover the suppressed and horrifying truth that most bee species actually don’t die when they sting you. In fact, honey-making is exquisitely rare even among bees: Earth harbors around 20,000 bee species, yet only a small fraction of them are known to produce honey. I also say “in most cases” because honey doesn’t always stem from nectar.
Sometimes it is produced from the sweet excretions of aphids (the slave insects of ants), which is potentially even more gross but euphemistically called “honeydew honey.” But hey, it does have a stronger flavor than regular honey, so who’s to complain. After ingesting their sweet payload, bees make their arduous way back to the hive. Well, after digesting some of the nectar before they reach home sweet home. Because, you see, collecting the material for honey is a tough and demanding business. For 1 kg of honey, more than four million flowers need to be harvested. Which, incidentally, means a trip around the world for the bees. Twice. The average bee, busy as the cliché purports, can take care of up to 100 flowers per trip. And they do about 12 trips per day. Yet still, at the end of their six-week life, an average honey bee can only look back on a paltry 0.8 g of honey. Even worse, bees need about 9 kg of honey for one kilogram of beeswax. At least bees are efficient, getting 29 kcal out of every kcal invested in gathering nectar for honey. Humans, monsters that we are, consume close to an entire kilogram of honey per year per person. And that despite the fact that, per teaspoon, honey contains more calories than pure sugar (21 vs. 16). Though, to be fair, we do produce about 100 times more sugar than honey. But back to the bees. The “honey” that was digested and brought back by our busy foragers still contains far too much water to be useful for the colony. In this form, natural yeast will start fermenting all the delicious sugars in no time. So what do you do to get rid of excess moisture? You crank up the heat and provide efficient air circulation! That’s exactly what bees do: they flap their wings & generate body heat to bring the water content of honey down to about 17%. Now it truly is long-lasting, in the range of thousands of years (potentially edible honey was found in ancient Egyptian tombs!).
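The foraging figures above invite a quick back-of-the-envelope check. The sketch below uses only the numbers quoted in the text (100 flowers per trip, 12 trips per day, a six-week life, 0.8 g of honey per bee, and 21 vs. 16 kcal per teaspoon):

```python
# Back-of-the-envelope arithmetic using the figures quoted in the text.
flowers_per_trip = 100
trips_per_day = 12
lifespan_days = 6 * 7        # a six-week life
honey_per_bee_g = 0.8        # lifetime honey yield of one bee, in grams

# Flowers one bee visits over its whole life:
flowers_per_bee = flowers_per_trip * trips_per_day * lifespan_days

# Bee lifetimes needed for a single kilogram of honey:
bees_per_kg = 1000 / honey_per_bee_g

# Honey vs. table sugar, calories per teaspoon as quoted (21 vs. 16 kcal):
calorie_ratio = 21 / 16
```

By these numbers, roughly 1,250 bee lifetimes go into a single kilogram of honey, with each bee visiting on the order of 50,000 flowers, and a teaspoon of honey carries about 31% more calories than one of sugar.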
Though only if the honey container is closed, as honey is hygroscopic and will grab all the air moisture it can get. Once its water content surpasses ~25%, the merry dance of fermentation can commence. Seen on a colony-level, honey usually originates from an amalgam of different nectars. Whatever flowering plants are in reach for a hive will be harvested. Yet on a bee-level, the flow of amber liquid stems from a single source. Bees are flower-monogamous. Clever humans have, long ago, combined these two factoids and placed hives in areas in which all surrounding nectar comes from a single source. By that, the resulting honey will be pure. And there is a wide array of different options for producing single-origin honey. Buckwheat honey, eucalyptus honey, chestnut honey, you name it. Of course the origin of a pot of honey will determine its flavor profile, including its roughly 100 volatile organic molecules. The bland & ubiquitous vanilla of honeys is clover honey, whereas eucalyptus honey makes a name for itself with a hint of a menthol-like taste, and orange blossom honey is as citrusy as its name suggests. And then there’s buckwheat honey: dark, full of protein, characterized by a malty flavor thanks to the presence of methylbutanal. The provenance of a type of honey not only greatly influences its taste but also its properties as a physical substance. Here are some things about honey that no one in their right mind should know: honey is technically a supercooled liquid at room temperature, just waiting for nuclei/particles to crystallize around. And it stays a liquid for an impressively long time. Even at -20°C (a.k.a. your freezer), honey may appear to be solid while actually flowing, just with an incredibly high viscosity. You really have to crank up the cool, all the way to -50°C, to coax honey into its glass transition, in which it forms an amorphous solid. 
At room temperature, however, the two main sugars in honey, glucose and fructose, work together to form this viscous amalgam of a fructose solution with precipitated glucose. That means the higher the glucose content, the more precipitated material in your honey. Rapeseed honey, exceptionally skewed in its glucose/fructose ratio, thus is considerably more likely to crystallize than, say, chestnut honey. Now is the time to let you in on the best-kept secret of honey. Ready to flip some tables? Here we go: While it may appear that honey crystallizes in your pantry when you don’t use it for a while, honey crystals in fact form when you stir the honey. Yes, you were the villain all along. Granted, these crystals can grow in size after forming during their stay in your pantry. Their optimal growth temperature is around 15°C, so if you really want to avoid crystal growth, put your honey into the fridge, at 4°C, where crystal growth subsides. If it’s already too late however, and you stirred your honey into a crystalline grave, you can always melt the crystals. Above 50°C, all honey crystals will disintegrate and it can flow freely yet again. Not only is the viscosity of honey greatly reduced by warming; above 50°C, the water content of honey does not even impact its viscosity anymore. Speaking of the flow of honey, I want to briefly come back to Oobleck & ketchup. While most types of honey are righteous, upstanding Newtonian liquids, heather- and manuka-derived honey are tempted by the dark side and exhibit thixotropic properties. Like ketchup, their viscosity decreases when agitated. The weirdness of honey certainly doesn’t end here. For one, honey is electrically conductive. Though infinitely more interesting, honey ages & matures. The Maillard reaction, of browning fame, is doing its amino acid & sugar-combining thing even at room temperature.
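The storage-temperature thresholds scattered through this story (glass transition near -50°C, crystal growth subsiding at fridge temperature, peaking around 15°C, and crystals dissolving above 50°C) can be collapsed into a toy lookup function. The category labels and the exact band around the 15°C optimum are my own simplification, not values from any standard:

```python
def honey_crystal_behavior(temp_c: float) -> str:
    """Rough classification of honey crystallization behavior by storage
    temperature, using the thresholds quoted in the text. The category
    names are illustrative, not a scientific standard."""
    if temp_c <= -50:
        return "glass transition (amorphous solid)"
    if temp_c > 50:
        return "crystals dissolve"
    if temp_c <= 4:
        return "crystal growth subsides"
    if 10 <= temp_c <= 20:
        return "near optimal crystal growth (~15 C)"
    return "some crystal growth possible"
```

So pantry temperatures sit uncomfortably close to the crystal-growth optimum, while the fridge and a gentle warm-water bath both offer escape routes.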
Honey, composed of both protein and sugar, visibly darkens after a few months as the Maillard reaction schleps itself forward. Of course, this process can be dramatically accelerated by an increase in temperature. But be careful: honey has poor thermal conductivity for a liquid, so applying heat can actually result in localized caramelization, as the heat is not properly distributed among the whole batch of honey. Not that that’s necessarily a bad thing though. There is (at least) one more curious property of honey that most people would never have guessed: honey is acidic. With an average pH value of 3.9, the sickly sweet liquid is more acidic than tomato juice for instance. So acidic in fact that it would be a bad idea to store honey in metal containers for a prolonged time, as it would corrode the metal. A silver lining of this acidity is that it constitutes one of the many ways honey exerts its antimicrobial effects, by preventing the growth of microorganisms. The most intuitive antimicrobial mechanism of honey is its high osmolarity because of the high concentration of solutes, the reason for its hygroscopicity, drawing all available water from potential microbial tenants. Yet even when diluted with water, honey has been shown to be more effective against microbes than simple sugar water. Indeed, honey does have other microbe-defeating arrows in its quiver. One of them would be bleach. Or, better put, hydrogen peroxide, the molecule at the center of oxygen-based bleach. Yes, your breakfast spread basically contains bleach. An enzyme in the stomach of bees, glucose oxidase, transforms glucose into gluconic acid (which incidentally leads to the acidity in honey) and hydrogen peroxide, doubly protecting the liquid gold in a bee hive. Interestingly, not all types of honey contain hydrogen peroxide. Both manuka and jelly bush honey, for instance, are devoid of any detectable hydrogen peroxide.
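Since pH is the negative base-10 logarithm of the hydrogen ion concentration, honey's average pH of 3.9 translates into a concrete concentration, and each full pH unit means a tenfold change in acidity. A quick check (the pH 4.9 comparison value below is just an arbitrary one-unit offset for illustration, not a figure from the text):

```python
def hydrogen_ion_concentration(ph: float) -> float:
    """pH = -log10([H+]), so [H+] = 10**(-ph), in mol/L."""
    return 10 ** (-ph)

# Honey's average pH per the text:
honey_h = hydrogen_ion_concentration(3.9)   # about 1.26e-4 mol/L

# A one-unit pH difference is a factor of 10 in hydrogen ion concentration:
tenfold = hydrogen_ion_concentration(3.9) / hydrogen_ion_concentration(4.9)
```

That tenfold-per-unit scaling is why a seemingly small gap on the pH scale can separate "corrodes metal containers" from "harmless".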
Finally, there have been reports of antimicrobial peptides and proteins in honey, including honey bee venom-like proteins, further ensuring the sanctity of this important source of calories. Under certain circumstances, honey can be perilous not only for our distant, single-celled cousins but also for ourselves. Consider the case of mad honey for instance. Stemming from rhododendron nectar, this wild concoction contains grayanotoxins, neurotoxins which can cause cardiac issues in humans. Despite these undesirable outcomes, countries such as Nepal and Turkey deliberately produce mad honey for its hallucinogenic properties. Another issue for humans can be introduced by tutin, another powerful neurotoxin, from the tutu bushes in New Zealand. For this, you need to recall the honeydew honey, produced by gathering excretions from sap-sucking insects. Only in this case, the insects in question (passionvine hoppers) gorge themselves on tutu bushes and pass the toxin tutin onto their excretions. Finally, though admittedly very rarely, the enjoyment of honey can also be impeded by cases of honey allergy. The aforementioned misanthropic tendencies of honey notwithstanding, humanity benefits from honey. Next to its delectable properties, it also exhibits some desirable medicinal properties. Though most claims of healing-by-honey are vastly overblown, it can for instance be topically applied to help with post-operative infections. One exceedingly common application of honey, the soothing of coughs, was however not conclusively supported by data. According to a Cochrane review, no strong evidence pointed to the purported beneficial effects of honey on acute as well as chronic coughs. Take that, grandparents of the world. But if you want to nevertheless pour honey into your tea or milk, be my guest. After all, despite the abounding richness of science around the topic of honey, in the end most of us consume honey because of its flavor.
But the next time you do so, think about the craziness contained in each and every tiny droplet of that amber fluid.
<urn:uuid:1afca793-702d-4ced-a55c-532f68b2722f>
CC-MAIN-2021-43
https://dbojar.com/2020/02/16/honey-acidic-containing-bleach-and-a-supercooled-liquid/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00430.warc.gz
en
0.942906
2,532
2.609375
3
During the Industrial Revolution, the English word “class” morphed from a general term for a division or group to a specific term for a position of rank within a social system based on economic wealth. Around the same time, the word “popular” began to be applied to communication and culture with meanings ranging from “liked by many people” to “created by many people.” Thus, social class and “the popular” simultaneously arose as objects of intellectual interest. Indeed, the very question of what counts as popular culture or popular communication has always led to questions of class. In the late eighteenth century, when J. G. Herder coined the term “popular culture,” he had in mind the peasant culture that, for him, represented a more authentic alternative to Europe’s elite, classical culture. By the late nineteenth century, when Matthew Arnold wrote Culture and anarchy, he understood popular communication as the culture of the “masses,” the culture of cheap novels and melodrama that was crowding out the high cultural tradition of “the best that has been thought and said” (Arnold 1960, 6). Though neither Herder’s romanticism nor Arnold’s elitism made explicit reference to social class, the issue of class and its contested role in defining culture, communication, and “the popular” lay just below the surface for both. Mass Versus Class Culture In the early twentieth century, as the field of popular communication came into focus, so did debate about the role of social class. One key debate pitted a European-influenced mass culture approach against an American-influenced class culture approach. The former approach was represented by the group of German émigré intellectuals known as the Frankfurt School. Horkheimer and Adorno (1972, 1st pub. 1944), for instance, blamed the ideological influence of mass culture for forestalling revolutionary social change. 
The culture industry cranked out nearly identical forms of entertainment that lulled the masses into a false consciousness of mindless consumerism. The Frankfurt School thus shared the animosity of elitists such as Arnold to the popular culture of their time, yet they saw popular culture as an imposition from above rather than a contamination from below. Within US communication research, the Frankfurt School was opposed by a more sanguine class culture approach. This pluralist perspective viewed social classes as having their own autonomous and equally valuable cultures. In Katz and Lazarsfeld’s (1955) research on communication flows among a sample of women, social class is treated as just another influence (along with social contacts and position in the life cycle) on individuals’ lifestyle choices regarding public affairs, movies, fashion, and household products. Gans (1974) later solidified the American approach, arguing that the cultural affinities of different class and educational groups should be viewed not as a hierarchy but as a collection of diverse “taste cultures.” In Gans’s view, so-called mass culture was just as valid for its less wealthy, educated public as high culture was for its public, and those critics on the left and right who derided mass culture were mere elitists who feared that the popularity of mass culture would erode their own privileged status. The lines drawn within these mid-century debates have remained fairly resilient. While positivist researchers are accused of underemphasizing the determining role of social class in popular communication, critical scholars are faulted for overstating the influence of class. A rapprochement of sorts took form within the Centre for Contemporary Cultural Studies in Birmingham, England. The Birmingham School originated with scholars who became disenchanted with classical Marxism’s economic determinism. 
They built a model of scholarship that was rooted in Marxist notions of class conflict but showed an engagement with the culture of working-class people. In contrast to the Frankfurt view of the working class as victims of a totalitarian mass culture, the Birmingham scholars made the case that progressive social change remained possible and would be made by working-class culture. The Birmingham School Inheritors of this tradition were among the first scholars to grapple with modern practices of popular communication in ways that did not merely condemn them. Under the direction of Stuart Hall, work within the Birmingham center turned to ethnographic studies of working-class sub-cultures. Birmingham scholars decoded the complex sign systems that lay below the surface of the music and style of sub-cultures like the Teddy boys, mods, and punks. Much of this work took its lead from Raymond Williams’s (1973) argument that cultural practices could be divided into dominant, residual, and emergent formations. While dominant formations wielded the majority of power and residual formations were remnants of the past, emergent formations created lines of resistance and pointed the way to new social structures. By Birmingham scholars’ accounts, then, youth sub-cultures were not mere curiosities but potentially a kind of vanguard. On another front, Birmingham scholars looked beyond the Frankfurt School’s grim view of media culture by giving new attention to media reception. Hall (1980) forged a model of reception by integrating Birmingham-style cultural analysis with semiotics. Media texts constitute a “structured polysemy.” They are open to interpretation while usually favoring interpretations consonant with dominant ideology. 
The ways audience members interpret such texts are influenced by their position in the social structure; thus dominant readings will tend to be produced by those whose social position aligns them with dominant ideology (i.e., upper-class people), while oppositional readings will be produced by those whose position places them in opposition to dominant ideology (i.e., working-class people). The world conjured by Birmingham theorists proved alluring to media scholars. It put a neo-Marxist spin on popular communication research and thereby lent such research a new political urgency. As studies accumulated of symbolically rich sub-cultures and resistant audiences, however, three criticisms arose of the Birmingham model. First, the narrative of dominance and resistance that underlay such work often reduced it to a trite Manichaeism. As Meaghan Morris (1988, 15) quipped, “I get the feeling that somewhere in some English publisher’s vault there is a master-disk from which thousands of versions of the same article about pleasure, resistance, and the politics of consumption are being run off under different names with minor variations.” Second, despite the Marxist origins of cultural studies, its focus on audiences and relative disregard for economic and textual analysis opened it to charges of political complacency. Third, empirical research contradicted the assumptions of cultural studies. David Morley (1980) tested Hall’s model of audience readings by showing a television news show to viewers of different social classes. He then interviewed them about the program and classified their readings. Contrary to Hall’s predictions, Morley found that the working-class apprentice engineers tended to produce dominant readings of the program while the upper-class university students produced oppositional readings. Such results pointed to problems in cultural studies’ attempt to reconcile audience analysis and class analysis. 
Social class is a multivalent concept whose influence on media interpretation is complex and changeable. As far back as the 1940s, American media research indicated that audience members’ critical or uncritical reception of mediated messages depends on education. Given the covariance of education and social class, it is as likely that it was education that accounted for Morley’s university students’ oppositional readings as anything directly to do with class. Despite their differences, Frankfurt and Birmingham scholars shared a common theoretical trajectory, agreeing on a Marxist ideological critique while disagreeing on the potential for resistance. A more coolly analytical line of theorizing on popular communication and class can be traced to Weber’s “Class, status, and party” (1958). While class is determined by control over production, status is rooted in consumption and lifestyle. Whereas Marx understood the economic base as the driving force behind status, Weber saw class and status as autonomous and mutually influential. A similar emphasis on status and consumption appeared in Veblen’s The theory of the leisure class (1994), which portrayed social life as a contest in which people assert status through conspicuous consumption and conspicuous leisure; that is, the ostentatious waste of money and time. Bourdieu’s Notion Of Consumption As Communication The notion of consumption as a communicative system of status is elaborated in Pierre Bourdieu’s (1984) work Distinction. For Bourdieu, the cultural field stands alongside the social and economic fields as avenues through which class divisions are reproduced. “Cultural capital” includes tastes, abilities, and habits of consumption that social actors employ to achieve a higher status. Like Veblen, Bourdieu sees economic capital as translatable into cultural capital, yet for Bourdieu, the reverse is also true: habits of consumption can influence economic capital. 
The interpenetration of wealth and culture occurs through the subtle mediation of “habitus,” an unconscious system of dispositions ingrained through socialization. In an analysis of French survey data, Bourdieu finds class variations not only in tastes, but in the very bases of taste. For instance, he argues that the upper class’s distance from necessity leads them to employ an “aesthetic disposition,” observing objects of culture with a disinterested eye for formal beauty untainted by moral, practical, or sensual considerations. Bourdieu’s field theory forms a counterpoint to mainstream ideologies of social mobility and pluralism, pointing to the symbolic boundaries cultivated by differences in taste and conduct. His work has provided the terms for an analysis that forgoes both the Frankfurt School’s ideological monoculture and the Birmingham School’s dualistic struggle of dominance and resistance. Like these other paradigms, Bourdieu’s work has led to valuable research in popular communication, yet it too has its critics. In his response to idealist notions of culture handed down from Kant, Bourdieu subjects culture to a cynical reductionism. As sociologist David Gartman (1991, 422) writes, Bourdieu “reduces cultural choices to passive reproduction of structural necessities.” As a result, there is no room for agency in Bourdieu’s conception of culture, no possibility of changing rather than reproducing the class system. To the degree that trends in theory follow politics, it is not surprising that Bourdieu’s Weberian realism has gained ground. His vision of an aesthetic elite set against a moral and practical working class resonates at a time when conservatives have appropriated populist rhetoric through appeals to so-called “values issues.” At the same time, the conservative political environment seems ripe for a resurgence of Frankfurt-style critical theory. 
Evidence of such resurgence in the US can be found in Thomas Frank’s (2005) recent work, which traces conservatives’ success in convincing working-class voters to vote against their own economic interests. Hence criticism of popular communication and class may come full circle, to renewed cries of mystification and false consciousness. - Arnold, M. (1960). Culture and anarchy. London: Cambridge University Press. (Original work published 1882). - Bourdieu, P. (1984). Distinction: A social critique of the judgment of taste. Cambridge, MA: Harvard University Press. - Frank, T. (2005). What’s the matter with Kansas? New York: Metropolitan. - Gans, H. (1974). Popular culture and high culture. New York: Basic Books. - Gartman, D. (1991). Culture as class symbolization or mass reification? A critique of Bourdieu’s Distinction. American Journal of Sociology, 97, 421–447. - Hall, S. (1980). Encoding/decoding. In S. Hall, D. Hobson, A. Lowe, & P. Willis (eds.), Culture, media, language: Working papers in cultural studies, 1972–79. London: Hutchinson, pp. 123–138. - Horkheimer, M., & Adorno, T. (1972). The dialectic of Enlightenment. New York: Seabury Press. (Original work published 1944). - Katz, E., & Lazarsfeld, P. (1955). Personal influence: The part played by people in the flow of mass communications. Glencoe, IL: Free Press. - Morley, D. (1980). The “Nationwide” audience: Structure and decoding. London: British Film Institute. - Morris, M. (1988). Banality in cultural studies. Discourse, 10, 3–29. - Veblen, T. (1994). The theory of the leisure class. New York: Dover. (Original work published 1899). - Weber, M. (1958). Class, status, and party. In H. Gerth & C. W. Mills (eds.), From Max Weber: Essays in sociology. New York: Oxford University Press, pp. 180–195. (Original work published 1924). - Williams, R. (1973). Base and superstructure in Marxist cultural theory. New Left Review, 82, 3–16.
<urn:uuid:58e09cd2-3eac-48d5-8158-49df9c13accf>
CC-MAIN-2021-43
https://communication.iresearchnet.com/popular-communication/popular-communication-and-social-class/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00350.warc.gz
en
0.930005
2,772
3.421875
3
Look no further: California’s fall fig season is upon us, and like the peach and apricot season of early summer, it is deliciously short and sweet. There are two seasons for domestic fresh figs. The first crop of the season, called the “breba” crop, develops in the spring on old shoots (the previous season’s growth) and arrives in the first few weeks of June. The second or “new wood” season typically runs from August through October. Making matters trickier yet is understanding when figs actually come into season: harvest dates are approximate, since weather, variety, and area all affect when the fruit is ready for picking, every year is different, and individual varieties have different harvest times. Usually the trees produce a crop within a month, so check your local farm to find out when they’ll be in season. Broadly, California’s fresh fig season starts in mid-May and continues through mid-December; figs are generally in season from June through November, peaking from July through August (and later in the North). Some European figs are often available throughout autumn, and dried figs are available throughout the year. If you are outside of Northern California, these dates might not apply to your growing region.

Figs are one of the oldest cultivated fruits in the world and can grow in a wide range of habitats; remnants of figs have been found in excavations of sites traced to at least 5,000 B.C. Today, most of the world’s figs are grown in Greece, Portugal, Turkey, Spain, and California. In the United States, most figs come from California, which for most of the year mimics a sunny, Mediterranean climate, though they are also grown in other warm states including Texas, Georgia, and Alabama. Mission figs got their name when Franciscan monks brought these dark purple figs to their San Diego-area missions in the late 18th century; dates, similarly, were introduced to northern Mexico and California by Spanish missionaries in the late 1700s. Figs became a mainstay of California meals and a favorite among American settlers, and by 1867 more than 1,000 acres of figs grew in the Sacramento Valley, according to CaliforniaFigs.com. Still, of the thousands of cuttings of Turkish Smyrna figs brought to California in the 1880s, not one bore fruit, and it was not until the turn of the 20th century, when superior cultivars were introduced, that the California industry was born. A research station was established in Indio, California in 1904 to study date and citrus cultivation. During the 1960s and 1970s the largest university fig collections in southern California were removed from UCLA and UC Riverside; now most of the university collections are in Fresno and Davis, and a few private collectors hold most of the rare figs in southern California.

Fig production in California today is primarily located in Fresno, Madera, and Kern counties in the San Joaquin Valley, and in Riverside and Imperial counties in Southern California; 38,660 pounds of figs were grown in California in 2012. Fig farming is demanding work (“this livelihood is not for the faint of heart”), but “California farmers are passionate folks,” says Kevin Herman, and plenty of farmers have what it takes to work the land here. Farmers grow six primary varieties, and each variety of California fig offers its own unique flavor profile and characteristics, much like the differences among California’s many wine varietals. The most recognizable are (black) Mission, Kadota, and Brown Turkey; the most common variety is the Black Mission fig, followed by the Brown Turkey fig and the Green Kadota fig. While most figs don’t need pollination to produce fruit, Calimyrna figs do. Tiger Stripe figs, botanically classified as Ficus carica, are a variegated common fig variety that belongs to the Moraceae family; the striped fruit is known by many other names, including Panache, Panachee, and Variegato figs, and is a late-maturing variety that requires a long, warm growing season to develop its high sugar content.

The fig grows best and produces the best quality fruit in Mediterranean and drier warm-temperate climates, flourishing in dry and sunny terrain like that of the Mediterranean and Middle East, though with extra care figs will also grow in wetter, cooler areas. Figs grown in high-lime soils produce higher quality fruits for drying (Bapat and Mhatre 2005). Several environmental factors can affect when a fig tree produces fruit: rains during fruit development and ripening can cause the fruits to split, and spring frost often eradicates the breba crop along with the remaining previous season’s growth. Young figs do not fruit their first year and can take a long time to bear, and while fig trees, whether in California or Florida, generally bear fruit in two crops, many trees produce only one crop per season.

To keep the tree to a reasonable size, you need to know which variety you have and what pruning works best for it. Prune during the dormant season, as figs bleed a latex sap if pruned during the growing season. Most of the light-skinned varieties, and ‘Brown Turkey’, can be pruned back to three nodes on each branch after first thinning out undesirable branches. Harvesting figs in the right manner and at the right time allows you to get the most from your tree: wait until the figs are ripe to harvest, because figs will not continue to ripen after they are picked like many other fruits. You can tell that it is time for harvesting when the fruit necks wilt and the fruits hang down.

California fresh figs are back in season and perfect in salads, sauces, and salsas; for this recipe, I used black Mission figs (chopped dried figs also work). Best in autumn, figs are intensely sweet, so they’re used in desserts, though they work in savory dishes and are eaten whole as well. Figs have a variety of potential health benefits: along with the fruit, fig leaves and fig leaf tea appear to be beneficial, and dried figs in particular may help relieve constipation. Buying in a season when figs are a common local crop is the best tip for making this super fruit even more affordable, and online grocery shopping has made it easier than ever to compare prices without having to leave the comfort of your own home. You can also order fresh figs directly from growers in California. The fig trees around our neighborhood are brimming with fruit this time of year; some neighbors even start putting out signs, offering to share.
Harvesting figs in the right manner and at the right time allows you to get the most from your trees. The first harvest of the season is called the breba crop, which comes in on the previous season's growth, and too much water late in the season can cause the fruits to split. Because most U.S. figs are grown in California, a California harvest chart might not apply to your growing region. More than 1,000 acres of figs were grown in the state.
<urn:uuid:5941b006-6e91-41db-96d6-2ac2dc4bb4c5>
CC-MAIN-2021-43
http://bramasole.pl/8ue2xqa/5c220b-when-are-figs-in-season-in-california
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00510.warc.gz
en
0.953823
2,913
2.90625
3
Imagine just how much variation and diversity would occur between those 75 people and their papers if the prof left it all to chance—all of these students like different fonts, would cite things differently based on their preferences, and would hand in widely varied papers, at least doubling the time it would take to read those papers. Make that prof love you by following these directions. If you follow the directions, this prof will direct their ire elsewhere. The rubric is a list of direct touch points that will be examined by the professor as they grade your work. In this case, you can see five discrete categories, each with its own stakes, and the number value that corresponds to your performance. The prof will take the rubric and keep it within reach while grading. Along with making notes on your paper, the prof will also check off your performance in each category—summarizing your performance in that category. If you have a hundred-point paper, each one of these categories is worth 20 points. To get an A on this paper, you have to perform with excellence in 3 categories and above average in at least 2 of the other categories. At least one of them—formatting—is a gimme. All it takes is attention to detail—Microsoft Word has all the tools you need to score perfectly there.
Focus on Development and Body Paragraphs for your other two. It might seem like a silly thing to do, but an anchor sentence is as vital as a thesis statement. Note that there is nothing about originality in this rubric. In this paper, I will demonstrate my understanding of a linguistic concept I learned this semester and how it relates to my field of study. I will demonstrate this knowledge by staying organized, using relevant research, and sticking to my thesis statement. Yes, it seems a bit silly. But now you have an anchor. Now all you need to know is where it could all fall off the rails. In this step, you name your strengths and weaknesses so you know exactly where you stand walking in. Simple as that. Now all you need to do is play to those strengths and be cognizant of the weaknesses. Completing this second step immediately—before you go to bed on the day you get the assignment—is essential to acing this paper. Set the plan and execute, execute, execute—this is the only way to achieve the results you want. If your time is nebulous, you will be more likely to drop the ball. Keep in mind that one of the crucial ingredients of successful writing is time. You need time to think, research, and create. If you fail to acknowledge this, you will write a crummy paper every time. Resist the impulse to think of the paper as a hurdle. Make an appointment with the writing center to get a semi-professional set of eyes, and hand that paper to a friend for quick notes. Your next step is to organize your time. Most of your sessions should be no more than an hour or two, but some activities—like research—might need to be a bit longer. If you notice, most of your writing time will be spent on the front end—creating the first draft of the paper. This is because everything after that will be revisionary. If you stick to this schedule, you will not only complete your paper on time, you will complete it well.
Every writer on the planet will tell you that the schedule is the foundation of good writing—the more time you spend in the chair, the better the writing gets. Free writing is often popular, but it can be really time consuming, and also not particularly helpful for research papers. As well, some profs advise talking it out with a friend, which can be distracting. The best method for this is mapping. Mapping is a technique that allows you to freely record your ideas in a logical manner. Mapping will give you strong guiding questions as well as demonstrate how your ideas are connected, which is super useful for writing a long research paper. Note that the ideas get more specific the further away they are from the center topic. Circle the ones that are most specific and use them for your paper. So, apply your field of study, your interests, or something topical to the subject. Here are some ideas based upon that… Out of the above, which sounds like it has the most juice? Probably number one. Even without doing any Googling, it seems evident that there will be research in this area that you can draw from. As well, you can rely on non-technical, non-academic observation to give you better ideas—you can use your experience to shape your subject matter. So go with number 1. Take a look at these specific ideas that you can use in your research phase. And look, you can scroll to the bottom of the page to get a jump on specific articles to use in your research. As well, 51 mentions your keyword! With our tutorial on writing a thesis statement, you will see thesis examples, ways to craft a thesis sentence, and how to organize your paper around a thesis statement. Second, you will need specific examples to write about. Third, you will need to organize those three items effectively. And, fourth, you will need to make an outline. The writing of the thesis is broken into four parts. Master these and the paper will be a cinch.
The first step to creating a successful thesis statement is generating a concise overview of the topic at hand. In this case, technology and the ESL classroom is the topic upon which the paper is based. So the first portion of your thesis should be a generalized statement that describes the imperatives which make your paper relevant. Begin by making a list of why you think your paper topic is relevant. In this case, we could say that… Sounds pretty good, eh? Teachers who do not embrace technology in their classes risk losing students to academic boredom, not to mention that they will be perceived by their students as tedious and irrelevant. Even better! With adding then subtracting, expanding then consolidating, moving from the general to the specific, you can craft an overview to be used in the thesis. Also, note the use of old tricks, like opposing vocabulary extracurricular v. So, check the rubric—did we hit any goals? See Development, Language and vocabulary, and Sentence structure! The problem presented was that instructors take away learning tools from students and replace them with less interesting forms of learning, and stop social interaction with the classroom. As well, instructors give little attention to technology-based learning tools as an avenue for education. ESL instructors should make using technology a priority of education, both inside and outside the classroom. ESL instructors should try to increase digital interactions between students outside of class, use digital technology inside of class, and make digital avenues of education a learning priority. Pretty good, but we can make it sound even more academic. Again, use the Word synonym function, and try to bring out the parallel structure even more. All we need now is to connect the two sentences together with some kind of sentence, transitional phrase, or conjunction. In this case (as with almost everything in writing, actually) keep it simple. Wait a sec!
So use it with abandon, so long as you complete the sentence! Now, check the rubric again! Check and check and check! And, to top it all off, you now have three areas of research to focus on! In my classroom, my students use editing symbols when fixing their own writing or giving suggestions to other students on how to fix their writing. Our editing marks become a common language among my students. I sincerely hope that this post has been helpful to you and that you will be able to take your students from sentence writers to five paragraph essay superstars! The strategy suggested will definitely help my students to be the best writers. Thank you so much Jenifer. This article is fantastic! Thank you. Also my children used to attend a Northern Hills Elementary School. We like that you included that name in your example. Thank you so much for putting this method out here. Your work is so appreciated. All resources are like gold to me. Step 1: Write ALL. I usually get sentences similar to these: Pie is my favorite dessert. I wear my jacket when it is cold. This school is a nice place to learn. The tree is tall. This is where I aim for students to get in their sentence-writing before moving on: Pecan, cherry, apple, or pumpkin… any type of pie is delicious! My dad spends his Saturdays washing and shining up his candy apple red Jeep. My puffy, hooded jacket is the first thing I reach for on chilly mornings. My school, North Hills Elementary, has the best teachers and students. The tall Redwood tree in my front yard is a welcome sight to visitors and makes my house look spectacular.
Step 3: Simple Paragraphs Once my students are on the right track with sentences, we start working on simple paragraphs. Step 4: Expand to Five Paragraph Essays Once students are pros at writing simple paragraphs, we expand into five paragraph essays. The introduction paragraph has three parts: the hook, commentary, and thesis. Thank you for taking the time to leave a comment. Thank you so much, Stacy! I wish you all of the best in your homeschooling endeavor! You are very welcome, Mel! Thanks for taking the time to leave a comment! Some topics may be too broad and need to be narrowed down. Choose your final topic—give Mrs. Weber your final choice. Creating an Outline 1. Look through your sources: What is most important about your event? What information do you need to include to properly explain and present your event? Why is this event important? Start Researching 1. Create Source Cards 2. How Do I Take Notes? Setting up your note cards Use your outline! Use your sources and summarize bullet points only! Turning Your Research into a Paper 2. Organize your note cards so they match your outline 2. Writing an Introduction 3. Each new idea should be a separate paragraph 4. Each paragraph needs a topic sentence 5. Think about transitions between sentences and paragraphs 6. Weber: emily. Editing and Revising Self Peer Mrs. Presentation Present your event to the class minutes Choose the most important information about your event to share to the class People Involved, Places, Dates, Why is this event important? Presentation may take any form Visual aids are helpful!
Narrative Writing: Prewriting Organizer. Teach students to organize their thoughts before they start writing with this prewriting organizer. Great Writing Starts with Golden Ideas. Use this worksheet as a tool to assist your young writers in selecting the perfect idea for their narrative. Hook Your Reader! Students will study effective hooks and have the option to craft some hooks for their own writing. Metaphors in Poetry. In this worksheet, students will read Carl Sandburg's poem "Fog," then analyze the poem for figurative language. Collecting Strong Evidence. This graphic organizer will help your young writers organize and explain their supporting evidence. Narrative Writing: Add Sensory Details. Take mystery and suspenseful writing to the next level with this worksheet! Students can revise sentences and then use this exercise to make their own writing more evocative. Use this worksheet to practice adding vivid details to sentences. These example literary responses model how to cite and explain evidence to support a claim. Malala: Education Advocate. Students read about Yousafzai's life, then answer a series of thoughtful questions about both education and the issues they care about. Complete the Story: Dotty and the Necklace. The first half of Dotty's story is here, but the second half is missing! Can your child come up with her own ending to complete the page? Language Frames: Nonfiction Summary. Support discussions about main ideas and summarization with these helpful language frames. This worksheet will help your students organize their thoughts and information from a nonfiction paragraph or text. Sort Out the Scientific Method 1. McSquare needs help sorting out his lab reports.
Can your child read each item, then label it with the correct scientific method step? Opinion Essay: Idea Map. Students will craft their own essay using this graphic organizer as a helpful way to get started. Argument Writing: Peer Review Rubric. Use a kid-friendly rubric to help your students peer-review their persuasive essays!
<urn:uuid:8f674301-8b84-43ef-9f20-39169b1cc822>
CC-MAIN-2021-43
http://telas.smartautotracker.com/do-my-women-and-gender-studies-book-review/5135-how-to-write-research-papers-for-5th-and-6th-grades.php
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00150.warc.gz
en
0.930469
3,470
2.5625
3
Liturgical Books of the Roman Rite
The term liturgical books means the official books of the Roman Rite published by authority of the Holy See. The official text of a liturgical book is contained in what is called a typical edition (editio typica), one that is produced by the authority and under the supervision of the Congregation for Divine Worship and the Discipline of the Sacraments. ORIGIN AND HISTORICAL DEVELOPMENTS It seems clear that in the earliest days the only book used at Christian worship was the Bible, from which the lessons were read. The account of Justin Martyr (d. c. 165) in his first Apology (67; J. Quasten, ed., Monumenta eucharistica et liturgica vetustissima [Bonn 1935–37] 19–20) speaks of reading the memoirs of the Apostles or the writings of the Prophets before the Eucharist, but for the latter he mentions only that the president offered up "prayers and thanksgivings" to the "best of his power." This means that he improvised in accordance with a central theme, and although such solemn prayers would have been prepared in advance, there does not seem at first to have been any written formula that was used. The first written evidence of a formulary for the Eucharist, or at least for its anaphora (eucharistic prayer), is to be found in the Apostolic Tradition, although this, it appears, was not an official book (4; B. Botte, ed., La Tradition apostolique de saint Hippolyte: Essai de reconstitution 10–16). It is possible that certain formulas became more or less stereotyped before they were written down, and after the Edict of Milan and the peace of the Church (313), the development of a systematic liturgy can be discerned. At the end of the fourth century St. Ambrose (De Sacramentis 4.5, 6) quotes what is clearly the central part of the Roman Canon. Early Roman Books. The point had been reached when certain of the formulas were being written down; once this happened, formulas naturally tended to become fixed.
Little books (libelli) were provided for some celebrants as a form of aide-mémoire and appear to have been used in conjunction with the Roman stational churches and for domestic celebration of the Eucharist. The libelli were the immediate forerunners of the Sacramentaries. The most famous collection of Roman libelli is the Leonine Sacramentary (Veronense), a private compilation of various libelli missae collected outside Rome. An interesting feature of this collection is that certain sections of it are made up of libelli forming self-contained units that seem to belong to a transitional period, when formulas were gradually becoming fixed. The Old Gelasian Sacramentary (Vat Reg Lat 316) was an official compilation with both Roman and Gallican elements. In the evolution of the Roman liturgical books, the Sacramentaries are characteristic in that they are books for the use of a person performing a function and contain solely those formulae that were proper to the celebrant. The Sacramentaries contained the rite as used by the bishop in the celebration of the Eucharist, the conferring of Baptism, Orders, etc. The parts read or sung by others—the choir, reader, deacon, etc.—are found in other books, and it is these that must now be examined. Primitively, a Bible was used for the Scripture readings. The readings were not yet fixed at this stage; the lector or reader concluded each reading at a signal from the president. As time went on and the course of Scripture readings tended to become fixed, points of beginning and conclusion were marked so that the pericopes could easily be found. This book was called the Comes or Liber comicus; from this developed the Evangeliarium (Evangelary or Book of Gospels) and Lectionarium (Lectionary) for use by the lector. Similarly there emerged the book containing the parts for the choir (Antiphonarium Missae, Liber antiphonarius, or Gradalis).
The Biblical lessons for the Office, the sermons of the Fathers, and the acts of the martyrs were gradually collected into separate books: the Lectionary, the Homiliary, the Legenda, etc. The last named, distinct from the Martyrology, which was primarily a list of anniversaries, contained the account of the sufferings of each martyr; it was read at Rome up to the eighth century at the cemetery basilica of the martyr during the night Office. It was also called the Passionale. The Psalter was written out in the order that the Psalms were to be sung, and for the responsories and antiphons there were the Liber responsorialis and the Antiphonarium Officii. Hymns appeared in the West as part of the Church worship service in the fifth century. They were often included in the Antiphonary, but a separate collection also existed (Hymnarium). At a later date, when sequences were introduced, they were added to the Liber antiphonarius or Gradualis. Similarly, when parts of the Ordinary of the Mass came to be padded with musical phrases, these pieces were added to the Gradual or Antiphonary, or else contained in a separate book, the Troper, as it was known in medieval England (or Troparium); the earliest known example is the tenth-century St. Martial Troper (Cod. Par. 1240). The early liturgical books contained very few ritual or ceremonial directions, although some of the Sacramentaries occasionally add a word or two in this respect. The rubrics were probably the last elements of the liturgy to be written down, since tradition governed the ceremonial for some time. With increasing elaboration of the papal ceremonial and the use of the Roman rite all over Europe, particularly in Gaul, it became necessary to provide precise directions. This guidance was provided by the Ordinals (Ordines Romani), the first of which was intended as an accompaniment in Gaul of the Gelasian and Gregorian Sacramentaries.
There is a series of 15 of these Ordines, dating from the seventh to the fourteenth centuries; they were printed first by J. Mabillon in his Musaeum Italicum (reprinted in Patrologia Latina, ed. J. P. Migne [Paris 1878–90] 78:851–1408; critical edition, M. Andrieu, Les 'Ordines Romani' du haut moyen-âge [Louvain 1931–61]) and form the basis for any study of the development of the ceremonial of the Roman rite. The earliest is probably Ordo VII, the greater part of which is to be found in the Gelasian Sacramentary. Ordo I is of great importance and value for its depiction of a papal Mass of the Roman Rite circa 700. Medieval Developments. From around the beginning of the ninth century, the Sacramentary was divided into three books, and thus eventually emerged the Pontifical, Ritual, and Missal, the last named absorbing the parts of all ministers, choir, and people at Mass as well as the celebrant's part. Thus, the whole of the rite was in one book and could be used for low Mass, which was at that time becoming common. The Pontifical contained the complete text of all rites peculiar to a bishop, and the Ritual (known also a century or two later as Manuale, Alphabetum Sacerdotum, Sacerdotale, Pastorale), those rites ordinarily performed by a priest (see Pontifical, Roman; Ritual, Roman; and Missal, Roman). On the other hand, the various books required for the Divine Office, by means of an abbreviation of the lessons, were finally contained within the covers of a single volume in the 12th and 13th centuries. Thus emerged the Breviary, which, as its name indicates, was an abbreviation (at least of the lessons) of the choir Office, although it was not long before the shortened lessons were used in the choir also. Side by side with the Missal and Breviary, however, separate books (Psalter, Hymnal, Antiphonary, Gradual) continued in use to provide the musical (plainchant) settings needed for the singing of the Office and Mass.
COUNCIL OF TRENT'S REFORM OF LITURGICAL BOOKS
It would be a mistake to regard the emergence of these various medieval liturgical books as a sign of liturgical uniformity throughout the West. While the general pattern of the Roman rite as it had evolved was followed everywhere, there were great differences in detail: local "uses" of the Roman rite in whole provinces, dioceses, or religious orders proliferated. In addition, the great number of feasts of saints observed in the local calendars, and particularly those of the religious orders, practically obscured the proper celebration of the liturgical year, and the text of the liturgical books in many instances (e.g., lessons at Matins and of the Martyrology) was in a corrupt state. Moreover, there were elements in some of the Breviaries and Missals, hymns and antiphons especially, that were really unworthy of worship in the Church. By the beginning of the sixteenth century the time was ripe for reform. The Council of Trent decreed the general reform that was needed and appointed a commission to deal with the matter, but when the Council closed (December 1563), the commission had not finished its task; the matter was remitted to the pope, Pius IV. He died (1565) before the work was concluded, and the first of the reformed books of the Roman rite were issued by his successor, Pius V (d. 1572). The Roman Breviary appeared in 1568; the Roman Missal, in 1570. At the same time the pope abolished all rites and uses that could not show a prescription of at least 200 years. In 1588 Sixtus V established the Congregation of Rites for the purpose of carrying out the decrees of the Council of Trent regarding the public worship of the Church. Since that date this Congregation has been a potent influence for uniformity, particularly in watching over the correction and orthodoxy of the text of the liturgical books.
The first book to be issued as a result of this Congregation's work was the Roman Pontifical in 1596, which was made obligatory on all bishops of the Roman rite. The Ceremonial of Bishops was published by order of Clement VIII in 1600. The immediate source for this book was the Ceremoniale Romanae Ecclesiae of 1516, but as an official liturgical book the Ceremonial of Bishops was an innovation, giving directions for episcopal functions, as well as norms for the daily liturgy in cathedrals and collegiate churches. The reform that Trent initiated was complete with the issuance of the Roman Ritual by Paul V in 1614. As they were issued in compliance with the instructions of the Council of Trent, the principal books of the Roman Rite remained essentially the same up to the twentieth century. Both Missal and Breviary underwent reform at the hands of Pius X in 1911 and Pius XII in 1955. A further change was the promulgation of Pius XII's Ordinal for Holy Week in 1955 (Ordo hebdomadae sanctae instauratus), containing the restored Holy Week services. This necessarily entailed changes in the liturgical books affected. The Code of Rubrics (1960) resulted in the publication of new typical editions of Missal and Breviary in 1962.
VATICAN II'S REFORM OF LITURGICAL BOOKS
The Constitution on the Sacred Liturgy of Vatican Council II, promulgated in 1963, called for revision of the liturgical books of the Roman Rite, with a view to simplifying the rites so that the texts and rites "express more clearly the holy things which they signify" and that "the Christian people, as far as possible, be enabled to understand them [the texts and rites] with ease and to take part in them fully and actively" (21). This reform marked the first revision of the official Roman liturgical books after a lapse of four centuries.
The task of reform through revision of service books was entrusted to the Consilium for the Implementation of the Constitution in 1964, and subsequently in 1969 to the Congregation for Divine Worship. With permission granted for vernacular translations of the Latin editiones typicae, bishops' conferences established language groups to facilitate the production and publication of vernacular editions of liturgical books. In the English-speaking world, the International Commission on English in the Liturgy (ICEL) was established by some dozen English-speaking episcopal conferences. Following the principles set forth in the Instruction on the Translation of Liturgical Texts (Comme le prévoit), ICEL produced the English versions of liturgical texts of the Roman Rite that were adopted by the individual bishops' conferences. These postconciliar liturgical texts, like all previous official books of the Roman Rite, are published by the authority of the Holy See. However, much distinguishes them from the liturgical books of the past: in variety of options, alternatives, and suggestions; in liturgical theory; in simplicity; and in pastoral concern. Apparent is the conciliar concern for intelligibility and careful restoration, as well as emphasis upon the corporate action of the local church.
Bibliography: L. C. Sheppard, The Liturgical Books (New York 1962). T. Klauser, The Western Liturgy and Its History: Some Reflections on Recent Studies, tr. F. L. Cross (New York 1952). C. Vogel, Medieval Liturgy: An Introduction to the Sources (Washington, DC 1986). E. Palazzo, A History of Liturgical Books from the Beginning to the Thirteenth Century (Collegeville, Minn 1998). [L. C. Sheppard / J. A. Wysocki / J. M. Schellman / eds.]
Organic matter loading in this pretreatment manhole sump provides an environment for mosquitoes [Ramsey Conservation District, 2017]. Because stormwater management usually deals with the transmission, storage and treatment of water, there is much concern about the proliferation of mosquito breeding habitat associated with best management practices (BMPs). This is a well-founded concern because mosquitoes may colonize any source of standing water provided there is a source of organic material to provide sustenance to larvae (Messer, 2003). Although this basic fact often means that BMPs will result in more mosquitoes, there are many design and management measures that can be followed to minimize this increase in mosquito population. The primary threat to Minnesotans from mosquitoes, besides the nuisance, is the transmission of serious disease. West Nile Virus (WNV) and various forms of encephalitis are the major concerns. In spite of this threat, the U.S. Department of Health and Human Services Centers for Disease Control and Prevention (CDC) and the Minnesota Department of Health both point out that a very small percentage of mosquitoes are vectors for disease, and many of those bitten by carriers will not experience major health consequences, although minor difficulties could develop. Both organizations advise avoidance of outside activity, use of repellents and good integrated pest management programs to avoid disease problems related to mosquitoes.
Mosquitoes in Minnesota
Minnesota is fortunate to have a major mosquito research and management agency, the Metropolitan Mosquito Control District (MMCD), in the Twin Cities metropolitan area, as well as research in other parts of the state by the University of Minnesota and the Minnesota Department of Health. They have been able to characterize the occurrence of mosquitoes and the problems they cause in the state. Information provided by Nancy Read of the MMCD via education material (e.g.,
Minnesota Erosion Control Association Annual Conference, 2004) included the following basic facts.
- There are about 50 varieties of mosquito in the state, but only a few are efficient transmitters of diseases such as WNV.
- All mosquitoes need water for the larval and pupal stages of development. The larval stage lasts anywhere from 5 to 7 days, so holding water for less than 5 days will prohibit the progression of life past the larval stage. Standing water for over 2 weeks can easily breed mosquitoes if not treated.
- Aedes vexans is the most common Minnesota mosquito. It is a “floodwater” mosquito that lays its eggs on moist surfaces near water and relies on periodic submersion for eggs to hatch into larvae. Eggs can remain viable on moist surfaces for years before hatching. It is a vector (or carrier) of heartworm disease and may have a small role in WNV transmission.
- Ochlerotatus triseriatus is a “treehole” variety of floodwater mosquito that lays eggs in containers that periodically fill with water, such as tires, bird baths, or holes in a tree. This variety is a vector for LaCrosse encephalitis, which affects primarily children.
- Culex tarsalis is a standing-water species that is principally responsible for the spread of WNV in the western US. It lays eggs in “rafts” in standing water. The ideal habitat for Culex species is areas that will remain wet for about two weeks, contain vegetation for shelter and nourishment, and have few predatory fish.
- Culex pipiens and restuans are species often found in stormwater catch basins, rip-rapped areas and ponds with vegetative debris. MMCD treats 50,000 water-holding catch basins in the Twin Cities metropolitan area to control these species.
- The larvae of the cattail mosquito, Coquillettidia perturbans, attach themselves to cattails and breathe through the inner air tube. Eggs are laid in late summer, with larvae able to over-winter under the ice.
These varieties emerge as adults in large quantities around mid-summer.
- MMCD uses an integrated pest management (IPM) approach to controlling mosquitoes that targets primarily the larval stage through the use of bacteria (Bti, or Bacillus thuringiensis var. israelensis) toxic to larvae and growth regulators (methoprene) that inhibit larval development. Some limited spraying with synthetic pyrethroids is done for adults. IPM also includes good site design for BMPs and encourages biological control agents like predators (especially fish).
Methods to limit mosquito breeding in stormwater facilities
The presence and behavior of water is the most important element in the continuing life cycle of the mosquito. Controlling standing and stagnant water, and adapting design and habitat conditions, are the ways stormwater managers can avoid a proliferation of mosquito breeding in association with stormwater BMPs. A number of technical publications, articles and fact sheets on mosquitoes (Aichinger, 2004; Commonwealth of Virginia, 2003; Messer, 2003; Metzger, 2003; Nancy Read, MMCD, personal communications; Stanek, brochure with no date; USEPA, brochure with no date; Wass, 2003) were evaluated to come up with the following advisory material for homeowners (possible public information for SWPPPs) and stormwater managers.
- Eliminate standing and stagnant water around the home, such as in abandoned tires, boat covers, wheelbarrows, flower pots, or other containers. Change the water in wading pools, birdbaths, or dog dishes frequently.
- Protect family members from mosquito contact via such measures as house screening, avoidance during hours of maximum exposure, repellents, and clothing coverage.
- Chlorinate, clean and cover swimming pools, and prevent water from collecting on covers.
- Unclog roof drains and downspouts.
- Aerate water gardens or use fish to prevent larval mosquito development.
- Screen rain barrels to keep adult mosquitoes from laying eggs.
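The holding-time rules of thumb cited above (larval development takes roughly 5 to 7 days, and water standing more than about two weeks readily breeds mosquitoes) can be expressed as a simple screening check. This is an illustrative sketch only, not part of any agency guidance; the function name and risk labels are hypothetical, and the thresholds are taken from the MMCD facts listed above.

```python
def breeding_risk(days_holding_water: float) -> str:
    """Rough mosquito-breeding risk screen for a stormwater feature,
    based on the MMCD rules of thumb: larvae need roughly 5-7 days in
    water, and standing water beyond about 14 days is easily colonized."""
    if days_holding_water < 5:
        return "low"       # drains before larvae can mature
    elif days_holding_water <= 14:
        return "moderate"  # some species may complete development
    else:
        return "high"      # persistent standing water; treat or redesign

# A BMP that fully drains within two days of an event:
print(breeding_risk(2))   # low
# A poorly drained sump holding water for three weeks:
print(breeding_risk(21))  # high
```

A check like this only screens for duration; as the rest of this page notes, organic matter, vegetation and predator access also matter, so a "low" result here does not replace inspection and maintenance.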
The following websites offer information on non-toxic methods for controlling mosquitoes in residential settings. For more information, visit the Metropolitan Mosquito Control District website.
Stormwater manager actions
- Use BSD/LID development techniques to reduce the amount of stormwater that needs to be conveyed and managed.
- Do not allow water to collect in “temporary” facilities for longer than five days, preferably less than three.
- Adhere to the Minnesota Construction General Permit requirement to drain infiltration/filtration BMPs within 24 or 48 hours.
- Avoid allowing standing water to collect in inlets, outlets and conveyance pipes; avoid corrugated pipe without constant flow and sumps in catch basins.
- Maintain and clean out sediment traps/basins and all drainage structures, inlets, outlets and orifices (use only openings >3 inches in diameter to prevent clogging) to keep positive water drainage.
- Screen inlet and outlet pipes or place them under water if no other control is available (prevents fly-in).
- Eliminate standing stagnant water as part of any BMP appurtenance, including forebays, sediment traps, sump areas and pumps.
- Avoid the use of rip-rap that can catch and hold organic debris in a wet area.
- Design de-watering capability into every BMP for routine dry-down and maintenance.
- Minimize installation of BMPs that will collect stormwater for only brief periods then stagnate until the next event; this could include a water budget analysis to make sure some baseflow will occur through the BMP.
- Minimize shallow depths (less than 1 foot) as part of ponds and wetlands; if this cannot be done, make sure flow continually occurs over the shallow area.
- Design facilities to minimize vegetation overgrowth, floating organic debris, algae, trash, sediment, dead grass/clippings, and cattails.
- Avoid the use of mulch that will wash into any BMP (use geotechnical material or secured mats instead).
- Avoid vegetation cutting operations that leave debris, blow clippings into standing water, or leave ruts for water accumulation.
- Keep dense emergent vegetation limited to narrow (<1 meter) bands around areas with standing water and prevent the development of cattail stands.
- Keep permanent pool embankments steep to prevent emergent vegetation, especially cattails, from growing; carefully plan plant species for aquatic/access benches to avoid cattail intrusion.
- Fall draw-down on cattail marshes can be a very effective control for cattail mosquitoes, which overwinter as larvae in the water.
- Design healthy natural systems that encourage mosquito predators to thrive and have access to mosquito larvae; this includes open water (over 4 feet deep) as part of wetland design (preferably oriented perpendicular to flow-through), minimization of stagnant, non-flowing water, and creation of diverse vegetation along the periphery of ponds.
- For stormwater wetlands, maintain a constant water table just below the ground surface (or above ground <5 days) to minimize mosquito production.
- Require a written inspection and maintenance plan that addresses stagnant water, water quality, and vegetation and debris management.
- Consider including mosquito control as a potential annual maintenance cost in some situations.
- Work with vector control agencies on an integrated pest management approach to larval control.
- Always design access for vector control staff to reach the entire BMP, not just the inlet or outlet.
- Properly design and maintain all stormwater BMPs.
Note: Some of the recommendations listed above could appear to conflict with common BMP design; this apparent conflict is addressed in the next section.
Compatibility with Common BMP Design
A cursory consideration of the list of commonly used Minnesota BMPs relative to the above list would seem to indicate that some BMPs might be more desirable than others when mosquitoes are considered.
The practices that would seem to be the best for preventing mosquitoes would be permanent pools with steep slopes below the water line, infiltration devices that drain effectively in 48 hours, bioretention that infiltrates or filters water then dries at the surface, dry ponds, ponds with a Water Quality Volume that is fully treated and discharged within three days, and healthy pond/wetland systems (those with diverse vegetation, open water areas over 3 feet in depth, fairly steady water levels and low nutrient loads). Practices that would seem to cause mosquito breeding to proliferate would include water basins or holding areas that hold water in a stagnant condition for longer than 3 days, sub-grade treatment systems that include sumps and are not properly sealed, poorly maintained water holding areas that contain substantial amounts of vegetative debris, wet meadows with less than 1 foot of standing water, and storage areas that bounce up and down repeatedly. Not all of these systems need to be dropped from the list of suitable BMPs, but their use should be supplemented with integrated pest management techniques (e.g., biological larvicides), physical sealing, or adequate maintenance. Although some of the recommendations for addressing mosquito concerns appear to conflict with common BMP design, careful consideration can alleviate those concerns. Considerations include the following.
- Avoiding excessive vegetative growth does not mean minimizing vegetation; rather, it means keeping a healthy mix that thrives and does not overwhelm the BMP or an (upland) area adjacent to a BMP. The same applies for emergent vegetation that is planted as part of an overall planting scheme.
- Shallow vegetated benches are part of the recommended access design for ponds. Although a recommendation above suggests that “shallow” water less than 1 foot be avoided in standing water situations, it might be necessary, depending upon access needs, to construct such a bench.
In addition, a recommendation above suggests that dense periphery vegetation be limited to about 1 meter in width, whereas the recommended pond bench width is 10 feet. Designers are advised to use their judgment on the mix of recommendations for edge-of-pond depth, depending upon priorities for access relative to mosquito control. Care should be taken in plant selection, particularly if bench depths less than 1 foot are anticipated.
- Riprap or similar structural armor for bank stabilization is an option that is sometimes needed in erosive situations. The tendency for these materials to capture vegetative debris and to create small pools of water makes them ideal mosquito breeding sites. If mosquito breeding is a concern at these installations, smoothing with a grout material or size grading can be used to minimize edges and pools that promote mosquito habitat, or alternative materials can be used.
- The required wet basin design in the MPCA CGP contains a water quality volume that is temporarily detained above the permanent pool. Although there are no CGP requirements for the amount of time this should be held, a minimum of 12 hours is recommended, and trying to get the extended detention pool to recede within 3 days is a good goal to minimize possible mosquito breeding. Floodwater mosquito egg-laying on the moist side slopes above the permanent pool is almost impossible to control in this situation because the eggs remain viable for up to 5 years and could hatch, with the resulting larvae inhabiting the pool whenever water levels rise. Mosquito varieties that require standing water can be minimized with a management plan that allows these areas to fully dry out between events. If conditions cannot be improved to minimize breeding habitat, biologic larvicides should be used.
- Forebays, sediment traps and treatment sumps could all be part of a well designed treatment train.
The recommendation above to keep these from becoming stagnant is consistent with good design principles and should not preclude their use. The essential elements in keeping them “fresh” are to either drain them fully after use or keep baseflow moving through them. MMCD began a monitoring program in underground structures in 2005 and has found evidence of mosquito breeding in half of the structures tested through mid-summer of 2005. Studies in California outline more details of which structures are most likely to provide habitat for mosquitoes (Metzger et al., 2002).
In summary, there are many ways in which stormwater BMPs can become mosquito breeding grounds if caution is not followed in their design, operation and maintenance. The means exist to install BMPs that minimize the creation of mosquito habitat and/or to biologically attack the larvae that result even under the best designs.
References
- Aichinger, C., 2004. Understanding the West Nile Virus. Woodbury Bulletin (newspaper opinion page), August 11, 2004. Contact Ramsey-Washington Metro Watershed District, North St. Paul, MN.
- Commonwealth of Virginia, 2003. Vector Control: Mosquitoes and Storm Water Management. Stormwater Management Technical Bulletin No. 8.
- Messer, D.F., 2003. Mosquitoes in Structural Stormwater BMPs: A Case Study. In Proceedings of the StormCon Conference of 2003, San Antonio, Texas. Published by Forester Communications, Santa Barbara, CA.
- Metzger, M.E., 2004. Managing Mosquitoes in Stormwater Treatment Devices. Publication 8125. University of California, Division of Agriculture and Natural Resources.
- Metzger, M.E., 2003. Mosquito Control Challenges Presented by Stormwater Treatment Devices in the United States. In Proceedings of the StormCon Conference of 2003, San Antonio, Texas. Published by Forester Communications, Santa Barbara, CA.
- Metzger, M.E., D.F. Messer, C.L. Beitia, C.M. Meyers, and V.L. Kramer, 2002.
The Dark Side of Stormwater Runoff Management: Disease Vectors Associated with Structural BMPs. Stormwater 3(2):24–39.
- Minnesota Department of Transportation, 2005. The Cost and Effectiveness of Stormwater Management Practices. Report 2005-23, St. Paul, MN.
- Stanek, S. (no date). West Nile Virus and Stormwater Management. Brochure prepared for the Minnehaha Creek Watershed District, Deephaven, MN.
- U.S. EPA (no date). Wetlands and West Nile Virus brochure.
- Wass, R.D., 2003. Mosquito Management Do’s and Don’ts in an Engineered Arizona Treatment Wetland System. In Proceedings of the StormCon Conference of 2003, San Antonio, Texas. Published by Forester Communications, Santa Barbara, CA.
Because this page had not been updated, we completed a cursory literature review in July 2019 to provide a summary of recent information on this topic.
- Trash Capture Devices and Mosquito Abatement: An Odyssey. Joseph Huston. Wing Beats Magazine, Spring 2019. The author provides a discussion, based on field experience, of trash capture devices (TCDs) and mosquito control. Initial experience with TCDs resulted in an effort to modify or develop new TCDs that did not hinder mosquito abatement efforts. The result has been improvements in the design of these systems. Several locations in California utilize lists of TCDs indicating their appropriateness for mosquito control. An example is here.
- Aedes albopictus production in urban stormwater catch basins and manhole chambers of downtown Shanghai, China. Gao et al., 2018. PLoS One 13(8). Conclusions from a study conducted in China: "Aedes albopictus was the predominant species in both CBs [catch basins] and stormwater MCs [manhole chambers], especially in residential neighborhoods. CBs, particularly those with vertical grates, were a major source of mosquito production in downtown Shanghai. MCs featured more running water and fewer larvae by percentage, and few larvae were found in Sewage MCs.
However, due to the tremendous baseline amount, MCs were still an important breeding source of mosquitoes. We suggest that Aedes control in Shanghai should focus on CBs or other potential larvae habitats in and around residential neighborhoods. The use of permeable materials and completely sealed covers should be adopted in the construction of CBs and MCs henceforth."
- H2O: The Fundamental Link Between Stormwater Management and Mosquito Control Agencies. J.E. Harbison and M.E. Metzger. Storm H2O, March/April 2014. Provides a general discussion of the relationship between stormwater management and mosquito control, a discussion of maintenance needs, and a discussion of needed collaboration between stormwater and mosquito control agencies. Includes a list of references that may be useful.
- Metropolitan Mosquito Control District, 2017 Operational Review and Plans for 2018. Provides a summary of mosquito surveillance, mosquito control, product and equipment tests, and a general discussion of related work (e.g., mapping, climate trends, communication). The surveillance results include data for stormwater structures.
Links to information on mosquito control
When it comes to 5G technology, we can all agree on the fact that we, well, can’t agree. Between misconceptions of what 5G really is and whether or not it’s detrimental to our health, the new cell network is one of the most controversial issues disputed all over the internet. So, what really is 5G technology and why is it so polarizing? “As the latest step forward in cellular network evolution, 5G will see untold thousands of small antennas deployed onto cell towers, utility poles, lampposts, buildings, and other public and private structures,” Yasir Shamim, Digital Marketing Executive at PureVN.com, tells Parade. In layman’s terms, 5G is an upgrade to the current cellular service—the ever inferior 4G—meant to boost the speed of our wireless internet. But, as with all techy things, it’s a bit more complex than that. “5G will do a lot more than just speed up your network connection,” Erwin Caniba, co-founder of VPNThrive.com, tells Parade. “Imagine billions of linked gadgets collecting and exchanging data in real-time to decrease traffic accidents, or life-saving apps that can take off owing to lag-free assured connections; or manufacturing lines that are so predictive that they can eliminate disruptions far before they happen.” Still unsure? We’re breaking down everything 5G-related—what it is, how fast it is, where it’s available, whether it’s safe, and more—below.
What is 5G?
Simply put, there are different generations of technology in broadband cellular networks. 5G is short for a fifth-generation cellular network, and just like its name would suggest, a fourth-, third-, second-, and first-generation network came before it. The fifth-generation cellular network—AKA 5G—was first put into effect en masse starting in 2019. As of that time, most cellphones were connecting to wireless internet and service on 4G.
According to Swarun Kumar, assistant professor of electrical and computer engineering at Carnegie Mellon University, “5G is a broad term used to describe the next generation of cellular networks after 4G. Put simply, its main objective is to improve the speed of mobile internet connectivity that users experience.” Kumar tells Parade, “This means a speed-up for applications such as HD video streaming and gaming on cellular devices networks. Besides these traditional applications, 5G could enable new applications such as augmented reality and connectivity for the Internet of Things. In this sense, 5G is also about improving reliability, latency and scale, besides just a speedup.” Hold up—what is the Internet of Things?! The Internet of Things (or IoT) is often brought up in relation to talks about 5G, and that’s because it refers to a network of physical objects embedded with sensors, software, and other technologies, all with the goal of connecting and exchanging data over the internet. Still not with us? You probably use the Internet of Things 24/7! Some examples of IoT are connected appliances, smart home security systems, wearable health monitors, Apple Watches, etc. One potential challenge of the IoT is that it could generate more video traffic. “Yes [it could generate more video traffic] and a lot of it if it incorporates HD security cameras that operate on a cellular network, for instance,” Shamim explains. “Temperature sensors, for example, will create significantly less bandwidth, but there may be billions of them installed, so it adds up rapidly. The majority of the time, the problem of IoT will be the number of individual services rather than capacity.” For now, what you need to know is that 5G is upgrading 4G as the more universal cell network.
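To put the speed claims quoted in this article in perspective, a back-of-the-envelope calculation shows what moving from a 4G-class connection (around 100 Mbps) to a 5G-class one (up to 10 Gbps) means for a typical download. This sketch uses only the nominal rates cited here; the 5 GB file size is a made-up example, and real-world throughput varies widely by carrier and location, as the experts quoted below note.

```python
def download_seconds(file_gigabytes: float, link_mbps: float) -> float:
    """Time to transfer a file at a given link rate, ignoring protocol
    overhead. 1 gigabyte = 8,000 megabits; rate is in megabits/second."""
    return file_gigabytes * 8000 / link_mbps

movie_gb = 5  # hypothetical HD movie download
print(download_seconds(movie_gb, 100))    # 4G-class, ~100 Mbps -> 400.0 s
print(download_seconds(movie_gb, 10000))  # 5G-class, ~10 Gbps  ->   4.0 s
```

In other words, at the nominal rates a download that takes nearly seven minutes on 4G would finish in a few seconds on peak 5G, which is the "massive boost" the article describes.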
Shamim adds, “The technology, which is designed to supplement rather than replace current 4G networks, promises to accelerate cellular data transfer speeds from 100 Mbps to 10 Gbps and beyond, a massive boost that will make next-generation wireless competitive with even the fastest fiber-optic wired networks.” It’s supposed to be faster, it’s supposed to be better, and it’s definitely controversial (but more on the controversial part later).
How fast is 5G?
The main argument for why 5G is the “better” cellular network is that it’s, well, faster. It should load videos faster, load websites faster, and generally not take as much time to log in, upload, download, or load. So, how fast is it? “One point to note is that cellular operators in the U.S., in particular, have opted to roll out 5G in significantly different ways, using different kinds of infrastructure, bands of operation, etc. This means that the 5G speeds you experience as a customer may vary significantly depending on your operator or even across locations for the same operator,” Kumar explains. “Speeds may vary significantly from a few tens of Megabits per second to as high as well over a Gigabit per second, depending on your location and the radio technology of the infrastructure.” Still, it should be fast enough for you to notice a difference. According to Qualcomm, 5G is faster than 4G, with peak data rates of up to 20 Gigabits per second (Gbps) and average data rates of 100+ Megabits per second (Mbps). How fast your 5G network is depends on your carrier, capacity, and coverage. Each carrier offers different high-frequency bands, called mmWave bands, as well as lower frequencies. Some frequencies require more towers, while others require fewer but bigger towers.
Where is 5G available?
Cellular operators have been slowly rolling out 5G technology since 2019. Now, a few years later, most U.S.
locations have 5G of some sort, though whether you can use it yet depends on your carrier. “5G is available now at many locations in the U.S., although different carriers are at different stages of deployment,” Kumar says. “Most cellular operator websites contain information on whether 5G is available in your zip code.” Of course, even though your area may be 5G-enabled, you could be missing out if your phone isn’t compatible with the new tech. While most newer smartphones are compatible with 5G, it’s worth checking if your area is set up and your phone is compatible. Currently, carriers like Verizon, T-Mobile, AT&T, and U.S. Cellular have 5G capabilities. To see how they stack up against each other, here are each carrier’s bands:
- Verizon: n5 (DSS for sub-6), n261 (28GHz)
- T-Mobile: n71 (600MHz), n41 (2.5GHz) from Sprint, n260 (39GHz), n261 (28GHz)
- AT&T: n5 (850MHz), n260 (39GHz)
- U.S. Cellular: n71 (600MHz)
If you have Verizon, check out their coverage map to find out more information on where their 5G is enabled. If you have T-Mobile, you’re in luck, as this carrier has amassed the largest 5G network so far. T-Mobile’s high-band mmWave 5G runs on the 28GHz (n261) and 39GHz (n260) bands, and the carrier also offers an interactive 5G coverage map. If you have AT&T, most cities are outfitted with 5G, but you can reference their standard coverage maps. However, it’s important to note that 5GE is not a 5G connection; it’s really just a 4G LTE connection. If you have US Cellular, they also have a current coverage map; DISH Network doesn’t have 5G enabled yet, but it does have an extensive one in the works.
Is 5G safe?
One of the main criticisms of 5G technology is that it is unsafe. Opposers of the 5G technology worry about the potential health risks and ramifications of the radiation and transmission associated with such a technology. Most often, people cite concerns like increased risk of cancer, genetic repercussions, and damage to the reproductive system.
Sure, 5G may mean faster and better wireless internet, but does that come at the expense of our wellbeing? According to Kumar, 5G technology is totally safe. “The Federal Communications Commission (FCC) limits the maximum power level of 5G transmitters and all radios undergo a thorough certification process,” Kumar says. “It must be noted that the frequency bands that 5G uses have been used in other applications over the past several decades.” However, it’s important to note that not much is understood about 5G technology and those maximum power levels. In fact, according to Scientific American, the FCC’s standards of radiofrequency radiation (RFR) exposure limits were adopted in the 1990s and largely based on research from the 1980s. But since then, more than 500 studies have “found harmful biologic or health effects from the exposure to RFR at intensities too low to cause significant heating.” In fact, an appeal—the International EMF Scientist Appeal—signed by more than 240 scientists reads, “Numerous recent scientific publications have shown that nonionizing electromagnetic fields (EMF) affect living organisms at levels well below most international and national guidelines. Effects include increased cancer risk, cellular stress, increase in harmful free radicals, genetic damages, structural and functional changes of the reproductive system, learning and memory deficits, neurological disorders, and negative impacts on general well-being in humans. Damage goes well beyond the human race, as there is growing evidence of harmful effects to both plant and animal life.” Consider also that Cancer.org has an FAQ page entitled “Cell Phone Towers.” According to the page, “[Cell] towers have electronic equipment and antennas that receive and transmit cell phone signals using radiofrequency (RF) waves.” However, the page adds, “At this time, there’s no strong evidence that exposure to RF waves from cell phone towers causes any noticeable health effects.
However, this does not mean that the RF waves from cell phone towers have been proven to be absolutely safe. Most expert organizations agree that more research is needed to help clarify this, especially for any possible long-term effects.” Specifically, regarding 5G, Cancer.org adds, “The addition of the higher wavelengths from 5G networks could also expose people to more RF waves overall. At the same time, these higher frequency RF waves are less able to penetrate the body than lower frequency waves, so in theory, they might be less likely to have any potential health effects. But so far this issue has not been well studied.” But again, while there is no evidence of 5G networks negatively impacting our health, there is not enough evidence to the contrary either. According to Cancer.org, “At this time, there has been very little research showing that the RF waves used in 5G networks are any more (or less) of a concern than the other RF wavelengths used in cellular communication.”
What Is Mode Superposition?

When performing dynamic response analyses of linear structures, mode superposition is a powerful technique for reducing the computation time. Using this method, the dynamic response of a structure can be approximated by a superposition of a small number of its eigenmodes. Mode superposition is most useful when the frequency content of the loading is limited. It is particularly useful when performing analyses in the frequency domain, since the loading frequencies are known. Wave propagation problems are not suited for this technique, as they involve very high frequencies.

Deriving the Modal Equations

Assume that the equations of motion for a structure are written in matrix form as

$$M \ddot{u} + C \dot{u} + K u = f$$

where $M$ is the mass matrix, $C$ is the damping matrix, and $K$ is the stiffness matrix. The degrees of freedom (DOFs) are placed in the column vector $u$ and the forces in $f$. Often, the matrix form is obtained from a discretization of a physical problem using the finite element method. If N denotes the number of DOFs, the matrices have the size NxN. Here, it is assumed that the matrices are real and symmetric and that the stiffness matrix is positive definite.
This is the most common case, but it is also possible to use mode superposition in the case of unsymmetric matrices. This could happen, for example, in a coupled acoustic-structural problem. The theory when using unsymmetric matrices is somewhat more complicated, but the principles are the same. A prerequisite for a mode superposition is to compute eigenfrequencies and corresponding mode shapes. This is usually done for the undamped problem, using the eigenvalue equation

$$\left( K - \omega_i^2 M \right) \varphi_i = 0$$

Usually, only a small number n of the eigenfrequencies are computed. The result of this computation is a set of natural frequencies $\omega_i$ with corresponding mode shapes $\varphi_i$, where i ranges from 1 to n. It can be shown that the eigenmodes are orthogonal (or, in the case of duplicate eigenvalues, can be chosen as orthogonal) with respect to both the mass and stiffness matrices. This means that

$$\varphi_i^T M \varphi_j = 0, \qquad \varphi_i^T K \varphi_j = 0 \qquad (i \neq j)$$

It is convenient to place the eigenmodes in a rectangular Nxn matrix $\Phi$, where each column contains an eigenmode. The orthogonality relation can then be summarized as

$$\Phi^T M \Phi = \mathrm{diag}(m_i)$$

The diagonal elements $m_i$ are called the modal masses. The values of the modal masses depend on the chosen normalization of the eigenmodes. This normalization is arbitrary, since the mode only represents a shape and the amplitude does not have a physical meaning. One common and convenient choice is mass matrix normalization. The eigenmodes are then scaled so that each $m_i = 1$, giving

$$\Phi^T M \Phi = I$$

The corresponding orthogonality relation for the stiffness matrix is

$$\Phi^T K \Phi = \mathrm{diag}(\omega_i^2)$$

If mass matrix normalization is used, the diagonal matrix consists of the squared natural angular frequencies. The basic assumption in mode superposition is that the displacement can be written as a linear combination of the eigenmodes:

$$u = \sum_{i=1}^{n} q_i \varphi_i$$

Here, $q_i$ are the modal amplitudes. If all eigenmodes of the system were used, this would be an exact, rather than approximate, relation.
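The orthogonality and mass-normalization relations above are easy to verify numerically. The sketch below uses a hypothetical 3-DOF spring-mass chain (the matrix values are illustrative, not taken from this article), solves the undamped eigenvalue problem, and checks that $\Phi^T M \Phi = I$ and $\Phi^T K \Phi = \mathrm{diag}(\omega_i^2)$:

```python
import numpy as np

# A small 3-DOF spring-mass chain: lumped (diagonal) mass matrix M and
# stiffness matrix K (hypothetical values, chosen only to illustrate the algebra).
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Solve the generalized eigenvalue problem (K - w^2 M) phi = 0 by
# transforming with M^(-1/2), which is trivial for a diagonal M.
Minv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, V = np.linalg.eigh(Minv_sqrt @ K @ Minv_sqrt)   # eigenvalues ascending
Phi = Minv_sqrt @ V        # back-transformed, mass-normalized eigenmodes

# Orthogonality checks: Phi^T M Phi = I and Phi^T K Phi = diag(w_i^2)
assert np.allclose(Phi.T @ M @ Phi, np.eye(3))
assert np.allclose(Phi.T @ K @ Phi, np.diag(w2))
print("natural angular frequencies:", np.sqrt(w2))
```

The same two assertions are exactly the relations a finite element code relies on when it reduces the full system to modal coordinates.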
Since the eigenmodes are orthogonal, they form a complete basis and the expression is merely a change of coordinates from the physical nodal variables to the modal amplitudes. When only a small number of eigenmodes are used, the mode superposition can be viewed as a projection of the displacements onto the subspace spanned by the chosen eigenmodes. The mode superposition can also be written in matrix form as

$$u = \Phi q$$

where the modal DOFs have been collected in the column vector $q$. Inserting the mode superposition expression in the equation of motion gives

$$M \Phi \ddot{q} + C \Phi \dot{q} + K \Phi q = f$$

After a left multiplication by $\Phi^T$:

$$\Phi^T M \Phi \ddot{q} + \Phi^T C \Phi \dot{q} + \Phi^T K \Phi q = \Phi^T f$$

It will be possible to make use of the orthogonality relations so that, with mass matrix normalization,

$$\ddot{q} + \Phi^T C \Phi \dot{q} + \mathrm{diag}(\omega_i^2) \, q = \Phi^T f$$

The original system of equations has now been reduced from N to n variables. The right-hand side is called the modal load. Solving this smaller problem will significantly reduce the computational effort, but there is one more possible simplification. Since $\Phi^T M \Phi$ and $\Phi^T K \Phi$ are diagonal matrices, only the term $\Phi^T C \Phi$ provides a coupling between the equations. It is commonly assumed that the modal damping matrix is diagonal, so that a set of uncoupled equations of the type

$$\ddot{q}_i + 2 \xi_i \omega_i \dot{q}_i + \omega_i^2 q_i = \varphi_i^T f$$

can be used. However, it should be noted that in real life, there is often some crosstalk between different vibration modes in a damped structure. If strong physical damping is present, like when discrete dashpots are used, it is preferable to use the coupled system. Damping models that can provide decoupled equations in mode superposition include:
- Modal damping
- Rayleigh damping
- Caughey series
Directly providing the damping ratio $\xi_i$ for each mode is a common choice. Modal damping gives a large degree of control. Modes can be assigned a higher damping value if, for physical reasons, they are expected to be strongly damped. In the Rayleigh damping model, the damping matrix is assumed to be a linear combination of the mass and stiffness matrices,

$$C = \alpha M + \beta K$$

where $\alpha$ and $\beta$ are the two parameters of this model.
It will thus be diagonalized by the eigenmodes, just like the constituent matrices. The modal damping will thus be defined implicitly as

$$\xi_i = \frac{1}{2} \left( \frac{\alpha}{\omega_i} + \beta \omega_i \right)$$

The coefficients $\alpha$ and $\beta$ are usually chosen so that the damping is reasonable at two different frequencies in the interval of interest. The merit of the Rayleigh damping model is its simplicity; it does not have any physical significance. There are actually more general expressions where the damping matrix can be diagonalized by the eigenmodes. A damping matrix constructed using a Caughey series,

$$C = M \sum_{k} a_k \left( M^{-1} K \right)^k$$

has the same orthogonality properties. The modal damping will be

$$\xi_i = \frac{1}{2 \omega_i} \sum_{k} a_k \, \omega_i^{2k}$$

Rayleigh damping is the special case of using the first two terms of the Caughey series. In practice, a Caughey series approach is seldom used. One possible way to create a diagonal modal damping matrix is to use some kind of lumping scheme on $\Phi^T C \Phi$ to create a diagonal matrix. The simplest such scheme would be to just drop all off-diagonal elements.

The Modal Load

The modal load is a projection of the external load onto each of the eigenmodes. If a load has a very small projection on a certain mode, such a mode does not need to be included in the superposition. A common case is when both the structure and the load are symmetric. All antisymmetric eigenmodes can then be ignored, since there is no projection of the load on these modes. Since only a small subset of the modes is used in the response analysis, some fraction of the total original load is lost during the projection to the modal coordinates. Various schemes for improving the solution are available; for example, static correction, mode acceleration, and modal truncation augmentation.

Stresses and Strains

In general, to obtain good stress results, more modes must be used in the superposition than are needed for a good representation of the displacements. This is because higher modes generally have more complex mode shapes. The derivatives of the displacements (that is, the strains) are thus relatively higher.
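Choosing the Rayleigh coefficients from two target frequencies, as described above, amounts to solving a small linear system. A minimal sketch (the target frequencies of 10 and 100 rad/s and the 2% damping ratio are illustrative assumptions, not values from the text):

```python
import numpy as np

def rayleigh_coefficients(w1, w2, xi1, xi2):
    """Solve for alpha and beta such that the modal damping
    xi(w) = 0.5 * (alpha / w + beta * w) equals xi1 at w1 and xi2 at w2."""
    A = 0.5 * np.array([[1.0 / w1, w1],
                        [1.0 / w2, w2]])
    alpha, beta = np.linalg.solve(A, [xi1, xi2])
    return alpha, beta

# Example: ask for 2% damping at both 10 rad/s and 100 rad/s
alpha, beta = rayleigh_coefficients(10.0, 100.0, 0.02, 0.02)
xi = lambda w: 0.5 * (alpha / w + beta * w)

# The fitted curve hits the targets exactly and dips between them
# (minimum at w = sqrt(alpha/beta)), then rises for higher frequencies.
for w in (10.0, 31.6, 100.0, 300.0):
    print(f"xi({w:6.1f} rad/s) = {xi(w):.4f}")
```

This illustrates the main practical caveat of Rayleigh damping: modes between the two anchor frequencies are under-damped relative to the target, and modes outside the interval are over-damped.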
Examining modal stresses for the individual eigenmodes can indicate their relative importance. All eigenmodes will have zero displacements in DOFs where the displacements are constrained. Thus, it is not straightforward to model an excitation by a moving foundation using mode superposition, since such a displacement is not contained in the base spanned by the mode shapes. However, a common case is that the whole foundation moves synchronously, such as a building subjected to an earthquake. The analysis can then be performed in a coordinate system fixed to the foundation. This implies that the acceleration of the foundation is instead transformed into volume forces. There is also a technique that is commonly referred to as the large mass method. In this approximate method, each nonzero-prescribed displacement is replaced by a very large point mass. This will give rise to a few very low eigenfrequencies in which only the masses are moving, but all other eigenfrequencies and modes will be almost unchanged. In effect, this means that the set of shapes available in the superposition has been augmented by something rather similar to the static solutions to unit-prescribed displacements.

Mode Superposition of a Simply Supported Beam

Consider a simply supported beam with the following properties:
- Young's modulus, E = 210 GPa
- Mass density, ρ = 7850 kg/m³
- Area moment of inertia, I = 7960 mm⁴
- Cross-section area, A = 1000 mm²
- Length, L = 12 m
The natural frequencies for a simply supported beam are given by the expression

$$\omega_j = \left( \frac{j \pi}{L} \right)^2 \sqrt{\frac{EI}{\rho A}}$$

which, with the selected values, evaluates to $\omega_j \approx 1.00 \, j^2$ rad/s. The corresponding eigenmodes can be shown to be

$$\varphi_j(x) = C_j \sin\left( \frac{j \pi x}{L} \right)$$

where $C_j$ are the arbitrary normalizing constants. We consider the continuous analytical solution, in which the equivalent to the mass matrix normalization is

$$\int_0^L \rho A \, \varphi_j(x)^2 \, dx = 1$$

For all modes,

$$C_j = \sqrt{\frac{2}{\rho A L}}$$

Let us consider a distributed load with constant intensity $q_0$ per unit length. The modal load can be computed as

$$f_j = \int_0^L q_0 \, \varphi_j(x) \, dx = \frac{q_0 C_j L}{j \pi} \left( 1 - \cos(j \pi) \right)$$

For even values of j, the modal load is zero.
This reflects the fact that the load is symmetric, while the even-numbered modes are unsymmetric. For odd values of j, the modal load is

$$f_j = \frac{2 q_0 C_j L}{j \pi}$$

In order to compute a specific response, let us assume that the line load is harmonic with

$$q(t) = q_0 \cos(\omega t)$$

With the angular frequency of 22 rad/s being close to the natural frequency of the 5th eigenmode ($\omega_5 \approx 25$ rad/s), we can expect the mode to have a significant influence on the total response. Without any damping and with mass normalization, the modal equations are

$$\ddot{q}_j + \omega_j^2 q_j = f_j \cos(\omega t)$$

With a harmonic excitation with amplitude $f_j$, we obtain the frequency domain version,

$$\left( \omega_j^2 - \omega^2 \right) q_j = f_j$$

The modal amplitude can be explicitly solved for. The result is

$$q_j = \frac{f_j}{\omega_j^2 - \omega^2}$$

It is thus possible to write the displacement based on the mode superposition explicitly as

$$w(x, t) = \frac{4 q_0}{\rho A \pi} \sum_{k=0}^{\infty} \frac{1}{(2k+1)\left( \omega_{2k+1}^2 - \omega^2 \right)} \sin\left( \frac{(2k+1) \pi x}{L} \right) \cos(\omega t)$$

Taking the second derivative gives the bending moment, so that

$$M(x, t) = -EI \, \frac{\partial^2 w}{\partial x^2} = \frac{4 q_0 E I \pi}{\rho A L^2} \sum_{k=0}^{\infty} \frac{2k+1}{\omega_{2k+1}^2 - \omega^2} \sin\left( \frac{(2k+1) \pi x}{L} \right) \cos(\omega t)$$

Except for changed constant multipliers, it can be noted that the term 2k + 1 has moved from the denominator to the numerator. This means that the higher modes are influencing the bending moment (thus the stresses) more than the displacements. In the table below, the modal loads and modal amplitudes are computed for the first ten eigenmodes. The modal amplitudes decrease rapidly for the higher modes, since the denominator $\omega_j^2 - \omega^2$ grows as $j^4$ while the modal load decreases as $1/j$. The displacement results are already well converged when summing modes 1, 3, and 5. In order to get a good representation of the bending moment, mode 7 must, at least, also be included. In this example, the response is strongly dominated by a single mode: mode 5. If the forcing angular frequency is shifted from 22 rad/s to 17 rad/s, we move away from the resonance. The results are shown in the following figures. The displacement is now dominated by mode 3, even though the inclusion of mode 5 is necessary to get an acceptable solution. The bending moment, however, is still dominated by mode 5.

Last modified: May 8, 2018
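The beam example above can be reproduced numerically. The sketch below evaluates the natural frequencies, modal loads, and undamped frequency-domain modal amplitudes for the first ten modes (the load intensity q0 = 1 N/m is an assumed value, since the text does not state one; the formulas are those derived above):

```python
import numpy as np

# Simply supported beam from the example above (SI units)
E, rho = 210e9, 7850.0          # Young's modulus (Pa), density (kg/m^3)
I, A, L = 7960e-12, 1000e-6, 12.0   # mm^4 -> m^4, mm^2 -> m^2, length (m)
q0 = 1.0        # assumed load intensity (N/m); not specified in the text
omega = 22.0    # forcing angular frequency (rad/s)

def w_j(j):
    """Natural angular frequency of mode j: (j*pi/L)^2 * sqrt(EI / (rho*A))."""
    return (j * np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))

def C_j():
    """Mass-normalizing constant sqrt(2 / (rho*A*L)); the same for all modes."""
    return np.sqrt(2.0 / (rho * A * L))

def modal_load(j):
    """Projection of the uniform load onto mode j (zero for even j)."""
    return 0.0 if j % 2 == 0 else 2.0 * q0 * C_j() * L / (j * np.pi)

def modal_amplitude(j):
    """Undamped frequency-domain solution q_j = f_j / (w_j^2 - omega^2)."""
    return modal_load(j) / (w_j(j) ** 2 - omega ** 2)

for j in range(1, 11):
    print(f"mode {j:2d}: w_j = {w_j(j):7.2f} rad/s, "
          f"f_j = {modal_load(j):9.4f}, q_j = {modal_amplitude(j):10.3e}")
```

Running this confirms the pattern described in the text: the natural frequencies come out very close to $j^2$ rad/s, every even mode carries zero modal load, and the amplitudes fall off quickly beyond mode 5.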
European leader, Social Impact Practice Chronic hunger claims the lives of more than 3 million children under the age of five each year—more than die from malaria, tuberculosis, and HIV/AIDS combined. Those who survive suffer permanently from stunted physical growth, impaired cognitive development, and lowered resistance to disease. The technical solution to chronic child hunger is well recognized: provide at-risk populations with sufficient amounts of nutritious food (or the means to produce it themselves), supplemented by appropriate health care (for example, deworming treatments) and education (such as emphasizing the importance of breast-feeding to new mothers). Why, then, does the problem still exist? Although the eradication of chronic child hunger might seem straightforward, the challenge is enormously difficult in practice because of its vast complexity. (See Exhibit 1.) Indeed, significant complicating factors exist at the family, community, and organizational levels: Lack of Vital Knowledge. The relevant parties may not know about the practices that can fend off chronic child hunger—including breast-feeding, proper hygiene, balanced nutritional intake, and deworming. Receiving the necessary education takes time, and that time may not exist in people’s daily routines. Social Norms. Norms that guide the behaviors of families, communities, and governments can create sizable hurdles. In a village in Peru, for instance, quinoa, a highly nutritious crop, grew in abundance, offering a ready solution to hunger in the region. But local communities believed that the grain was cursed and refused to consume it. Competing Demands on Mothers’ Time. A mother might be painfully aware of the importance of having her child treated for severe malnutrition. 
But if traveling to and from the nearest health center means that she’ll miss a day’s work, compromising her ability to earn enough to feed her other children, she is unlikely to make the trip, even if the treatment is available at no cost. The Large Number, and Often Overlapping Agendas, of Organizations. Progress against chronic child hunger can be hindered by the sheer number of organizations involved. In a single country, multiple government agencies (including ministries of agriculture, education, and health), UN organizations (such as the World Food Programme, the World Health Organization, UNICEF, and the Food and Agriculture Organization), NGOs, donors, coordinating bodies and movements (such as Scaling Up Nutrition), and other aid organizations are typically engaged in the battle against hunger. Because of mandates from governments or donors, many of those players have their own goals, preferred modes of intervention and operation, and targeted geographic areas and populations. (Since 1990, the international community has issued or established more than 45 declarations, coordinating bodies, and coordinating processes to counter hunger. These include the World Declaration and Plan of Action for Nutrition (1992), the Declaration of the World Food Summit: Five Years Later (2002), the G8 leaders’ Statement on Global Food Security (2009), and the Zero Hunger Challenge (2012).) That complexity can translate into redundancy, a lack of coordination, and, ultimately, a lack of impact. In one country, we observed more than 50 entities engaging in the fight against chronic child hunger—yet most children in need still didn’t receive the comprehensive package of essential interventions. Because so many factors are involved both in the problem of chronic child hunger and in potential solutions, complexity is unavoidable. The task, therefore, is not to try to reduce complexity but to find a different way to think about and manage it.
Complexity is also a fundamental challenge for participants in a vastly different realm—the business sector. Businesses across industries routinely face substantial complexity as they strive to pursue multiple valuable, yet competing, performance objectives. Companies want to innovate and be efficient, offer customers low prices and high quality, and customize offerings for specific markets and standardize them to maximize operating returns. Departments and leaders can easily find themselves facing dozens of conflicts, including internal conflicts with other departments or functions such as finance, HR, and IT. When reconciling those objectives proves challenging, companies tend to respond by creating structures, processes, systems, scorecards, and committees. But such interventions rarely deliver as expected. Instead, they merely add layers, which are ultimately counterproductive. In short, they add what we call complicatedness: a man-made response to complexity. We have found that the companies that deal successfully with complexity do not focus on structures but on context and on the ways people interact. They de-emphasize traditional management techniques and emphasize instead what people actually do in an organization and why. We call this approach Smart Simplicity. (For fuller discussions of Smart Simplicity in a business context, see “Why Managers Need the Six Simple Rules,” BCG article, March 2014, and “Smart Rules: Six Ways to Get People to Solve Problems Without You,” Harvard Business Review, September 2011.) Smart Simplicity rests on the idea that three critical requirements enable organizations to navigate through fundamental challenges and their related complexity: leadership, cooperation, and engagement. Effective leadership is about understanding what people do and empowering them to use their judgment and intelligence.
It is also about setting objectives beyond employees’ direct control, holding people accountable for the consequences of their actions, and rewarding those who cooperate. The second element is genuine cooperation among stakeholders. The third is the engagement of all relevant parties. Might these requirements—leadership, cooperation, and engagement—hold equally in the fight against chronic child hunger? In studying the successful mitigation of chronic child hunger in several locations (using reductions in stunting to gauge success), we found that there are indeed strong parallels between those efforts and the management of complexity in a business setting. In each of the initiatives that we identified as particularly effective—efforts in Senegal, Peru, Vietnam, Brazil, Mauritania, and India—there was a special form of leadership, cooperation among stakeholders, and engagement among all relevant parties. (See Exhibit 2.) Those elements mean specific things, however, in the effort to eradicate chronic child hunger. Leadership. Someone personally dedicated to the cause must champion and take charge of the effort in a country for a minimum of five years, acting as a national integrator of the activities of key stakeholders. This person must have the commitment and backing of the country’s political leaders at the highest level. The leader must be supported by a full-time team of 10 to 15 people that operates under the government’s aegis but is not tied to a particular ministry; this team must have secure funding for at least five years. The team’s task is not to implement specific interventions but to act as a catalyst to necessary actions and then to coordinate and drive them forward.
The team supports the formulation of strategies tailored to specific contexts; oversees pilot projects with the aim of producing results quickly and demonstrating proof of concept; and plays a key role in monitoring and reporting progress, and alerting decision makers to potential roadblocks. Cooperation. Cooperation in this context refers to any activity undertaken by a stakeholder that makes the activities of other stakeholders more effective. The steps to foster such cooperation include establishing easy-to-measure joint targets at the village level and formulating strategies and tactics to reach them. Effort should be made to achieve and promote quick wins in order to motivate participants for the journey ahead. Engagement. During the critical “last mile” of implementation—the connection to beneficiaries—aid providers must have a thorough understanding of how to promote engagement and, ultimately, responsibility among individuals and their communities. Mothers, fathers, and other family members should be given incentives and rewards for taking the mitigation of chronic child hunger into their own hands and ensuring ongoing progress. The critical importance of leadership, cooperation, and engagement was evident in the most successful initiatives we observed. Start with leadership. The success in the Indian state of Maharashtra traces its roots to the chief minister’s decision to address malnutrition in the state and to the subsequent engagement of a senior government official. This official led the Rajmata Jijau Mother-Child Health and Nutrition Mission, a team of 15 people that operated from within the state government, for five years with the support of UNICEF. Senegal’s success, in turn, was spurred by a World Bank official who united relevant parties to rethink the country’s approach to hunger challenges. This official secured political support and long-term funding for the effort from the World Bank. 
The official also recruited an effective local leader to head the National Commission to Fight Malnutrition and to assemble a team that reported to the country’s prime minister. Cooperation among stakeholders was also apparent. In Peru, the initiative’s team, which also reported to the prime minister, worked with various stakeholders and experts to develop a strategy informed by several successful small programs that had been established throughout the country. In Maharashtra, the Rajmata Jijau Mother-Child Health and Nutrition Mission fostered stakeholder cooperation by focusing on one joint outcome metric: the percentage of underweight children. The Mission reinforced stakeholders’ accountability by regularly weighing children in the villages and then publicizing each district’s progress using a simple rating of red, yellow, or green. Joint accountability was further strengthened by regular interactions among district offices, the Mission, and state-level secretaries, and by periodic reviews at the highest levels of the Maharashtra government. The teams we observed fostered engagement in creative ways. In Peru, Juntos, the National Program to Support the Poorest People, gave cash rewards to participating families that verified that pregnant women had prenatal care, newborns had specialized care, and children’s growth was monitored periodically. In Senegal, economic activities (such as the production of clothing and towels using local fabric) were developed that brought together mothers and the community workers who were critical in efforts such as weighing children and counseling and educating mothers. These are only a few examples of cases in which the elements of Smart Simplicity played a fundamental role in successful initiatives to reduce chronic child hunger. Indeed, the prevalence of the elements suggests that they should be part of the blueprint that guides future efforts. (See Exhibit 3.) 
Considerable amounts of time, energy, and resources have been marshaled to combat chronic child hunger, and great advances have been made. But the journey is far from over, and the complexity of the challenge should not be used as an excuse for the distance that remains. We believe that the elements of Smart Simplicity discussed above can lead to critical improvements in this campaign and that international agencies, governments, charitable organizations, and other parties that have taken up the cause of chronic child hunger would be well served to consider them. The children stand to be the ultimate winners.
Abraham Lincoln, a self-taught lawyer, legislator and vocal opponent of slavery, was elected 16th president of the United States in November 1860, shortly before the outbreak of the Civil War. Lincoln proved to be a shrewd military strategist and a savvy leader: His Emancipation Proclamation paved the way for slavery’s abolition, while his Gettysburg Address stands as one of the most famous pieces of oratory in American history. In April 1865, with the Union on the brink of victory, Abraham Lincoln was assassinated by Confederate sympathizer John Wilkes Booth. Lincoln’s assassination made him a martyr to the cause of liberty, and he is widely regarded as one of the greatest presidents in U.S. history. Abraham Lincoln's Early Life Lincoln was born on February 12, 1809, to Nancy and Thomas Lincoln in a one-room log cabin in Hardin County, Kentucky. His family moved to southern Indiana in 1816. Lincoln’s formal schooling was limited to three brief periods in local schools, as he had to work constantly to support his family. In 1830, his family moved to Macon County in southern Illinois, and Lincoln got a job working on a river flatboat hauling freight down the Mississippi River to New Orleans. After settling in the town of New Salem, Illinois, where he worked as a shopkeeper and a postmaster, Lincoln became involved in local politics as a supporter of the Whig Party, winning election to the Illinois state legislature in 1834. Like his Whig heroes Henry Clay and Daniel Webster, Lincoln opposed the spread of slavery to the territories, and had a grand vision of the expanding United States, with a focus on commerce and cities rather than agriculture. Lincoln taught himself law, passing the bar examination in 1836. The following year, he moved to the newly named state capital of Springfield. For the next few years, he worked there as a lawyer, serving clients ranging from individual residents of small towns to national railroad lines.
He met Mary Todd, a well-to-do Kentucky belle with many suitors (including Lincoln’s future political rival, Stephen Douglas), and they married in 1842. The Lincolns went on to have four children together, though only one would live into adulthood: Robert Todd Lincoln (1843–1926), Edward Baker Lincoln (1846–1850), William Wallace Lincoln (1850–1862) and Thomas “Tad” Lincoln (1853-1871). Abraham Lincoln Enters Politics Lincoln won election to the U.S. House of Representatives in 1846 and began serving his term the following year. As a congressman, Lincoln was unpopular with many Illinois voters for his strong stance against the Mexican-American War. Promising not to seek reelection, he returned to Springfield in 1849. Events conspired to push him back into national politics, however: Douglas, a leading Democrat in Congress, had pushed through the passage of the Kansas-Nebraska Act (1854), which declared that the voters of each territory, rather than the federal government, had the right to decide whether the territory should be slave or free. On October 16, 1854, Lincoln went before a large crowd in Peoria to debate the merits of the Kansas-Nebraska Act with Douglas, denouncing slavery and its extension and calling the institution a violation of the most basic tenets of the Declaration of Independence. With the Whig Party in ruins, Lincoln joined the new Republican Party–formed largely in opposition to slavery’s extension into the territories–in 1856 and ran for the Senate again that year (he had campaigned unsuccessfully for the seat in 1855 as well). In June, Lincoln delivered his now-famous “house divided” speech, in which he quoted from the Gospels to illustrate his belief that “this government cannot endure, permanently, half slave and half free.” Lincoln then squared off against Douglas in a series of famous debates; though he lost the Senate election, Lincoln’s performance made his reputation nationally. 
Abraham Lincoln’s 1860 Presidential Campaign Lincoln’s profile rose even higher in early 1860, after he delivered another rousing speech at New York City’s Cooper Union. That May, Republicans chose Lincoln as their candidate for president, passing over Senator William H. Seward of New York and other powerful contenders in favor of the rangy Illinois lawyer with only one undistinguished congressional term under his belt. In the general election, Lincoln again faced Douglas, who represented the northern Democrats; southern Democrats had nominated John C. Breckinridge of Kentucky, while John Bell ran for the brand new Constitutional Union Party. With Breckinridge and Bell splitting the vote in the South, Lincoln won most of the North and carried the Electoral College to win the White House. He built an exceptionally strong cabinet composed of many of his political rivals, including Seward, Salmon P. Chase, Edward Bates and Edwin M. Stanton. Lincoln and the Civil War After years of sectional tensions, the election of an antislavery northerner as the 16th president of the United States drove many southerners over the brink. By the time Lincoln was inaugurated as 16th U.S. president in March 1861, seven southern states had seceded from the Union and formed the Confederate States of America. Lincoln ordered a fleet of Union ships to supply the federal Fort Sumter in South Carolina in April. The Confederates fired on both the fort and the Union fleet, beginning the Civil War. Hopes for a quick Union victory were dashed by defeat in the Battle of Bull Run (Manassas), and Lincoln called for 500,000 more troops as both sides prepared for a long conflict. While the Confederate leader Jefferson Davis was a West Point graduate, Mexican War hero and former secretary of war, Lincoln had only a brief and undistinguished period of service in the Black Hawk War (1832) to his credit.
He surprised many when he proved to be a capable wartime leader, learning quickly about strategy and tactics in the early years of the Civil War, and about choosing the ablest commanders. Recommended for you General George McClellan, though beloved by his troops, continually frustrated Lincoln with his reluctance to advance, and when McClellan failed to pursue Robert E. Lee’s retreating Confederate Army in the aftermath of the Union victory at Antietam in September 1862, Lincoln removed him from command. During the war, Lincoln drew criticism for suspending some civil liberties, including the right of habeas corpus, but he considered such measures necessary to win the war. Emancipation Proclamation and Gettysburg Address Shortly after the Battle of Antietam (Sharpsburg), Lincoln issued a preliminary Emancipation Proclamation, which took effect on January 1, 1863, and freed all of the enslaved people in the rebellious states not under federal control, but left those in the border states (loyal to the Union) in bondage. Though Lincoln once maintained that his “paramount object in this struggle is to save the Union, and is not either to save or destroy slavery,” he nonetheless came to regard emancipation as one of his greatest achievements, and would argue for the passage of a constitutional amendment outlawing slavery (eventually passed as the 13th Amendment after his death in 1865). Two important Union victories in July 1863–at Vicksburg, Mississippi, and at the Battle of Gettysburg in Pennsylvania–finally turned the tide of the war. General George Meade missed the opportunity to deliver a final blow against Lee’s army at Gettysburg, and Lincoln would turn by early 1864 to the victor at Vicksburg, Ulysses S. Grant, as supreme commander of the Union forces. In November 1863, Lincoln delivered a brief speech (just 272 words) at the dedication ceremony for the new national cemetery at Gettysburg. 
Published widely, the Gettysburg Address eloquently expressed the war’s purpose, harking back to the Founding Fathers, the Declaration of Independence and the pursuit of human equality. It became the most famous speech of Lincoln’s presidency, and one of the most widely quoted speeches in history. Abraham Lincoln Wins 1864 Presidential Election In 1864, Lincoln faced a tough reelection battle against the Democratic nominee, the former Union General George McClellan, but Union victories in battle (especially General William T. Sherman’s capture of Atlanta in September) swung many votes the president’s way. In his second inaugural address, delivered on March 4, 1865, Lincoln addressed the need to reconstruct the South and rebuild the Union: “With malice toward none; with charity for all.” As Sherman marched triumphantly northward through the Carolinas after staging his March to the Sea from Atlanta, Lee surrendered to Grant at Appomattox Court House, Virginia, on April 9. Union victory was near, and Lincoln gave a speech on the White House lawn on April 11, urging his audience to welcome the southern states back into the fold. Tragically, Lincoln would not live to help carry out his vision of Reconstruction. Abraham Lincoln’s Assassination On the night of April 14, 1865 the actor and Confederate sympathizer John Wilkes Booth slipped into the president’s box at Ford’s Theatre in Washington, D.C., and shot him point-blank in the back of the head. Lincoln was carried to a boardinghouse across the street from the theater, but he never regained consciousness, and died in the early morning hours of April 15, 1865. Lincoln’s assassination made him a national martyr. On April 21, 1865, a train carrying his coffin left Washington, D.C. on its way to Springfield, Illinois, where he would be buried on May 4. Abraham Lincoln’s funeral train traveled through 180 cities and seven states so mourners could pay homage to the fallen president. 
Abraham Lincoln Quotes “Nothing valuable can be lost by taking time.” “I want it said of me by those who knew me best, that I always plucked a thistle and planted a flower where I thought a flower would grow.” “I am rather inclined to silence, and whether that be wise or not, it is at least more unusual nowadays to find a man who can hold his tongue than to find one who cannot.” “I am exceedingly anxious that this Union, the Constitution, and the liberties of the people shall be perpetuated in accordance with the original idea for which that struggle was made, and I shall be most happy indeed if I shall be an humble instrument in the hands of the Almighty, and of this, his almost chosen people, for perpetuating the object of that great struggle.” “This is essentially a People's contest. On the side of the Union, it is a struggle for maintaining in the world, that form, and substance of government, whose leading object is, to elevate the condition of men -- to lift artificial weights from all shoulders -- to clear the paths of laudable pursuit for all -- to afford all, an unfettered start, and a fair chance, in the race of life.” “Fourscore and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty and dedicated to the proposition that all men are created equal.” “This nation, under God, shall have a new birth of freedom — and that government of the people, by the people, for the people, shall not perish from the earth.”
Source: http://www.history.com/topics/us-presidents/abraham-lincoln
New York City Population Growth, 2010-2040 • New York City’s population is projected to grow from 8.2 million persons in 2010 to 9 million in 2040. While population growth has likely slowed, the Census Bureau’s estimation methodology is not robust enough to precisely quantify the magnitude of these year-to-year changes. This time, 137,000 more New Yorkers left the city for other parts of the country — retiring or moving to less expensive cities in the Sun Belt — than arrived from someplace else in the United States. This component is estimated for three age groups (0-17, 18-64, and 65 years and older). This represented an increase of 223,615 residents (or 2.7 percent) over the April 1, 2010 decennial census count of 8,175,133. The estimated decline, to 8,398,748 from a record 8.4 million in 2017, covers the year ending on July 1, 2018. New York City’s growth is well below the national average. As in earlier vintages, distributions from the ACS YOE question (5-Year files) continue to be used to allocate state-level foreign-born population to the counties. Up until World War II, everyone in the entire city who was moving apartments had to move on May 1. For example, the foreign-born population in 2010 is survived forward to obtain the expected population in the year 2018. Although the city grew by roughly 224,000 persons since 2010, New York State grew only by 164,000 people due to a population decrease of 60,000 for the counties outside the city. From July 1, 2017 to July 1, 2018, New York state’s population fell by 48,510 and New York City lost 39,523. New York City has a dynamic population, with several hundred thousand people coming and going each year. The city is projected to lose two seats in the House of Representatives after the 2020 count. Introduction The U.S.
Census Bureau prepares estimates of total population for all counties in the United States on an annual basis, using a demographic procedure known as the “administrative records method” (described below). New York City’s population density is 28,210 people per square mile, making it one of the most densely populated major cities in America. The Census Bureau found this to be problematic because the foreign-born experience a mortality advantage relative to their native-born counterparts. Spanish dominates among the non-English languages spoken, reflecting the city’s large Latin American immigrant population. This is the story of 40,000 of them — the number of people the Census Bureau estimates that New York City lost last year. Population Growth of New York City. It amounts to 39,523 New Yorkers. “Our team is very much questioning those results and the methodology that was used,” Mayor Bill de Blasio said. U.S. CENSUS BUREAU POPULATION ESTIMATES METHODOLOGY Each year, the U.S. Census Bureau produces estimates of the population for states, counties, cities and other places, as well as for the nation as a whole. Population growth has been fueled by the continued surplus of births over deaths (partly due to record high life expectancy), which has been partially offset by net outflows from the city. City of New York. Post-2010 growth translates into an average annual gain of about 27,000 persons, or a compounded 0.3 percent. This vibrancy is one aspect of what makes New York City’s population extraordinary and different from most other places in the nation and, perhaps, the world. In 2018 New York had its largest population ever. COMPONENTS OF POPULATION CHANGE Demographers divide population change into components. In the Vintage 2018 estimates, the Census Bureau made a series of methodological changes that had a big impact on New York City. In the previously employed method, foreign-born population groups were survived forward using U.S.
total life tables developed by the National Center for Health Statistics (NCHS), regardless of race, ethnicity, and foreign-born status. Looking back over the last nine years of New York City’s population, the growth rate has been consistent and strong, ranging from 0.24% to 0.97% and adding around 20,000 to 80,000 people each year to the overall population. Instead, a net domestic migration rate needs to be calculated by taking the difference between the numbers of in- and out-migrants (net migrants) and dividing it by the sum of the non-migrants and out-migrants. Finally, the Census Bureau reworked the life tables underlying the estimation of emigration from the U.S., which has also contributed to a decline in net international migration estimates for the post-2010 period and to lower New York City population estimates in this vintage. Approximately 37% of the city’s population is foreign born and more than half of all children are born to mothers who are immigrants. The distribution of these characteristics is then used to assign characteristics for states. We believe using the recent years’ figures (see the table in the next section) will make the estimation more accurate. In-migrants to a given county are defined as those with an address in the county in 2017, but outside the county in 2016; out-migrants as those with an address in the county in 2016, but outside the county in 2017; and non-migrants as individuals who filed tax returns in the same county at both points in time. Each of the city’s five boroughs registered gains in population. Net International Migration is the balance of migration flows to and from foreign countries and Puerto Rico.
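The net domestic migration rate defined above (net migrants divided by the population at risk of moving) is a one-line calculation. A minimal sketch in Python, with made-up counts standing in for the matched IRS tax-return tallies:

```python
def net_domestic_migration_rate(in_migrants, out_migrants, non_migrants):
    """Net domestic migration rate as defined in the text: the difference
    between in- and out-migrants, divided by non-migrants plus out-migrants."""
    net_migrants = in_migrants - out_migrants
    at_risk = non_migrants + out_migrants
    return net_migrants / at_risk

# Illustrative (not actual) county-level counts from matched tax returns:
rate = net_domestic_migration_rate(in_migrants=120_000,
                                   out_migrants=160_000,
                                   non_migrants=3_800_000)
print(f"{rate:.4%}")  # prints -1.0101%
```

A negative rate, as here, indicates net domestic outflow, the pattern the article describes for the city.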
There are 8.4 million people in the Naked City. Subtracting the estimated from the expected populations provides the residual, which then serves as the basis of emigration rates for the foreign-born. While international net migration for the city is estimated at 624,000 in the previous vintage, the latest vintage provides a lower estimate of 431,000 over the same period, which is likely too low. The population of New York State in 2018 was 19,530,351, a 0.3% decline from 2017. Births are tabulated by residence of the mother, regardless of where the birth occurred. The top five languages (other than English) spoken in New York City are: Spanish (14.44%), Chinese (2.61%), Russian (1.2%), Italian (1.18%) and French Creole (0.79%). Birth and death certificates from the National Center for Health Statistics are used as the data source. It is important to note that the estimation methodology for net international migration has changed significantly, resulting in a revised 2010-2017 international migration estimate that is 31 percent lower than the previous vintage. The loss suggests that New York’s robust post-recession expansion since 2010 has finally slowed, halting what the city’s leading demographer had called a “remarkable growth story.” For example, to produce the July 1, 2018 estimates, the addresses of tax filers in 2016 and 2017 are compared. They use data from multiple sources to estimate annual population change since the last decennial census in 2010. The estimated decline, to 8,398,748 from a record 8,438,271 in 2017, covers the year ending on July 1, 2018. Births and deaths are compiled using data from the national vital statistics system. The calculation is based on the average growth rate of 0.55% over the last nine years, since 2011. Many workers were moving to New York City to take advantage of employment opportunities there, which contributed to the population growth. New York City is frequently shortened to simply “New York,” “NY”, or “NYC”.
Previously, the distribution was based on ACS data on Year of Entry (YOE) of the foreign-born population. This method assumes that post-census population change can be closely approximated using vital statistics data on births and deaths, along with other administrative and survey data that provide a picture of migration patterns. The city has many nicknames, such as “The Big Apple” and “The Capital of the World.” This decline, however, is likely overstated and has lowered total population estimates for the city, which now paint a different picture than the estimates issued just one year ago. Staten Island recorded a gain of 663. The city’s lifestyle also attracts a lot of young people. The religious makeup of New York City is: 33% are Catholic; 23% are Protestant; 3% are another Christian faith; 24% self-identified with no organized religious affiliation; 18.4% are Jewish. But immigration from abroad usually makes up that loss, and tends to push the city’s population higher.
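The components-of-change accounting that runs through this article (births, deaths, net international migration, and net domestic migration) reduces to the demographic balancing equation. A minimal sketch; all component values below are illustrative, not actual Census Bureau estimates:

```python
def update_population(base, births, deaths, net_international, net_domestic):
    """Demographic balancing equation: next-year population equals the base
    population plus natural change (births minus deaths) plus net migration."""
    natural_change = births - deaths
    return base + natural_change + net_international + net_domestic

# Illustrative (made-up) annual components for a city of 8.4 million,
# echoing the article's pattern: natural increase and international inflow
# partially offset by domestic outflow.
next_pop = update_population(
    base=8_400_000,
    births=110_000,
    deaths=55_000,
    net_international=45_000,
    net_domestic=-85_000,
)
print(next_pop)  # prints 8415000
```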
Source: https://felliscliffechapel.co.uk/journal/dcb8de-nottingham-forest-1987/dcb8de-new-york-city-population-growth
“Fight Against Stupidity And Bureaucracy” The number fifteen is perhaps best known today because of Andy Warhol’s fifteen minutes of fame statement. Other facts about fifteen include, - in mathematics fifteen is what is known as a triangular number, a hexagonal number, a pentatope number and the 4th Bell number; - fifteen is the atomic number of phosphorus; - 15 Madadgar is designated as an emergency number in Pakistan, for mobile phones, similar to the international GSM emergency number 112, if 112 is used in Pakistan, then the call is routed to 15; - Passover begins on the 15th day of the Hebrew month of Nisan; - in Spanish culture 15 is the age when a Hispanic girl celebrates her quinceañera; - it is the number of days in each of the 24 cycles of the Chinese calendar; - it is the number of guns in a gun salute to Army, Marine Corps, and Air Force Lieutenant Generals, and Navy and Coast Guard Vice Admirals; - it is the number of checkers each side has at the start of a backgammon game; - and it is the number corresponding to The Devil in tarot cards. 
- there are 15 players on the field in each rugby union team at any given time; - in tennis, the number 15 represents the first point gained in a game; - in rugby union, the jersey number 15 is worn by the starting fullback; - the jersey number 15 has been retired by several North American sports teams in honor of past playing greats or other key figures: in Major League Baseball the New York Yankees, for Thurman Munson: in the NBA the Boston Celtics, for Hall of Famer Tom Heinsohn; the Dallas Mavericks, for Brad Davis; the Detroit Pistons, for Vinnie Johnson; the New York Knicks have retired the number twice, first for Dick McGuire, and then for Earl Monroe; the Philadelphia 76ers, for Hall of Famer Hal Greer; the Portland Trail Blazers, for Larry Steele: in the NHL: the Boston Bruins, for Hall of Famer Milt Schmidt: and in the NFL: the Green Bay Packers, for Hall of Famer Bart Starr; and the Philadelphia Eagles, for Hall of Famer Steve Van Buren. - The 15th President of the United States was Democratic Party candidate James Buchanan (1791–1868) who was in office from March 4, 1857 to March 4, 1861. His VP was John C. Breckinridge. - He is the only president from Pennsylvania, the only president who remained a lifelong bachelor, and the last president born in the 18th century. - The 15th Amendment to the Constitution granted African American men the right to vote by declaring that the “right of citizens of the United States to vote shall not be denied or abridged by the United States or by any state on account of race, color, or previous condition of servitude.” Although ratified on February 3, 1870, the promise of the 15th Amendment would not be fully realized for almost a century. Through the use of poll taxes, literacy tests and other means, Southern states were able to effectively disenfranchise African Americans. It would take the passage of the Voting Rights Act of 1965 before the majority of African Americans in the South were registered to vote.
- Special Field Orders, No. 15 were military orders issued during the American Civil War, on January 16, 1865, by General William Tecumseh Sherman, commander of the Military Division of the Mississippi of the United States Army. They provided for the confiscation of 400,000 acres of land along the Atlantic coast of South Carolina, Georgia, and Florida and the dividing of it into 40-acre parcels, on which were to be settled approximately 18,000 freed slave families and other Blacks then living in the area. Brig. Gen. Rufus Saxton, an abolitionist from Massachusetts who had previously organized the recruitment of black soldiers for the Union Army, was put in charge of implementing the orders. The orders had little concrete effect, as they were revoked in the fall of that same year by President Andrew Johnson, who succeeded Abraham Lincoln after his assassination. Apollo 15 was launched on July 26th, 1971, and landed on July 30th, 1971, at Hadley Rille. Splash Down was on August 7th, 1971. The crew was David R. Scott, James B. Irwin and Alfred M. Worden. At the time, NASA called it the most successful manned flight ever achieved. Apollo 15 was the ninth manned mission in the Apollo space program, the fourth to land on the Moon, and the eighth successful manned mission. It was the first of the longer “J Mission” expeditions to the moon, where the terrain was explored in some detail, and there was a much greater emphasis on science than had previously been possible. The flight of Apollo 15 featured the first use of the Lunar Rover, which permitted Scott and Irwin to leave the Lunar Module “Falcon” behind and drive around over more than 27 kilometers of lunar ground. The astronauts found and brought back the “Genesis Rock,” a chunk of ancient lunar crust that has been extensively studied for clues about the origins of the moon and the Earth.
During the return flight aboard the Command Module “Endeavour,” Alfred Worden became the first man to perform a space walk outside of earth’s orbit as he went outside to retrieve some film from the side of the space craft. Although the mission accomplished its objectives, this success was somewhat overshadowed by negative publicity that accompanied public awareness of postage stamps carried without authorization by the astronauts, who had made plans to sell them upon their return. - The best known aircraft with this designation is the F-15 Eagle. It made its first flight in July 1972, and the first flight of the two-seat F-15B (formerly TF-15A) trainer was made in July 1973. The first Eagle (F-15B) was delivered in November 1974. In January 1976, the first Eagle destined for a combat squadron was delivered. The single-seat F-15C and two-seat F-15D models entered the Air Force inventory beginning in 1979. - The X-15 is perhaps the most ambitious aircraft ever created. It was built to push the limits of flight and explore the possibilities of space travel. During its research program the aircraft set unofficial world speed and altitude records of 4,520 mph (Mach 6.7 on Oct. 3, 1967, with Air Force pilot Pete Knight at the controls) and 354,200 ft (on Aug. 22, 1963, with NASA pilot Joseph Walker in the cockpit). - In the course of its flight research, the X-15’s pilots and instrumentation yielded data for more than 765 research reports. - The X-15 had no landing gear, but rather skidded to a stop in a 200 mph landing on skis. It had reaction controls for attitude control in space, and was a major step on the path toward space exploration. Much of what was learned on the X-15 was applied to the Space Shuttle. - With the exception of the Kalashnikov, the ArmaLite AR-15 is perhaps the best known assault rifle in the world.
It is a lightweight, 5.56 mm, magazine-fed, semi-automatic rifle, with a rotating-lock bolt, actuated by direct impingement gas operation or long/short stroke piston operation. It is manufactured with the extensive use of aluminum alloys and synthetic materials. - The AR-15 was first built by ArmaLite as a selective fire assault rifle for the United States armed forces. Because of financial problems, ArmaLite sold the AR-15 design to Colt. The select-fire AR-15 entered the US military system as the M16 rifle. Colt then marketed the Colt AR-15 as a semi-automatic version of the M16 rifle for civilian sales in 1963. The name “AR-15” is a Colt registered trademark, which refers only to the semi-automatic rifle. - Unfortunately its characteristics also made it a favorite weapon of terrorist organizations. 15 Gun Salute - A 15 gun salute is accorded to a 3-star General The Plus 15 Skyway The Plus 15 or +15 Skyway network in Calgary, Alberta, Canada, is the world’s second most extensive pedestrian skywalk system, with a total length of 16 kilometers (9.9 miles) and 59 bridges. The system is so named because the skywalks are approximately 15 feet (approximately 4.5 metres) above street level. (Some Plus 15 skywalks are multi-level, with higher levels being referred to as +30s and +45s.) The system was conceived and designed by architect Harold Hanen, who worked for the Calgary Planning Department from 1966 to 1969. It provides a pleasant alternative to the cold streets in the winters which can be harsh. The 15 Puzzle One of the most famous puzzles, the 15-puzzle (also called Gem Puzzle, Boss Puzzle, Game of Fifteen, Mystic Square and many others) is a sliding puzzle that consists of a frame of numbered square tiles in random order with one tile missing. The puzzle also exists in other sizes, particularly the smaller 8-puzzle. 
If the size is 3×3 tiles, the puzzle is called the 8-puzzle or 9-puzzle, and if 4×4 tiles, the puzzle is called the 15-puzzle or 16-puzzle named, respectively, for the number of tiles and the number of spaces. The object of the puzzle is to place the tiles in numerical order by making sliding moves that use the empty space. And finally, The Church Choir But one of the most unusual occurrences of the number concerns fifteen members of a church choir in Beatrice, Nebraska, who, though due at practice at 7:20, were all late on the evening of March 1, 1950. - the minister, his wife and daughter were delayed while his wife ironed the daughter’s dress; - another girl waited to finish a geometry problem for homework; - one couldn’t start her car; - two waited to hear the end of an exciting radio program; - one mother and daughter were late because the mother had to call the daughter twice to wake her from a nap; and so on. All the reasons seemed ordinary. In total there were ten separate and quite unconnected reasons for the lateness of the fifteen persons. It was rather fortunate that none of the fifteen arrived on time at 7:20, for at 7:25 the church building was destroyed in an explosion. Life Magazine reported that the members of the choir wondered if their delay was “an act of God.” The Mathematician Warren Weaver, in his book, ‘Lady Luck: The Theory of Probability’, calculates the staggering odds against chance for this event as about one in a million.
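Weaver's one-in-a-million figure can be reproduced under a deliberately simple model that is assumed here, not taken from his book: if each of the ten independent causes of delay had roughly a 1-in-4 chance of occurring on any given evening, the probability of all ten coinciding is:

```python
# Assumed model (illustrative, not Weaver's published calculation):
# ten independent delay events, each with probability 0.25 on a given evening.
p_single = 0.25
p_all_ten = p_single ** 10  # probability that all ten delays coincide
print(p_all_ten)  # about 9.54e-07, i.e. roughly one in a million
```

The exact per-event probability Weaver used would change the result, but any figure in this neighborhood, raised to the tenth power, lands near one in a million.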
Source: https://fasab.wordpress.com/tag/scott/
The Opaline gourami is also known as the Marbled gourami and is a color variant of the Blue gourami or Three-spot gourami, and you may find these fish in your local fish store under these names. The scientific name of the Opaline gourami is Trichopodus trichopterus (formerly Trichogaster trichopterus). The variable, attractive patterns of the Opaline gourami and its peaceful nature make this fish a popular addition to many community tanks. However, male Marbled gouramis can be territorial with each other, although that behavior is not usually a problem provided that the tank is large enough. In this guide, we tell you everything that you need to know about the care of the Opaline gourami, including the perfect tank setup, what food to offer your fish, and which species make the best tankmates. Opaline gourami origins The Opaline gourami does not occur in the wild. However, Three-spot gouramis are found in southeastern Asia, from the Yunnan in China through Cambodia, Malaysia, Laos, Myanmar, Thailand, Singapore, and Vietnam, through to Java, Sumatra, and Borneo. The species is also found in parts of India, Sulawesi, the Philippines, and Trinidad. The fish inhabits lowland wetland areas, including swamps, marshes, peatlands, canals, and slow-flowing streams, preferring heavily planted, sluggish, or standing water. Gouramis are omnivorous, feeding on small crustaceans, zooplankton, and insect larvae. Creating the Opaline gourami The Opaline gourami is a variety of Three-spot gourami. Commercial fish breeders in Eastern Europe and the Far East pair individuals for their coloration, producing enhanced colors over several generations. Back in 1970, the Gold and Platinum gourami were produced, adding to this already very popular variety of aquarium fish. The predecessor to the modern Opaline gourami is the Cosby gourami that was developed in the USA by a breeder of that name.
The Cosby gourami is a form of early Blue gourami with a silver-blue base color enhanced with darker blue markings, and it is from this strain that the Opaline gourami is derived. Because of the inbreeding that takes place to create new colors, you would be well advised to take great care when you’re looking for fish to buy. Look out for fish that are free from obvious skeletal deformities or injuries. That said, Opaline gouramis are generally long-lived and are some of the hardiest aquarium fish available, making them an excellent choice for newbie hobbyists. Trichopodus trichopterus is a long, oval-shaped fish that has large, rounded fins. The ventral fins are long, flowing, and threadlike and are used by the fish as sensory organs thanks to their touch-sensitive cells. Opaline gouramis have a labyrinth organ. The labyrinth organ is a respiratory organ that allows the fish to absorb oxygen from the air into its blood. Gourami fish can grow to around six inches in length, reaching breeding condition at around three inches. These are long-lived fish, having a lifespan of between four and six years on average. The fish’s body is pale blue with a darker, marbled patterning that varies across all color forms. Typically, dark splotches appear at the base of the tail and pectoral fins, creating a beautiful pattern. Care of the Opaline gourami Gouramis are very easy fish to care for, and they are hardy too. So, by offering your fish the very best care, you can look forward to enjoying a beautiful, long-lived specimen to display in your tank. As we’ve mentioned, Opaline gouramis can grow to between four and six inches in length, and they do require plenty of open swimming space. So, the minimum tank size that you provide should be at least 35 gallons.
Although the gourami’s labyrinth organ enables the fish to survive in oxygen-depleted water, that doesn’t mean that water changes are unnecessary, and these fish will suffer the same health conditions as any other species if left to live in a dirty environment. So, the tank must have an efficient filtration system, and 25% water changes should be carried out each week. The current created by the filtration system should not be too strong, as that will stress the gouramis. Opaline gouramis swim in all areas of the water column, although a long tank is better than a tall one because of the species’ habit of using its labyrinth organ to take gulps of air from the surface from time to time. If possible, the temperature of the room in which the tank is situated should be as close as possible to that of the tank water, so as not to damage the labyrinth organ. Gouramis need a water temperature of 73° to 82° Fahrenheit. If you want to encourage spawning, the water temperature should be around 80° Fahrenheit. The water pH should be in the range of 6.0 to 8.8, with a hardness of between 5 and 35 dGH. The Marbled gourami shows its colors best against a dark substrate. Although these fish are usually pretty confident, it’s a good idea to create a few hiding places and dense planting to shelter fry and other smaller, shyer species. Although gouramis are omnivorous, they won’t bother your plants and often enjoy exploring in and around the stems and leaves. Floating plants are also a good addition to the aquarium, as they help to diffuse bright lighting. However, you must prune out excess growth that could prevent the fish from accessing the surface to breathe, using their labyrinth organ. Although these fish are regarded as a good community species, they are not as peaceful as other species of gourami. Older specimens can be aggressive toward smaller fish, and males may be territorial.
So, the best tankmates are those that are of a similar temperament and size. Also, Opaline gouramis have individual temperaments. Some are peaceful and almost shy in their behavior, whereas others are aggressive bullies who will hassle their smaller tankmates. When choosing tankmates for your Opaline gouramis, avoid species that are known to be fin nippers and very small fish that could be viewed as prey. Suitable tankmates for these fish include:
- loricariid catfish
Other species of medium to large gouramis can usually be included in the mix without problems. Snails and invertebrates are usually safe, as long as they are of a large size. However, shrimp eggs and larvae will most likely be eaten by the gouramis, so be sure to segregate the species if you want to breed shrimp.

Diet and nutrition

Opaline gouramis are omnivorous and will accept a wide variety of fresh, live, flake, pellet, and frozen foods. A well-balanced diet should include a good-quality pellet or flake food, supplemented with bloodworms, white worms, brine shrimp, and similar. You can also offer fresh vegetables, such as blanched lettuce. Feed your fish once or twice per day, offering just enough food to last for a minute or two.

Breeding

Opaline gouramis spawn readily in the home aquarium, making breeding them a fun project for the newbie tropical fish keeper. Differentiating between the sexes is relatively straightforward. Male specimens have a longer, more pointed dorsal fin than their female counterparts, which have a shorter, rounder dorsal. Opaline gouramis are bubble nest builders and are easy to breed, provided that the tank conditions are right.

The breeding tank

Provide the fish with a separate, shallow breeding tank. Keep the water to a depth of around five to six inches, and be sure to keep the current in the tank to a minimum. Some peat filtration or a small, air-powered sponge filter will help to keep the water clean while providing minimal disruption.
Water temperature should be around 80° Fahrenheit, and there should be an adequate surface area. The tank should be densely planted with long-stemmed plants that grow to the surface to help keep the bubble nest in place. Introduce a healthy pair of adult fish to the breeding tank, and offer them small portions of live and frozen food two to three times a day to bring the female fish into spawning condition. Females will begin to fill out with eggs when spawning is imminent. The male fish will busy himself building a bubble nest. Once the nest is complete, the male will display to the female, flaring his fins and raising his tail to encourage her to spawn beneath the nest. The eggs are lighter than water and will float up into the nest. There can be between 700 and 800 eggs in one spawn! As soon as the eggs are laid, remove the female. Male gouramis guard their eggs fiercely until they hatch and have been known to kill the female fish. Watch closely for the fry to hatch. As soon as hatching occurs, the fry feed on their egg sacs for a day or so. Once the fry are free-swimming, the male should be removed from the tank, as he may eat the fry. You can feed the fry a liquid fry food or infusoria until they are large enough to eat baby brine shrimp.

Health

The Opaline gourami is a very hardy species that will remain healthy and thrive in a well-maintained tank. Problems generally occur if the water quality is poor, the temperature is too low, or the fish are not fed a balanced, nutritious diet. Common fish diseases that may affect gouramis include:
- constipation
- bacterial infections
- Hole in the Head disease
- Ich

Constipation

Fish with constipation usually have problems swimming, floating up to the water surface, or becoming trapped on the substrate. That can be a real problem for labyrinth breathers, including gouramis.
The condition is usually caused by a food blockage in the fish's digestive system and is easily remedied by starving the fish for a day or two and then feeding them a shelled pea or some live food.

Bacterial infections

Certain species of bacteria live undetected in most fish tanks, causing no problems until the fish become stressed and weakened, usually by poor water conditions or incorrect feeding. Most times, you can fix bacterial infections by remedying the water condition issues and treating the tank water with an appropriate proprietary over-the-counter medication that you'll find in your local fish store.

Hole in the Head disease

Thought to be caused by a parasite, Hole in the Head disease causes pitted lesions on the fish's lateral line and head. The condition is made worse by environmental changes and poor water conditions. As the holes become larger, secondary fungal and bacterial infections develop. Ultimately, the lesions get worse, and severe infection usually kills the fish. Provided that you catch it early, the condition can be treated by adding an antibiotic to the tank water.

Ich

Ich is a disease that's caused by the Ichthyophthirius multifiliis parasite. The condition is also commonly known as white spot disease, so named for the scattering of tiny white spots across the fish's body, fins, gills, and tail. Affected fish will flick or rub themselves against the substrate or tank decorations in an attempt to dislodge the parasites and relieve the irritation that they cause. Ich is easily treated by adding an over-the-counter preparation to the tank water. The Ich parasite is present in most tanks, only becoming a problem when fish are weakened due to stress or disease. Any new fish should be quarantined before introduction to your main display tank, and new substrate, plants, and tank ornaments should be thoroughly cleaned before you put them into your main tank.
Also, correct feeding and keeping the tank water clean and within the correct parameters for your fish will help to prevent stressing the fish, which leaves them vulnerable to disease. Most species of gouramis, including the Opaline variety, are available from most good fish stores and online, and they are relatively inexpensive to buy, making them an excellent choice for the beginner aquarist. In this part of our comprehensive guide, we provide the answers to some of the most frequently asked questions about keeping Opaline gouramis.

Q: Are Opaline Gouramis aggressive?
A: The species is generally peaceful, although males can become territorial as they get older.

Q: How big do Opaline Gouramis get?
A: These fish grow to between four and six inches in length when mature and should, therefore, be kept in a large aquarium. That's an important point to bear in mind when buying small juvenile fish to add to your community.

Q: Why do Opaline Gouramis change color?
A: Members of the Three-spot gourami family are known to change color when stressed or kept in poor conditions. So, if your fish change color or their patterns fade, check your water quality and observe the tank to make sure that no bullying is going on among members of the community.

Q: How can you tell a male from a female gourami?
A: Male gouramis have a longer, more pointed dorsal fin than females, which have a shorter, rounder dorsal fin. Female gouramis swell considerably when they are ready to spawn and full of eggs.

The Opaline gourami is a beautiful fish that makes an attractive and interesting addition to a community tank. Although generally peaceful as youngsters, older male fish can become aggressive and territorial toward their own species and other smaller fish. You'll need a tank of a decent size to accommodate this gourami species, as they do like plenty of room to swim.
Also, your tank should provide good access to the water’s surface so that these labyrinth breathers can take gulps of oxygen when they need to. These gouramis breed well in a tank environment, which can make a fascinating and rewarding project for new hobbyists.
Dialectic [Gr. dialektike (techne or methodos), the dialectic art or method, from dialegomai, I converse, discuss, dispute; as noun also dialectics; as adjective, dialectical].—(1) In Greek philosophy the word originally signified "investigation by dialogue", instruction by question and answer, as in the heuristic method of Socrates and the dialogues of Plato. The word dialectics still retains this meaning in the theory of education. (2) But as the process of reasoning is more fundamental than its oral expression, the term dialectic came to denote primarily the art of inference or argument. In this sense it is synonymous with logic. It has always, moreover, connoted special aptitude or acuteness in reasoning, "dialectical skill"; and it was because of this characteristic of Zeno's polemic against the reality of motion or change that this philosopher is said to have been styled by Aristotle the master or founder of dialectic. (3) Further, the aim of all argumentation being presumably the acquisition of truth or knowledge about reality, and the process of cognition being inseparably bound up with its content or object, i.e. with reality, it was natural that the term dialectic should be again extended from function to object, from thought to thing; and so, even as early as Plato, it had come to signify the whole science of reality, both as to method and as to content, thus nearly approaching what has been from a somewhat later period universally known as metaphysics.
It is, however, not quite synonymous with the latter in the objective sense of the science of real being, abstracting from the thought processes by which this real being is known, but rather in the more subjective sense in which it denotes the study of being in connection with the mind, the science of knowledge in relation to its object, the critical investigation of the origin and validity of knowledge as pursued in psychology and epistemology. Thus Kant describes as "transcendental dialectic" his criticism of the (to him futile) attempts of speculative human reason to attain to a knowledge of such ultimate realities as the soul, the universe, and the Deity; while the monistic system, in which Hegel identified thought with being and logic with metaphysics, is commonly known as the "Hegelian dialectic". THE DIALECTIC METHOD IN THEOLOGY. [For dialectic as equivalent to logic, see art. LOGIC, and cf. (2) above. It is in this sense we here speak of dialectic in theology.]—The traditional logic, or dialectic, of Aristotle's "Organon"—the science and art of (mainly deductive) reasoning—found its proper application in exploring the domain of purely natural truth, but in the early Middle Ages it began to be applied by some Catholic theologians to the elucidation of the supernatural truths of the Christian Revelation. The perennial problem of the relation of reason to faith, already ably discussed by St. Augustine in the fifth century, was thus raised again by St. Anselm in the eleventh.
During the intervening and earlier centuries, although the writers and Fathers of the Church had always recognized the right and duty of natural reason to establish those truths preparatory to faith, the existence of God and the fact of revelation, those praeambula fidei which form the motives of credibility of the Christian religion and so make the profession of the Christian Faith a rationabile obsequium, a "reasonable service", still their attitude inclined more to the Crede ut intelligas (Believe that you may understand) than to the Intellige ut credas (Understand that you may believe); and their theology was a positive exegesis of the contents of Scripture and tradition. In the eleventh and twelfth centuries, however, rational speculation was applied to theology not merely for the purpose of proving the praeambula fidei, but also for the purpose of analysing, illustrating, and showing forth the beauty and the suitability of the mysteries of the Christian Faith. This method of applying to the contents of Revelation the logical forms of rational discussion was called "the dialectic method of theology". Its introduction was opposed more or less vigorously by such ascetic and mystic writers as St. Peter Damian, St. Bernard, and Walter of St. Victor; chiefly, indeed, because of the excess to which it was carried by those rationalist and theosophist writers who, like Peter Abelard and Raymond Lully, would fain demonstrate the Christian mysteries, subordinating faith to private judgment. The method was saved from neglect and excess alike by the great Scholastics of the thirteenth century, and was used to advantage in their theology. After five or six centuries of fruitful development, under the influence, mainly, of this deductive dialectic, theology has again been drawing, for a century past, abundant and powerful aid from a renewed and increased attention to the historical and exegetical studies that characterized the earlier centuries of Christianity.
DIALECTIC AS FUNDAMENTAL PHILOSOPHY OF HUMAN KNOWLEDGE [cf. (3), above].—(a) The Platonic Dialectic.—From the beginnings of Greek philosophy reflection has revealed a twofold element in the contents of the knowing human mind: an abstract, permanent, immutable element, usually referred to the intellect or reason; and a concrete, changeable, ever-shifting element, usually referred to the imagination and the external senses. Now, can the real world possess such opposite characteristics? Or, if not, which set really represents it? For Heraclitus and the earlier Ionians, stability is a delusion; all reality is change—πάντα ῥεῖ. For Parmenides and the Eleatics, change is delusion; reality is one, fixed, and stable. But then, whence the delusion, if such there be, in either alternative? Why does our knowledge speak with such uncertain voice, or which alternative are we to believe? Both, answers Plato, but intellect more than sense. What realities, the latter asks, are revealed by those abstract, universal notions we possess—of being, number, cause, goodness, etc., by the necessary, immutable truths we apprehend and the comparison of those notions? The dialectic of the Platonic "Ideas" is a noble, if unsuccessful, attempt to answer this question. These notions and truths, says Plato, have for objects ideas which constitute the real world, the mundus intelligibilis, of which we have thus a direct and immediate intellectual intuition. These beings, which are objects of our intellectual knowledge, these ideas, really exist in the manner in which they are represented by the intellect, i.e. as necessary, universal, immutable, eternal, etc. But where is this mundus intelligibilis? It is a world apart (χωρίς), separate from the world of fleeting phenomena revealed to the senses. And is this latter world, then, real or unreal?
It is, says Plato, but a shadowy reflex of reality, a dissolving-view of the ideas, about which our conscious sense-impressions can give us mere opinion (δόξα), but not that reliable, proper knowledge (ἐπιστήμη) which we have of the ideas. This is unsatisfactory. It is an attempt to explain an admitted connection between the noumenal and the phenomenal elements in knowledge by suppressing the reality of the latter altogether. Nor is Plato any more successful in his endeavor to show how the idea, which for him is a really existing being, can be at the same time one and manifold, or, in other words, how it can be universal, like the mental notion that represents it. (b) Aristotelean and Scholastic Dialectic.—Aristotle taught, in opposition to his master Plato, that these "ideas" or objects of our intellectual notions do not exist apart from, but are embodied in, the concrete, individual data of sense. It is one and the same reality that reveals itself under an abstract, universal, static aspect to the intellect, and under a concrete, manifold, dynamic aspect to the senses. The Christian philosophers of the Middle Ages took up and developed this Aristotelean conception, making it one of the cardinal doctrines of Scholastic philosophy, the doctrine of modern Realism. The object of the abstract, universal notion, they taught, is real being; it constitutes and is identical with the individual data of sense-knowledge; it is numerically multiplied and individualized in them, while it is unified as a class-concept or universal notion (unum commune pluribus) by the abstractive power of the intellect which apprehends the element common to the individuals of a class without their differentiating characteristics.
The universal notion thus exists as universal only in the intellect, but it has a foundation in the individual data of sense, inasmuch as the content of the notion really exists in these sense-data, though the mode of its existence there is other than the mode in which the notion exists in the intellect: universale est formaliter in mente, fundamentaliter in re. Nor does the intellect, in thus representing individual phenomena by universal notions, falsify its object or render intellectual knowledge unreliable; it represents the Real inadequately, no doubt, not exhaustively or comprehensively, yet faithfully so far as it goes; it does not misrepresent reality, for it merely asserts of the latter the content of its universal notion, not the mode (or universality) of the latter, as Plato did. But if we get all our universal notions, necessary judgments, and intuitions of immutable truth through the ever-changing, individual data of sense, how are we to account for the timeless, spaceless, changeless, necessary character of the relations we establish between these objects of abstract, intellectual thought: relations such as "Two and two are four", "Whatever happens has a cause", "Vice is blameworthy"? Not because our own or our ancestors' perceptive faculties have been so accustomed to associate certain elements of consciousness that we are unable to dissociate them (as materialist and evolutionist philosophers would say); nor yet, on the other hand, because in apprehending these necessary relations we have a direct and immediate intuition of the necessary, self-existent, Divine Being (as the Ontologists have said, and as some interpret Plato to have meant); but simply because we are endowed with an intellectual faculty which can apprehend the data of sense in a static condition and establish relations between them abstracting from all change.
By means of such necessary, self-evident truths, applied to the data of sense-knowledge, we can infer that our own minds are beings of a higher (spiritual) order than material things and that the beings of the whole visible universe—ourselves included—are contingent, i.e. essentially and entirely dependent on a necessary, all-perfect Being, who created and conserves them in existence. In opposition to this creationist philosophy of Theism, which arrives at an ultimate plurality of being, may be set down all forms of Monism or Pantheism, the philosophy which terminates in the denial of any real distinction between mind and matter, thought and thing, subject and object of knowledge, and the assertion of the ultimate unity of being. (c) The Kantian Dialectic.—While Scholastic philosophers understand by reality that which is the object directly revealed to, and apprehended by, the knowing mind through certain modifications wrought by the reality in the sensory and intellectual faculties, idealist or phenomenalist philosophers assume that the direct object of our knowledge is the mental state or modification itself, the mental appearance, or phenomenon, as they call it; and because we cannot clearly understand how the knowing mind can transcend its own revealed, or phenomenal, self or states in the act of cognition, so as to apprehend something other than the immediate, empirical, subjective content of that act, these philosophers are inclined to doubt the validity of the “inferential leap” to reality, and consequently to maintain that the speculative reason is unable to reach beyond subjective, mental appearances to a knowledge of things-in-themselves. Thus, according to Kant, our necessary and universal judgments about sense-data derive their necessity and universality from certain innate, subjective equipments of the mind called categories, or forms of thought, and are therefore validly applicable only to the phenomena or states of sense-consciousness. 
We are, no doubt, compelled to think of an unperceived real world, underlying the phenomena of external sensation, of an unperceived real ego, or mind, or soul, underlying the conscious flow of phenomena which constitute the empirical or phenomenal ego, and of an absolute and ultimate underlying, unconditioned Cause of the ego and the world alike; but these three ideas of the reason—the soul, the world, and God—are mere natural, necessary products of the mental process of thinking, mere regulative principles of thought, devoid of all real content, and therefore incapable of revealing reality to the speculative reason of man. Kant, nevertheless, believed in these realities, deriving a subjective certitude about them from the exigencies of the practical reason, where he considered the speculative reason to have failed. (d) The Hegelian Dialectic.—Post-Kantian philosophers disagreed in interpreting Kant. Fichte, Schelling, and Hegel developed some phases of his teaching in a purely monistic sense. If what Kant called the formal element in knowledge—i.e. the necessary, universal, immutable element—comes exclusively from within the mind, and if, moreover, mind can know only itself, what right have we to assume that there is a material element independent of, and distinct from, mind? Is not the content of knowledge, or in other words the whole sphere of the knowable, a product of the mind or ego itself? Or are not individual human minds mere self-conscious phases in the evolution of the one ultimate, absolute Being? Here we have the idealistic monism or pantheism of Fichte and Schelling. Hegel's dialectic is characterized especially by its thoroughgoing identification of the speculative thought process with the process of Being. His logic is what is usually known as metaphysics: a philosophy of Being as revealed through abstract thought. His starting-point is the concept of pure, absolute, indeterminate being; this he conceives as a process, as dynamic.
His method is to trace the evolution of this dynamic principle through three stages: (1) the stage in which it affirms, or posits, itself as thesis; (2) the stage of negation, limitation, antithesis, which is a necessary corollary of the previous stage; (3) the stage of synthesis, return to itself, union of opposites, which follows necessarily on (1) and (2). Absolute being in the first stage is the idea simply (the subject-matter of logic); in the second stage (of otherness) it becomes nature (philosophy of nature); in the third stage (of return or synthesis) it is spirit (philosophy of spirit—ethics, politics, art, religion, etc.). Applied to the initial idea of absolute Being, the process works out somewhat like this: All conception involves limitation, and limitation is negation; positing or affirming the notion of Being involves its differentiation from non-being and thus implies the negation of being. This negation, however, does not terminate in mere nothingness; it implies a relation of affirmation which leads by synthesis to a richer positive concept than the original one. Thus: absolutely indeterminate being is no less opposed to, than it is identical with, absolutely indeterminate nothing: or BEING-NOTHING; but in the oscillation from the one notion to the other both are merged in the richer synthetic notion of BECOMING. This is merely an illustration of the a priori dialectic process by which Hegel seeks to show how all the categories of thought and reality (which he identifies) are evolved from pure, indeterminate, absolute, abstractly-conceived Being. It is not an attempt at making his system intelligible. To do so in a few sentences would be impossible, if only for the reason that Hegel has read into ordinary philosophical terms meanings that are quite new and often sufficiently remote from the currently accepted ones.
To this fact especially is due the difficulty experienced by Catholics in deciding with any degree of certitude whether, or how far, the Hegelian Dialectic—and the same in its measure is true of Kant’s critical philosophy also—may be compatible with the profession of the Catholic Faith. That these philosophies have proved dangerous, and have troubled the minds of many, was only to be expected from the novelty of their view-points and the strangeness of their methods of exposition. Whether, in the minds of their leading exponents, they contained much, or little, or anything incompatible with Theism and Christianity, it would be as difficult as it would be perhaps idle to attempt to decide. Be that as it may, the attitude of the Catholic Church towards philosophies that are new and strange in their methods and terminology must needs be an attitude of alertness and vigilance. Conscious of the meaning traditionally attached by her children to the terms in which she has always expounded those ultimate philosophico-religious truths that lie partly along and partly beyond the confines of natural human knowledge, and realizing the danger of their being led astray by novel systems of thought expressed in ambiguous language, she has ever wisely warned them to “beware lest any man cheat [them] by philosophy, and vain deceit” (Coloss., ii, 8).
Virtual Museum ID: 19-DV18

Gold is a valuable, highly prized mineral used in everything from jewellery to electronics and dentistry. Gold is desirable due to its special properties, such as malleability and resistance to tarnishing. Gold is commonly microscopic or embedded within or around sulphide grains. Free visible gold occurs as disseminated grains, or rarely as crystals. Crystals of gold commonly form within or around quartz. In its natural mineral form, gold is commonly alloyed with silver. Gold is distinguishable by its characteristic golden yellow colour and extreme heaviness.

Hematite is an abundant mineral found in the shallow crust. It occurs in sedimentary, igneous, and metamorphic rocks throughout the earth. It is an iron oxide; pure hematite is 70% iron and 30% oxygen by weight. Its luster can range from earthy to metallic, and its colour ranges from red and brown to black and silver. Hematite is used to produce pigments, for heavy media separation, as ballast, and in many other products.

The information listed below relates to the current holding location or collection that the sample is from, and whether the item is viewable at that location or is part of a private collection. Coordinates are given as guides, and we remind you that collecting specimens from these locations is not allowed. Caution is advised visiting such sites, and Below BC assumes no responsibility for any injuries or trespassing charges that may occur as a result of the viewer entering these sites.

- Original Collection: Drifter Ventures Ltd. (DV)
- Virtual Museum ID: 19-DV18
- Date Added to VM: 2019-08-15
- Sample Origin: Stewart area, BC
- Specific Site: Clone property
- Datum: 09 (NAD 83)
- Primary Features: Au, Hematite
- Primary Mineral Formula: Au, Fe2O3
- Primary Category: native element

Advanced Geological Information

The following section provides geological data relating to the specimen or the site it was collected from, when available.
Information has been obtained from various sources, including private and government datasets, but may not be up to date. Any geological time periods or ages listed often relate to the primary geology of the area and may not be the actual date of an event such as mineral formation.

- Geological Formation: Hazelton Group
- Geological Period: Lower Jurassic
- Stratigraphic Age: 174.1 to 201.3 Million Years
- Minfile ID: 103P 251

The Clone prospect is located about 20 kilometres southeast of Stewart, at the southern end of the Cambria Icefield. Disseminated native gold and minor amounts of chalcopyrite, galena, pyrite and erythrite are hosted by shear-controlled veins and stockworks. Two types of mineralization have been identified along a strike distance of 1.25 kilometres, associated with major northwesterly trending (320 degrees) shear zones showing both ductile and brittle styles of deformation: i) hematite-cemented, chlorite +/- silica-rich breccia; and ii) semi- to massive sulphide stringer pods/zones. In addition, numerous splays are horsetailed off fault structures. Host rocks are Lower Jurassic Hazelton Group mega-breccia (debris flow?) and andesitic pyroclastic rocks to the east and argillaceous sediments to the west. Locally, a fine-grained dacite porphyry dike intrudes both the host rocks and the mineralized zones. In the H structures, gold mineralization appears to be directly related to the presence of hematite and/or specularite in the hematite-cemented structures. Individual veins range up to 7 metres in width. Chalcopyrite is commonly associated with the gold-bearing zones. In the sulphide-bearing zones, veins range up to 6 metres in width. Cobalt assays up to 0.71 per cent were reported from trenches. The exploration focus is a possible 'elevation' control to dilational-controlled mineralization, with a corresponding increase in sulphides. Chlorite is present throughout.
This 'elevation' control is suspected in drillhole 96-18 where a 30-metre intersection assaying 12.34 grams per tonne gold was obtained. Rocks are routinely stained for K-spar alteration; it appears that it is an initial (early), very pervasive phase in the altered andesitic rocks (and confirmed by thin section studies). To date, drilling has tested about a 400 metre strike length of this system; the deepest mineralization section being to 200 metres. The rest of the systems are being sampled by hand-blasted trenches and (planned) drilling. Although some good, high-grade intersections are being reported, it appears there is difficulty correlating between holes (i.e. mineralization is 'dilational' in nature and may require detailed (e.g. 25 metre centre) drilling to define individual ore shoots). Nonetheless, it appears that the Clone property is a significant gold discovery, with very good potential to develop into a major gold mine. The hematite (+chlorite +silica +/-sericite) cemented zones are steeply dipping and contain specularite, chalcopyrite, magnetite and native gold (high purity > 95 per cent, as determined in the Cominco laboratory. The sulphide-dominated mineralization contains auriferous pyrite +/-arsenopyrite, and locally cobalt-bearing minerals(s) (erythrite bloom). Hematitization appears to be pre-introduction of gold; the specularite-bearing veinlets formed later and contain gold. These zones (H1, H2, and H3; S1 and S2) are en echelon over a major northwest trending 'shear' zone for approximately 60 metres in width. In 1995, with Explore BC Program support, Teuton Resources Corporation carried out an integrated grassroots program of prospecting, geological, geochemical and geophysical surveys, trenching and diamond drilling, mostly concentrated on the southwest corner of the large Red property covering the periphery of the southwest Cambria Icefield. 
This work led to a significant gold discovery on the Clone 1 claim, resulting in an immediate option by Homestake Canada Inc. Teuton Resources Corporation and Minvita Enterprises Ltd. have entered into an agreement with Homestake Canada Inc. and Prime Resources Group Inc. on the Clone property. During 1995, 5.1 line kilometres of magnetic and electromagnetic surveys, 513.8 metres of trenching and 1070 metres of diamond drilling in 13 holes (testing both sulphide-rich and hematite-rich mineralization) were completed and 1542 rock samples were collected and assayed (Explore BC Program 95/96 - G165). In 1996, the property was explored by 1312.8 metres of trenching in 141 trenches, ground geophysics and 11,487.1 metres of drilling in 113 holes. In 1996, drilling traced the hematite-rich H-1 structure over a strike length of 330 metres and a vertical range of 236 metres. A total of 28 holes were drilled on the southeastern end of the zone. The holes intersected rock with grades ranging from 2.85 to 44.23 grams per tonne gold over drill intercepts of 2.2 to 50.9 metres; estimated true width is 36 metres. Cobalt values were as high as 0.13 per cent. Seven holes yielded no significant mineralization. The northern extensions of the H-1 and S-2A were tested by 12 holes. Results ranged from 4.1 metres grading 1.13 grams per tonne gold and 0.06 per cent cobalt (hole 66) to 0.49 metre of 30.51 grams per tonne gold (hole 65) (Northern Miner - November 11, 1996). Another intersection (hole 18) was 61.7 grams per tonne gold and 0.31 per cent cobalt over 5 metres (George Cross News Letter No.192(Oct.6), 1997). As a result of a 17-hole drill program in 1997, Teuton Resources Corp. and Minvita Enterprises Ltd. conclude that cross structures to the sulphide and hematite shear zones control gold-cobalt mineralization. A hornblende granodiorite sill that cuts altered rocks near the main shear/veins was dated at 200.4 +/- 1.3 Ma. The mineralization will predate this 200 Ma date. 
Given the high closure temperature for titanite (650 degrees Celsius) and the upper-crust emplacement of the sill, the date is also the age of crystallization (Fieldwork 2001, pages 135-149). In 2003, Lateegra Resources Corp., under an option agreement with Teuton Resources Corp. and Minvita Enterprises Ltd., drilled nine holes totalling 470.6 metres; four tested the shear zone and three yielded high-grade assays. The most spectacular intersection was in drillhole CL03-2, which twinned a 1996 hole and intersected 80.80 grams per tonne gold over an apparent width of 8.47 metres (Assessment Report 27297). Late in 2005, Canasia Industries Corp. struck an option agreement with Teuton Resources Corp. on the prospect. In 2006, a helicopter-borne geophysical survey totalling 661 line-kilometres was flown over the Clone property on behalf of Teuton Resources Corp. In 2009, Teuton Resources Corp. conducted a diamond drilling program on the Clone property as part of a larger program covering several Stewart area properties. Thirty-six holes were drilled in two phases; however, Assessment Report 31340 covers only the first five holes (up to August 9, 2009), totalling 337.1 metres, from which 210 samples were taken and analyzed. In 2010, Teuton Resources Corp. completed 1354 metres of diamond drilling in 16 holes and took a 34-tonne bulk sample (Information Circular 2011-1, page 20). In 2011, Teuton Resources Corp. conducted a bulk sample program on the Clone property in which each one-tonne lot returned an average grade of 137.1 grams per tonne gold for the 102 tonnes taken. The samples were to be shipped for processing upon completion of metallurgical testing (Information Circular 2012-1, page 16). In 2012, Canasia Industries collected 20 one-tonne bulk samples that yielded an average of 53.1 grams per tonne gold (Information Circular 2013-1, page 12). In 2016, the Clone gold property was owned by Makena Resources Inc., Silver Grail Resources Ltd.
and Teuton Resources Corp.; work that year comprised seven diamond-drill holes with lengths reported to range between 38 and 137 metres. Reported assay results for the first hole included 6.43 metres grading 17.83 grams per tonne gold (Information Circular 2017-1, page 18). In 2017, Sunvest Minerals Corp. collected grab and channel samples and resampled historic drill core at the Clone gold property. Channel sample results included 101 grams per tonne gold over 7.5 metres, including 1.5 metres of 245 grams per tonne gold. Grab samples taken near the edge of retreating glaciers assayed 101 and 93.7 grams per tonne gold (Information Circular 2018-1, page 122).
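The bulk-sample figures reported above (102 tonnes averaging 137.1 grams per tonne gold) imply a contained-gold total that is straightforward to check. A minimal Python sketch, with a helper name of our own choosing:

```python
# Contained-gold estimate from the 2011 bulk-sample figures quoted above
# (102 tonnes averaging 137.1 g/t gold). Purely illustrative arithmetic;
# the function name is ours, not from the source report.

GRAMS_PER_TROY_OZ = 31.1035  # standard troy-ounce conversion

def contained_gold_oz(tonnes: float, grade_g_per_t: float) -> float:
    """Contained gold in troy ounces for a given tonnage and grade."""
    return tonnes * grade_g_per_t / GRAMS_PER_TROY_OZ

# 102 t * 137.1 g/t = 13,984.2 g of gold
print(round(contained_gold_oz(102, 137.1)))  # ~450 troy ounces
```

At roughly 450 troy ounces from 102 tonnes, the grade is exceptionally high for a bulk sample, which is consistent with the 'spectacular' intersections described in the drill results.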
Pet Health Tips: Toxicity

Chocolate Toxicity in Dogs
by Dawn Haas

How many times have you been eating that chocolate chip cookie when you look over and see those sad puppy dog eyes staring at you? You remember hearing that chocolate is toxic to dogs. But what makes chocolate toxic to dogs, and why is it that some dogs ingest it and don’t get sick? Here are some facts to clear up some of the confusion surrounding chocolate toxicity in dogs. Chocolate can indeed be toxic to dogs. In fact, it is one of the 20 most reported poisonings. The ingredient in chocolate that causes the toxicity is theobromine. The minimum toxic level of theobromine is 100-200 mg/kg, with 250-500 mg/kg being the level at which half of the dogs consuming it would die. So what does that mean as far as how much chocolate is toxic? The level of theobromine varies depending on the type of chocolate:

Milk chocolate: 60 mg/oz
Baking chocolate: 450 mg/oz
Semi-sweet chocolate: 260 mg/oz
Hot chocolate: 12 mg/oz
White chocolate: 1 mg/oz

Given these levels, 4 oz of milk chocolate contains about 240 mg of theobromine. Considering that the average chocolate bar contains 2-3 oz of milk chocolate, it would take 2-3 candy bars to produce toxicity in a 10 lb dog. However, a single ounce of baking chocolate could produce severe toxicity in the same size dog. So, how does chocolate make dogs sick? Theobromine causes the release of certain substances, norepinephrine and epinephrine, that increase the dog’s heart rate and can cause arrhythmias. Other signs seen with chocolate toxicity can include increased urination, vomiting, diarrhea or hyperactivity within the first few hours. This can lead to hyperthermia, muscle tremors, seizures, coma and even death. What should be done if a dog does ingest a toxic amount of chocolate? If it has been less than 2 hours, the dog should be made to vomit.
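The dose arithmetic above can be sketched in a few lines of Python. The mg/oz values and the toxicity threshold are those quoted in the article; the function name is ours, and this is an illustration, not veterinary advice:

```python
# Rough chocolate-toxicity estimate for dogs, based on the theobromine
# levels quoted in the article. Illustrative only - call a veterinarian
# for any real ingestion.

THEOBROMINE_MG_PER_OZ = {
    "milk": 60,
    "baking": 450,
    "semi-sweet": 260,
    "hot": 12,
    "white": 1,
}

LB_PER_KG = 2.2046  # pounds per kilogram

def theobromine_dose(chocolate_type: str, ounces: float, dog_weight_lb: float) -> float:
    """Return the theobromine dose in mg per kg of body weight."""
    total_mg = THEOBROMINE_MG_PER_OZ[chocolate_type] * ounces
    weight_kg = dog_weight_lb / LB_PER_KG
    return total_mg / weight_kg

# The article's example: 4 oz of milk chocolate eaten by a 10 lb dog.
print(f"{theobromine_dose('milk', 4, 10):.0f} mg/kg")    # ~53 mg/kg, below the 100-200 mg/kg minimum toxic level
print(f"{theobromine_dose('baking', 1, 10):.0f} mg/kg")  # ~99 mg/kg from one ounce of baking chocolate
```

This also shows why type matters so much: one ounce of baking chocolate delivers nearly twice the theobromine of four ounces of milk chocolate.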
Unfortunately, chocolate tends to form a ball in the stomach and may be difficult to remove. Supportive care should be provided for any other signs the dog is exhibiting. Though it may not be harmful to the dog in small quantities, it is safer to avoid giving chocolate to dogs in general. As with everything else, it’s better to be safe than sorry.

Ethylene glycol toxicity

Ethylene glycol is a highly toxic liquid commonly used as antifreeze in car radiators. Other sources include heat-exchange fluids sometimes used in solar collectors and ice-rink freezing equipment, some brake and transmission fluids, and the diethylene glycol used in color film processing. When the temperature outside begins to drop, it not only reminds us that Christmas is coming but also sets everyone into “winterizing” mode, and antifreeze poisoning becomes a common small-animal toxicity. As part of routine vehicle maintenance, we change antifreeze and seldom think about how we dispose of it. Unfortunately, this poison has a sweet taste and often gets “cleaned up” by our four-legged friends. Poor animals! They have no clue what they are getting into by licking antifreeze spills! Sometimes the ingestion of only 1 or 2 teaspoons of ethylene glycol is enough to cause severe illness or even death. How does antifreeze cause intoxication? Through a series of metabolic changes, ethylene glycol is converted into glycolic acid, which causes severe, sometimes irreversible, kidney damage. As a result, the animal becomes unable to eliminate the toxin from the body, which leads to neurological signs. How do I know that my dog or cat has ingested antifreeze? Most of the time you don’t, because you haven’t witnessed them drinking it. However, you can easily tell that your pet “isn’t doing right”. Peak blood concentration occurs 1-3 hours after ingestion. At this time, you may see signs of alcohol intoxication: depression, knuckling, vomiting or difficulty standing.
As time goes by, the animal’s signs may worsen; it may become totally uncoordinated or even start to seizure. What do I have to do? It is critical to have your pet checked by a veterinarian as soon as possible! Don’t waste time! There is medication available to help your pet recover, but it is effective only in the early stage of the illness. Best of all, it is always easier to prevent intoxication than to treat it. Let’s make sure that our animals have no opportunity to ingest such a lethal poison by keeping all containers closed when not in use and cleaning up any spills immediately.

Tylenol® (Acetaminophen) is Toxic to Cats

Tylenol® helps with our aches and pains, so won’t it help Miss Kitty’s? NO!!! Tylenol®’s active ingredient, acetaminophen, is toxic to cats. It takes only one extra-strength tablet to be lethal to Miss Kitty. A toxic dose in cats is as little as 50-100 mg/kg. That means 165 mg, found in half of a regular-strength tablet, may be toxic to a 7 lb cat. Both humans and cats metabolize acetaminophen in the liver, and one metabolic pathway produces a toxic metabolite. In humans, a molecule called glutathione binds this toxic metabolite and renders it non-toxic. Cats, however, are deficient in glucuronyl transferase, an enzyme in the main detoxification pathway, so more of the toxic metabolite is produced. To put it simply, cats lack the enzymes necessary for their liver to break down and clear acetaminophen safely. The toxic compound that remains produces free radicals, molecules that damage tissue. The liver, kidneys and red blood cells are the tissues most frequently damaged in animals with acetaminophen toxicity. Hemoglobin, the molecule within red blood cells that carries oxygen, is converted to methemoglobin, which cannot carry oxygen. This leads to a lack of oxygen in the body, showing signs such as rapid respiratory rate, brown or muddy-pink gums, and weakness.
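The mg/kg arithmetic above can be checked with a short Python sketch. The 50 mg/kg low-end threshold and the 165 mg half-tablet figure are the article's; the function name is ours, and this is illustrative only:

```python
# Sketch of the acetaminophen numbers quoted above: a toxic dose in cats
# can be as little as 50-100 mg/kg, and half a regular-strength tablet
# is about 165 mg. Illustrative only - not veterinary advice.

LB_PER_KG = 2.2046     # pounds per kilogram
TOXIC_MG_PER_KG = 50   # low-end toxic threshold quoted in the article

def is_potentially_toxic(dose_mg: float, cat_weight_lb: float) -> bool:
    """True if the dose meets or exceeds the low-end toxic threshold."""
    weight_kg = cat_weight_lb / LB_PER_KG
    return dose_mg / weight_kg >= TOXIC_MG_PER_KG

# Half a regular-strength tablet (165 mg) for a 7 lb cat:
# 7 lb is ~3.2 kg, so 165 mg works out to ~52 mg/kg - already toxic.
print(is_potentially_toxic(165, 7))  # True
```

The calculation makes the article's point concrete: a dose that is trivial for a human already exceeds the toxic threshold for an average cat.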
When the liver is damaged, there are signs of vomiting, jaundice and abdominal pain. Other signs that your cat can show include swelling of the face and paws, and death. Peak blood levels of the drug occur 10 to 60 minutes after ingestion of regular acetaminophen products, and within 60-120 minutes for extended-release products. If you suspect that your cat has ingested any acetaminophen, see your vet immediately. If the cat is presented within hours of ingestion, your veterinarian can induce vomiting and administer activated charcoal. They can then administer N-acetylcysteine, a drug that provides another substrate for the toxic metabolite to bind with. Prognosis can be good if your veterinarian treats soon after ingestion, so you must bring your cat in early if you suspect ingestion. Prognosis is poor if signs of liver damage or decreased oxygen are seen. So if you drop any of your medications, make sure you find them and pick them up. And don’t treat Miss Kitty’s aches and pains with Tylenol®. One lost pill or one dose could mean your pet’s life.

Potential household hazards

A few plants here and there can add the finishing touches to a room. Although it may have been ‘just what the room needed’, some of these additions can be dangerous for your four-legged family members. This article will touch on a few of the common plants that may be around the house and could cause problems. This is just a small list of some common toxic houseplants. If you suspect your animal has ingested part of a plant, call your veterinarian immediately. If your animal is having serious complications, like breathing difficulties, get them to a clinic as soon as possible. Important information for the veterinarian includes: what kind and what part of the plant was eaten (if possible), how much, and how long ago the plant was eaten.
A little bit of knowledge about the plants around us has the potential to prevent a fatal disaster. In the fall, it’s not uncommon for even the best-kept house to acquire an unwanted rodent visitor coming in from the cold. Many homeowners find that rodenticide is an easy solution to the problem. However, dogs and cats often find these morsels as tasty as the rodents do. Rodenticides kill by blocking the activation of clotting factors in the blood, causing the mouse or rat to bleed internally. This action is not specific to rodents, but happens in any animal that ingests the bait, so preventing pets from encountering rodenticide is very important. If you feel you must use rodenticide, place it in areas of the house that dogs and cats do not have access to, like the attic or basement. Be careful when putting bait behind the stove or under furniture; cats can squeeze themselves into remarkably small spaces. Finally, if your dogs or cats are allowed outside, check to see if neighbors are baiting their sheds or barns. Signs of rodenticide exposure can occur anywhere from one day to one week after ingestion. Signs include not eating, lameness, difficulty breathing, excessive bruising, and bleeding from body openings. If you suspect that your pet has ingested rodenticide, call your veterinarian immediately. It is extremely helpful if you can bring the package from the rodenticide used, as there are several different kinds. The prognosis for a poisoned pet can vary greatly depending on the amount ingested and the amount of time before treatment begins. The faster you call your veterinarian, the better chance your pet has for survival. Has your dog or cat just eaten something poisonous? Or was it safe? If you are unsure, read the product label if it is available. Almost all chemicals poisonous to humans are poisonous to our pets as well. Do not attempt to induce vomiting unless the manufacturer suggests it.
If you know that your pet drank antifreeze (ethylene glycol), contact your veterinarian, since this is an immediate emergency. There are several drugs that are safe for humans but deadly to our furry friends. These include ibuprofen (Advil, Motrin), naproxen (Aleve), and acetaminophen (Tylenol). Chocolate can be toxic in high doses (more than 4 ounces of milk chocolate, or 1 ounce of baking chocolate per 10 pounds of dog). Many plants are also toxic; however, contrary to popular belief, the poinsettia is not toxic. If you are in doubt if your animal has ingested a poison, contact your veterinarian, or call 1-888-232-8870 or 1-888-426-4435. This is a “pay” service, but they are very helpful and knowledgeable.
Chequamegon-Nicolet National Forest, Washburn District, December 29th, 2016

Northern Wisconsin remains a battleground for federally protected gray wolves, with two separate fights between hunting hounds and wolves occurring on December 23rd and 24th. While hunting bears with hounds ended in October, the practice of hound hunting in wolf territory continues, only now for coyote and bobcat. National media attention, in part due to Wolf Patrol’s monitoring of hound hunting and bear baiting, has been drawn to the conflict between Wisconsin’s wolves and bear hunting hounds, with local media and anti-wolf supporters quick to frame the conflict as the consequence of a growing wolf population. The evidence suggests, however, that while wolves act biologically, filling available habitat, it is human behavior that has led to the record number of bear hound deaths in 2016. Last week’s wolf conflict left one hunting hound dead and three injured. Thus, 2016 has seen more fights between free-roaming hunting hounds and wolves than in any previous year, with the final count being 41 hounds killed by wolves and 10 injured. To date, the only response from the DNR to these preventable conflicts is to continue to compensate hound hunters to the tune of $2,500 from the state’s Endangered Species Fund, and to continue to allow hound hunters to run their dogs in Wolf Caution Areas, including non-residents with no special license requirements. Since the successful natural return of wolves to Wisconsin in the 1980s (wolves were not reintroduced; they migrated from surviving populations in northern Minnesota), human hunters have had to come to terms with sharing prey populations with natural predators. As in the Yellowstone ecosystem (where wolves were reintroduced), previous to the wolf’s return hunters had a field day regulating prey populations that existed in a vacuum, without the predation that had existed naturally for centuries.
With the return of the wolf, deer and other prey animals have been affected, not only in the number of animals killed, but also through the return of more evasive behavior. That means that herds of deer peacefully grazing in the open where wolves live are a thing of the past. Deer in Wisconsin previously had only the nine-day deer season to contend with, but now they are hunted by their natural predator, the gray wolf, year-round. And despite claims that “wolves are eating all the deer”, the 2016 deer hunting season in northern Wisconsin saw a record harvest in many counties, reinforcing the scientifically supported finding that predators help, not hurt, prey populations. Wolf Patrol is responding to Wisconsin’s unregulated hound hunting practices in wolf territory by continuing to monitor both wolf and hound hunting activities in known Wolf Caution Areas throughout the winter, as part of the DNR’s volunteer carnivore tracking program. Last year, Wolf Patrol trackers conducted numerous tracking surveys that helped contribute to the current estimate of Wisconsin’s wolf population, which stands at 866-897 animals living in approximately 222 packs. Wolf Patrol conducts its survey in an area of the Chequamegon-Nicolet National Forest that is popular amongst bear baiters and hound hunters, which means it also has a history of multiple conflicts between gray wolves and hunting hounds. In the 2016 bear hound training and hunting seasons, six bear hounds were killed in the area that Wolf Patrol surveys. As part of a larger investigation, Wolf Patrol is also cataloging bear bait locations in our survey area. We believe, as published research suggests, that gray wolves are becoming conditioned to using bait sites as feeding locations, thus setting the stage for many more conflicts with hunting hounds or any dog that trespasses on their territory.
On December 27th, we began our first tracking survey of the season, concentrating on the Wolf Caution Areas within our tracking block, which are established once a depredation has occurred. Wolf Patrol’s trackers covered 35.5 miles of snow-covered roads in areas where we tracked numerous wolves last year. Only one lone wolf track was found, along with one bobcat trail and numerous deer crossings. No other human activity was detected. On December 28th, we carried out a second survey that covered 25.5 miles of the Chequamegon-Nicolet National Forest, and this time recorded multiple coyote tracks in addition to deer and rabbit trails. We also followed the trail of three hunting hounds, which were dropped in the heart of a Wolf Caution Area on U.S. Forest Service road 251, tracking them along the road for 12 miles until they were recovered by hound hunters. No recent wolf tracks were recorded. Currently, hound hunting for coyote and bobcat is allowed in Wisconsin, with a year-round open season for coyote and a bobcat season running from mid-October until the end of January. Most coyote and bobcat hound hunting occurs after the first snowfall, when tracking animals becomes much easier. While the DNR’s large carnivore tracking program is focused on recording the number of gray wolves in the state, Wolf Patrol is also recording hound hunting activity in our tracking block, in part because we suspect illegal killing of wolves is taking place in Wolf Caution Areas throughout northern Wisconsin. We believe this because many hound hunters, especially those who have lost dogs to wolves, have publicly stated their intention to kill every wolf they encounter. On December 14th, I was contacted by a bear hunter who had a hound killed by wolves on September 17th, and others injured, outside of Minong, Wisconsin in nearby Washburn County. I was sent a photo of his dead dog, and he wanted to let me know that he wished my own pet dog would also be killed by wolves.
Wolf Patrol began investigating the depredation incident as well as the bear hunter, and this is what we found. On September 17, 2016, Koty Barth was running his pack of four Plott hounds with his father on a bear’s trail about two miles east of Minong, near Frog Creek, when, according to the hound hunters, the dogs were ambushed by a pack of wolves. The wolves killed one hound and injured two others, which were left with bite marks “all over their backs.” The next day, Barth was on Facebook making a public proclamation that he intended to kill the wolves responsible for the depredation. In addition, another hound hunter, Benji Schommer, informed Barth of where he had recently seen wolf sign, and acknowledged that wolves had been visiting his bear baits on multiple occasions. On September 18th, Barth also changed his Facebook profile picture to a graphic that depicts a wolf in the crosshairs of a firearm with the words, “One Shot, One Kill.” Threats like these aren’t being made by wolf haters far away; they are being made by northern Wisconsin residents who regularly run hounds in wolf territory throughout the year, and who are adept at taking advantage of the inability of federal and state wildlife officers to patrol all the areas used by hound hunters in Wisconsin. These threats are being documented and reported to the federal agency responsible for protecting wolves, the U.S. Fish & Wildlife Service, but no increased enforcement has been reported, nor has there been any communication that these threats are being taken seriously. It appears that federal authorities simply want to wash their hands of the management of gray wolves, now that wolves have successfully recolonized Wisconsin, and are hoping legal battles will soon return wolves to state management by the DNR. Wolf Patrol believes that the Wisconsin DNR is not capable of responsibly managing the wolf population of Wisconsin.
We support the DNR conservation officers responsible for the enforcement of wildlife laws, and the many good biologists responsible for providing an accurate assessment of gray wolf populations in the state, but the agency’s lack of regulations governing bear baiting and hound hunting is creating conditions that will continue to lead to many more fights between hunting hounds and wolves. Wolf Patrol believes it is the intention of the Wisconsin DNR to address the hound/wolf conflict once state management of wolves is returned, not only by reinstating the hunting, trapping and hounding of gray wolves, but also by increasing the yearly quota in an attempt to drive down wolf numbers. The DNR’s hands are tied by legislative action that mandates it enact a wolf hunt, including allowing the use of dogs. In addition, both the DNR’s Wolf & Bear Advisory Councils are filled with members who have publicly stated that unless a wolf hunt is allowed, wolves will overrun the state and destroy the deer population. There has also been talk that unless state control of wolves is given to Wisconsin, frustrated anti-wolf advocates will take matters into their own hands and increase the illegal poisoning and killing of federally protected wolves. Wolf Patrol is calling on wolf advocates, not just from Wisconsin but from all over the world, to contact the U.S. Fish & Wildlife Service and respectfully request that they address the DNR’s hunting practices that are creating a record number of conflicts with federally protected gray wolves, and that practices such as bear baiting, hound training and hound hunting, which constitute harassment of an endangered species, be banned on all federal public lands. Wolf Patrol will be continuing its tracking surveys and monitoring of hound hunting activity in Wisconsin throughout the winter, and we want to thank all of our supporters who helped make 2016 another year in which we were able to provide additional protection to the returning wolves of Wisconsin.
Let’s make 2017 the last year bear baiting, hound training and hound hunting are legal on our national forest lands, and let’s continue to fight for the return of wolves to suitable habitat, not only in Wisconsin but across the entirety of their historic range. To send your comment to the U.S. Fish & Wildlife Service, please visit: “I would like to know what is being done in the state of Wisconsin to address the growing number of death threats being made against gray wolves by hound hunters, especially those who have had dogs killed by wolves due to the state’s liberal hunting policies that allow dogs to be run throughout the state, including by non-residents without any permit requirements. The growing number of hound hunters flocking to Wisconsin now that the state’s Department of Natural Resources does not require permits for bear baiting and/or hound training means many more fights between wolves and hunting hounds will continue. I believe such hunting practices constitute the harassment of a federally protected endangered species, and should not be allowed on federal lands.” Please be respectful! This is the agency responsible for gray wolves, not the agency responsible for Wisconsin’s unregulated bear hunting practices.
“The greatest threat to our planet is the belief that someone else will save it.” – Robert Swan, Author

Many people assume that someone else will take care of the forests. It’s with this mindset that nothing gets done. Forests cover about 30.8% of the world’s land, totaling 10.03 billion acres, and yet our forests are in trouble. We are losing them, and continue to do so, because of the way humans live. It is critical to call out that forests are not just many trees patched together on a piece of land. Forests represent an ecosystem where plants, animals, and micro-organisms coexist. About 80% of the world’s land animals and plants live in forests. Forests also make a massive contribution to the environment. The trees and plants absorb carbon dioxide, which would otherwise contribute to climate change. Plants, checked. Animals, checked. Environment, checked. But hey, what about humans? Billions of people depend on forests for their livelihood. They sell food, timber, wood, medicinal plants, construction material, and several other things. More than 2 billion people use wood-based energy for cooking. Unfortunately, despite all this, we don’t do enough for forests. A part of the problem is that we assume it to be someone else’s problem to solve. As you read this, you might be thinking that you don’t even live near a forest; what can you do? Well, there is plenty you can do. It is about your life’s choices and taking a stand. In this article, we will explore what you can do to help our forests. But first, it is critical to learn about the biggest threat to our forests: deforestation.

Photo by Lauren McConachie

Deforestation refers to the cutting, clearing, and permanent removal of a large number of trees from a forest. Deforestation is a constant process and is happening even now. While you read this, some trees will be eliminated. And with each forest we lose, several species of plants, animals, and insects vanish. But why is deforestation happening?
You cannot always put your finger on the one thing that causes deforestation. There may be a single major cause, but often a few causes work in tandem. The causes of deforestation can be natural or human-driven. The natural causes include hurricanes, climate change, floods, and fires. They are beyond our control. The human-driven causes fall into many categories. A recent study revealed the following causes of deforestation: If you look carefully, there is an underlying theme in the causes of deforestation. The world’s population is growing, and we need resources to meet its needs. Unfortunately, our forests bear the brunt of this growing need. As it stands, it seems to be about choosing between destroying forests and keeping people hungry. Before we look at how to tackle this situation, let’s first understand what happens when we destroy forests. One of the most telling effects of deforestation is on biodiversity. About 70% of land animal and plant species live in forests. Their survival depends entirely on the well-being of the forest. Plants and animals aren’t the only ones who lose their homes. Forests are home to Indigenous Communities and other human inhabitants as well. These communities depend on the forest to sustain their way of life, including hunting animals and gathering food. When exposed, these communities face additional risks because they lack immunity to the outside world’s diseases. As mentioned earlier, plants and trees take in carbon dioxide and release oxygen. As a result, they store large quantities of carbon dioxide. When trees are cut, this carbon dioxide is released into the environment (a greenhouse emission), which then adds to global warming. So the forests are helping keep climate change in check. Trees play a critical role in regulating the water cycle; they absorb water from the soil, use what they need, and release the excess water back into the air.
The absence of trees results in less water in the air, and thus less water reaching back to the soil. If the soil does not get water, it becomes dry and loses the fertility needed to grow crops. Trees help the land retain water and topsoil, which provide rich nutrients to sustain forest life. When trees are removed, the soil erodes and washes away. The barren land that is left behind is then susceptible to flooding, specifically in coastal regions. Forest wood gets exported the world over. Several other products such as oils, nuts, and resins get shipped too. Many medicines find their origins in forests. Because of deforestation, we stand to lose everything. We learned that it eventually comes down to choosing between protecting forests and feeding the hungry. Indeed, we cannot choose one over the other. But still, there are ways to be smart about the situation. And they start with YOU. At a high level, you should spread awareness about the main driver of deforestation: agriculture. Encourage everyone to move away from habits that result in inappropriate agricultural practices, which drive large-scale conversion of forests to agricultural production. One way to begin spreading awareness is by sharing this article on social media. Let’s now talk about a few specific steps that you can take to protect the forests. Whether in the press or on social media, support the voices that speak for indigenous peoples’ rights. According to estimates from the World Bank, the population of Indigenous People is about 476 million, spread across 90 countries. Indigenous communities have a natural right to call the forests their home. When forests are cleared, these communities are forced to move either to a new location in the forest or outside it. In a new place within the forest, they start putting strain on a previously unused forest area. Outside the forest, they are forced to find ways of earning money to replace the food and shelter they lost.
Imagine that you have to suddenly pack up and move to a new location or, for lack of a better example, move to a forest (which in theory is equivalent to indigenous people moving to cities). How would you feel? It is critical to call out that indigenous people are the real protectors of forests. They hold vital knowledge on how to conserve the forests and use their resources wisely. We need to get behind these protectors whenever there is an opportunity. And be vocal about it! “When you put the whole picture together, recycling is the right thing to do.” – Pam Shoemaker, Author Recycling ensures that instead of becoming waste and a burden on nature, a product is re-manufactured and given a new life. As a result, all products made from recycled material help save natural resources (including raw material procured from forests) that would have gone into making a new product. You have the right to choose what you use. So, use products that are made from recycled material. The good news is that it’s not hard to find these products anymore. From products made from recycled paper to products made from recycled aluminum, everything has found its way into the markets. You can slowly introduce those products into your life and preach to others what you practice. Our shirts are made from organic cotton and recycled materials. The other end of the deal is that you must avoid single-use products as much as possible. Any product that you throw away after using once falls under this category. A few examples that immediately come to mind are plastic bags, water bottles, disposables, straws, needles, and toilet paper. None of these products are sustainable. But the good part is that many of them are now available in sustainable versions. Switching to sustainable items can make a big difference for the environment. Psst. Do you know about the power of consumers? Manufacturers adjust to what customers demand.
If there is a demand for products made from recycled material, they will be produced in large numbers. The flip side is that you should move away from brands that drive deforestation. Period! Encourage others to do so as well. Did you know that many organizations pledged to stop deforestation by 2020? Most of them have failed or are struggling to do so. Many prominent retailers, fashion brands, and manufacturers appear on this list. By continuing to buy their products, we are directly sponsoring the deforestation that they cause. It’s time to hold such brands accountable. Our message has to be loud and clear: “If you kill OUR forests, we don't buy YOUR products.” If you are interested in finding out more about these organizations, look up “fast fashion” on the Internet. There is enough information publicly available. You must also know that large areas of forest are being cleared to make space for producing palm oil, soy, beef, leather, timber, and paper. Several producers of these commodities are linked with deforestation, and we consume these commodities in everything. Think about the beef in burgers, the palm oil in biscuits, and the soy in bread. For now, we, the consumers, can take a stand! We must pressure brands to source their products from deforestation-free regions. There is a battle that you need to fight with your fork: you must choose what kind of food comes to your plate. Stay away from meat and dairy and eat plants to save forests. Yes, you read that right. Do you recall the leading causes of deforestation? Clearing space for agriculture and raising cattle. Currently, large areas of land are used for livestock grazing. There are known instances of more than 80% of a forest being cleared away to make space for cattle ranching. As the demand for meat increases with the growing population, those 'large' areas will only become 'larger'. Here’s an astonishing fact: we have already identified that soy production is leading to deforestation. Did you know that most soy is produced for feeding animals? 
You can connect the dots now. On the other hand, plant-based foods require significantly less space to produce. So, the alternative is clear: switch over to a plant-based diet and reduce the demand for meat. That is the only way to meet the growing population's needs and still keep our forests intact. According to the Food and Agriculture Organization of the United Nations (FAO), livestock production is one of the leading causes of the environment's most pressing problems. Well, livestock production is only the symptom; the cause is the human appetite for meat. Vote for local, state, and national candidates who put the environment front and center in their manifestos. Governments often fail to check large corporations and end up giving them a free pass, which results in large-scale deforestation in these corporations' supply chains. The need of the hour is for governments to work with large corporations and commit to saving forests. Governments must also put strict compliance regulations in place, and noncompliant manufacturers must face the harshest of penalties. While large corporations pressure governments to ease environmental regulations, we must pressure governments to tighten their laws. After reading all this, do we need to discuss the importance of planting trees? Just plant them whenever you have the opportunity. You don’t even have to physically plant trees; instead, you can invest in organizations that plant trees on your behalf. clikkacastello plants ten trees for every purchase you make. Creating ideas is easy. Creating forests is not. We need to slow down or completely stop deforestation. As this article shows, forests bring great benefits, and their absence brings greater consequences. Whether you adopt a plant-based diet or choose to plant trees, you can make a positive impact on our Earth. We challenge you to choose any of these options and make a positive impact today. In what condition will you leave our forests for future generations? 
Was it Robert Peary or his largely forgotten partner, Matthew Henson, who first reached the Geographic North Pole? For over a century, polar historians have generally agreed that American Navy engineer Robert Peary was the first person to reach the Geographic North Pole. But studies made over the last several decades assert that it was actually Peary’s African American associate, Matthew Henson, who got there ahead of him – ahead of a man so determined that he had pressed on despite losing eight of his toes to frostbite. Peary’s much-studied and much-contested 1909 expedition to the North Pole was the last of eight and the only one to achieve its ultimate objective. And though his claim to have made it to the pole first (or at all) was disputed from the start, it was only more recently that polar scholars began sliding Henson into his place. Some of these scholars, such as British explorer Wally Herbert, science journalist John Noble Wilford, and City Journal editor John Tierney, focused chiefly on the veracity of Peary’s claim to have reached the pole, citing the lack of essential data in his notebook. But some subsequent studies indicate Peary knowingly stole the credit from Henson. These studies, which include a 2014 book entitled The Adventure Gap by James Mills and a National Geographic article by the same author, put stock in Henson’s account over Peary’s. According to Henson, he overshot the area Peary later identified as the North Pole while on a scouting mission during the final stage of their journey. When he and Peary came back to that area and verified it was their goal, they saw Henson’s footprints already there. Nonetheless, Peary returned home to take full credit for being the first man to the North Pole. And while he was awarded medals, promotions, and a generous Navy pension, Henson faded into relative obscurity. In fact, he would only receive the recognition he deserved in the final years of his life, more than three decades after their partnership ended. 
As outrageous as this outcome is to our modern sensibilities, it was unfortunately commonplace throughout the heyday of polar exploration and discovery. To better understand the historical context of the Peary-Henson expedition, as well as the complicated dynamic between the two men, it helps to look further back. Image by National Archives at College Park Matthew Alexander Henson was born on August 8, 1866, in Charles County, Maryland, just one year after the end of the Civil War and the enforcement of the Emancipation Proclamation. He was orphaned as a child and went to sea at the age of twelve, becoming a capable cabin boy aboard the three-masted sailing ship Katie Hines. He would remain on the ship for the next six years, sharpening his sailing skills, receiving an education from the captain, and visiting such distant places as North Africa, the Black Sea, and various regions of Asia. When his captain died in 1887, Henson took a job as a clerk in a fur store in Washington, DC. It was here that he met Robert Edwin Peary. Impressed by Henson’s nautical knowledge and sense of adventure, Peary immediately hired him as a personal valet for his 1888 Nicaragua expedition, bringing Henson into the Navy Corps of Civil Engineers. Henson’s core duties involved mapping the Nicaraguan jungle with Peary, who was trying to survey the route for a canal that could connect the Pacific Ocean with the Atlantic. But this canal was never built, and after two years of scouring the Central American rainforests, the partnership between Peary and Henson temporarily ended. The moment Peary got financing for another expedition, however, Henson was his first hire. But the new expedition would take place in a much different part of the world than Nicaragua, venturing into the farthest sweeps of the Arctic with the goal of reaching the Geographic North Pole. 
Spanning from 1891 to 1909, this multi-stage mission would represent the pinnacle of their careers and remain a matter of contention for years to come. Image by Bain News Service A trend developed over the course of their eight Arctic expeditions that saw Henson leading in the field while Peary led in public. Unlike Peary, Henson was fluent in the language of their Inuit associates, among whom he was known as “Matthew the Kind One.” Also unlike Peary, Henson was nearly as good as the Inuit at building, maintaining, and driving the company’s sledges, their primary means of travel over the Arctic pack ice. Henson learned and adopted various Inuit skills in order to manage the harsh Arctic conditions, becoming a skilled dog handler, fisherman, and hunter. Eventually, he came to train even Peary’s most experienced crew members, and Peary himself would later admit that a great deal of his expeditions’ overall success was due to Henson. After seven previous attempts to reach the North Pole, nearly all of which got them a little closer to their goal, the final push came when both men were well into their forties. The strain of the task ahead and the toll their previous expeditions had taken on them compelled Henson and Peary to agree that this attempt would be their last. Image by Frederick Cook & The Smithsonian They sailed the Roosevelt out of New York Harbor on July 6, 1908 with a carefully hand-picked team. By September 5, 1908, they had arrived at Cape Sheridan, after which they spent the long, dark Arctic winter storing meat supplies while the wives of their Inuit companions sewed clothing. In February, they moved to their forward base camp at Cape Columbia. The official trek to the pole began on March 1, 1909, when Henson led the first team of sledges across the ice. Over the next five weeks, the race was on. To say the explorers met with brutal conditions is an understatement. 
Temperatures frequently dropped to 65 degrees Fahrenheit below zero (-54°C), and the pack ice below their sledges drifted and cracked, creating treacherous patches of open water called leads that threatened to block their way ahead and behind. Most of the Arctic, we must remember, is simply sea water covered with moving ice, and the North Pole lies right in the center of it. Henson and Peary were essentially sledding across miles of black, pitiless ocean. Henson’s account of their final trek is detailed and unambiguous. With Peary and four Inuit associates named Seegloo, Ootah, Ooqueah, and Egingwah, he drove their sledges at a grueling pace in stretches of 12 to 14 hours per day. Afraid that leads might open and trap them on the ice, they moved quickly, navigating by dead reckoning and sextant. On the evening of April 5th, after more than 170 miles (275 km) of backbreaking travel, they stopped to build their igloos amid a deep fog. According to his summary, Henson drove the lead sledge that day and had scouted far ahead of Peary. But as the team lay down to sleep, the fog was too thick for them to reckon their location. They did not know that they – that is, Henson – had already reached the Geographic North Pole, and had in fact gone past it. The following morning, April 6th, Peary rose early and, without waking his partner as was their usual custom, hurried out of camp with at least one of their Inuit companions, determined to reach the pole first. When Henson woke up, he was heartbroken. But he soon caught up with Peary, and in a newspaper article later reported, “I was in the lead that had overshot the mark by a couple of miles…and I could see that my footprints were the first at the spot.” That spot was a block of ice 413 nautical miles off the coast of Greenland. Peary, who had been so tense leading up to that moment that he had barely spoken to Henson, reportedly all but disowned him after their objective was reached. 
Saddened that twenty-two years of friendship could so quickly evaporate, Henson was even more crushed to return home to find Peary receiving all the credit for their joint effort. While Peary was heralded as a hero, Henson was relegated to the role of loyal sidekick. Peary received a promotion to rear admiral, a comfortable pension, and numerous recognitions and awards. Henson, on the other hand, was all but forgotten, receiving a minor post as a clerk in the US Customs House in New York City on President Taft’s recommendation and giving occasional small lectures about his experiences. Image by Bain News Service Scholars have disputed Peary’s claims to have reached the North Pole for a few core reasons. First, nobody who accompanied him during the final stage of his expedition was trained in navigation, so no one could confirm his claims to have reached the pole. Second, his reports as to the speeds and distances accomplished after his support group doubled back to camp were nearly three times what he had achieved until that point, completely defying belief. Third, Peary’s account of a direct-line trek to the pole is contradicted by Henson’s account of numerous detours around open leads and pressure ridges. In his book, Ninety Degrees North: The Quest for the North Pole, author Fergus Fleming writes of Peary’s towering egotism, his need to triumph over those around him, and his unwillingness to share the credit of his expedition with a black man – even one who had saved his life on a previous expedition and who had, despite Peary’s infamous arrogance, remained by his side while many of his other associates had abandoned him. It was not until after Henson’s retirement as a customs clerk, a post he held for 23 years, that his long-overdue recognition arrived. In 1944 he was awarded the Congressional Silver Medal, the same medal Peary had received more than thirty years earlier. 
Henson had told his own story in his 1912 memoir, A Negro Explorer at the North Pole, which includes a foreword by Booker T. Washington. Image by New York World Telegram & The Sun Newspaper In 1937, the Explorers Club of New York made Henson an honorary member. And in 1954, he was invited to the White House by President Eisenhower to receive a special commendation for his work as an explorer. Even after his death, Henson’s recognitions continued. In 1996 an oceanographic ship was named the USNS Henson, and in 2000 the National Geographic Society posthumously awarded him the Hubbard Medal, its most prestigious award. Any guesses as to who first won that award when it was created in 1906? Matthew Henson died on March 9th, 1955, in the Bronx. On April 6th, 1988, exactly 79 years after Henson reached the Geographic North Pole, his remains were moved next to Peary’s at Arlington National Cemetery in Washington, D.C. Despite the military honors of the event, it seems fair to wonder how much Henson would have agreed with this decision. But whoever technically reached the North Pole first, Henson and Peary were both part of the same expedition. If credit belongs to either of them, it belongs to both of them, and it belongs at least as much to their Inuit companions - none of whom, by the way, are said to have received any formal recognition for their hard work and courage. Unfortunately, many of the historic expeditions ended like this. Both the fierce competitiveness of these endeavors and the ethnic dynamics of the era in which they occurred largely precluded any fair division of credit. Beyond words of praise or helping their dutiful sidekicks secure a modest job, most white men simply could not tolerate sharing any substantial portion of their glory with people of color. Tragic as that is, we are happy to recognize such overlooked explorers in our own modest way. 
All those involved in the race to reach the Geographic North Pole overcame enormous obstacles, both external and internal, to achieve their objective. For this and for the inspiration their accomplishments still give to the world of Arctic travel, we feel these brave explorers deserve our remembrance, recognition, and respect to this very day. Maybe some just a little more than others. Main image: © Unknown author - This image is available from the United States Library of Congress's Prints and Photographs division under the digital ID cph.3g07503
The Mexican dwarf crayfish (Cambarellus patzcuarensis) is one of the few crayfish you can keep in a community tank. Why? Because crayfish in general are “silent killers”, “cutters” and diggers: they cut tank plants, dig all around the tank bottom and kill fishes. The Mexican dwarf crayfish is not like this, but for one simple reason – it is a very small creature. Habitation in the wild The Mexican dwarf crayfish belongs to the family Cambaridae, genus Cambarellus. Species of this genus come from both Mexico and the USA. Basically it dwells in streams and small rivers, though it can also be encountered in ponds and lakes. It prefers places with slow water flow or standing bodies of water. The Mexican dwarf crayfish's body is covered with a strong chitinous carapace of bright orange or even red color, due to which it is also known as the orange crayfish. The division between the large cephalothorax and the pleon (abdomen) is very pronounced. This crayfish has 19 pairs of limbs, and each pair has its own task. It has 5 pairs of walking legs, which give its order, Decapoda (“ten-footed”), its name. However, the first pair is only loosely connected with walking, since these legs carry the chelae (claws) we all know well. The chelae serve as an additional prop for the Mexican dwarf crayfish, help it collect food from the bottom, attack others and protect itself. It uses its antennae and antennules to navigate. It also has 3 pairs of maxillipeds that carry food to its mouth. The clawed legs can only nominally be counted as walking legs: the crayfish hardly uses them to walk, and the second pair of walking legs bears small chelae as well. There are 4 pairs of swimmerets (pleopods) on the crayfish's pleon, and two pairs of uropods plus the telson on its tail. It walks along the bottom on its legs, but in case of emergency it abruptly flexes its pleon together with the telson and darts backwards very fast. We should mention that its path of motion is unpredictable, which means it is not that easy to catch a crayfish in a tank. 
Keeping in a tank

| Scientific Name | Cambarellus patzcuarensis |
| --- | --- |
| Common Name | Dwarf mexican crayfish, Patzcuaro, mexican dwarf lobster, orange crayfish, dwarf lobster, mexican lobster, mini crayfish |
| Tank size | 5 gallons (20 liters) and more |
| Size | up to 2 inches (5 cm) |

Despite its small size, this is a territorial creature. To avoid undesirable rivalry it's better not to keep more than four specimens in a tank of 100 liters capacity. The tank bottom can contain any shelters – tank plants, snags, stones and others – the more, the better. The thing is that Mexican dwarf crayfish need places to hide, which is especially important when a crayfish is shedding and becomes totally helpless, or when a female is carrying eggs. Anything will do as a shelter: the crayfish can hide right under a tank plant leaf, in a crack in a snag, under a sprayer or between stones. The appropriate water temperature is 77-81 °F (25-27 °C). Tank water pH should be 7 or higher; the optimal pH value is 7.5-8. The crayfish sheds its skin from time to time, and to grow a new exoskeleton it needs calcium. Correspondingly, the carbonate hardness of the tank water (kH) should be at least 4. It can live in soft and acidic water, though not for long. If the tap water in your area has such parameters and you are going to keep crayfish in a tank, you can settle the matter by putting a piece of limestone or some coral chips on the tank bottom. These creatures are extremely sensitive to ammonia and nitrites, which is why it's not a good idea to put them into a freshly filled tank. Another crucial parameter is the amount of oxygen in the tank water. From all of the above we can conclude that good filtration and aeration are important, as is a weekly water renewal. Firstly, by doing this we decrease the nitrate level in the tank water; secondly, Mexican dwarf crayfish usually shed their skin and mate when the water is renewed. As a rule, your crayfish sheds its skin 2-3 days after a water change. 
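The water parameters above lend themselves to a quick sanity check. Below is a minimal Python sketch of such a check; the function names and the exact thresholds are illustrative (taken from the ranges quoted in this guide), not a published standard, so adjust them to your own setup:

```python
# Sanity-check helper for crayfish tank water readings.
# Thresholds follow the ranges quoted in this guide for
# Cambarellus patzcuarensis; treat them as illustrative.

def c_to_f(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

def check_water(temp_c: float, ph: float, kh: float) -> list[str]:
    """Return a list of warnings; an empty list means all readings
    fall inside the recommended ranges."""
    warnings = []
    if not 25 <= temp_c <= 27:
        warnings.append(
            f"temperature {temp_c}°C ({c_to_f(temp_c):.1f}°F) is outside 25-27°C"
        )
    if not 7.5 <= ph <= 8.0:
        warnings.append(f"pH {ph} is outside the optimal 7.5-8.0 range")
    if kh < 4:
        warnings.append(f"carbonate hardness {kh} dKH is below the minimum of 4")
    return warnings

print(check_water(26, 7.8, 5))  # a reading inside all three ranges
```

A reading of 26 °C, pH 7.8, and 5 dKH passes all three checks, while a cooler, softer, more acidic tank would produce one warning per violated range.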
Also, you shouldn't use any copper-containing chemicals – copper is very harmful to any crayfish. Can one keep Mexican dwarf crayfish in a community tank together with other fishes? Yes, you can, but you should bear in mind several peculiarities. The Mexican dwarf crayfish itself isn't large. Its chelae are small and it can hardly hurt anyone with them. The scariest thing this crayfish can do is raise its chelae menacingly, but as a rule, it runs away at any sign of danger. Since the crayfish prefers a pH value higher than 7, platies and mollies can be perfect tank mates, as well as other fishes that feel comfortable at a pH of about 7.5. There shouldn't be any large and aggressive cichlids in the tank, because they'll simply eat any crayfish in it. Therefore, you can see that the range of tank mates is quite wide. We'd also like to mention that it's very seldom that you see a Mexican dwarf crayfish walking around the tank bottom. They are good at hiding. Sometimes you may not see them for the whole day, and sometimes they all come out in the evening. It is quite possible that Mexican dwarf crayfish are more active in the evening and night hours. So, you shouldn't worry if you haven't seen your crayfish for several days, especially if there are a lot of places to hide in your tank. The dwarf Mexican crayfish feeds on any organic matter, for example, tank plant remnants. Things are even easier if you keep crayfish in a community tank: any fish food will do for them. The main thing is that the food reaches the tank bottom. Tablets and pellets for catfish are the best in this case. Bloodworms and brine shrimp will also do. If you want, you can offer some delicacy, such as a piece of cucumber, squash, or carrot. Don't forget that the crayfish is a scavenger. So, what if some fish dies while you are away? Here comes the crayfish! It will eat the body in a very short time and won't let it rot and spoil the tank water. 
All crustaceans shed their skin from time to time, and the dwarf crayfish isn't an exception in this respect. Replacing its old exoskeleton with a new one is the only way for a crayfish to grow. Juveniles shed their skin rather often – once every 7-10 days. Adult specimens shed more seldom. It is crucial for a crayfish to have a place to hide while shedding. The crayfish stays in its shelter till its new chitinous carapace hardens; a crayfish without its carapace is completely helpless. Besides, shedding is the only way for it to regrow lost limbs. Crayfish lose their legs under various circumstances. Very often the chelae suffer the most, since they are the largest limbs. A chela can get stuck somewhere, considering where crayfish like hiding. A crayfish may also lose limbs while shedding – if it can't pull a limb out of its old carapace, it just cuts or bites it off. In both cases this is called autotomy. However, it is very rare for a crayfish to harm one of its own kind, not to mention kill one while fighting (though fights happen sometimes). So, if you have seen a Mexican dwarf crayfish with one chela – don't worry, a new one will grow during the next shedding period. Living with one chela will not affect its quality of life in any way. Gender differences: male vs female Crayfish have separate sexes. Males are smaller than females. They have strong chelae and a copulatory organ (a modified pair of limbs) on their abdomen. The Mexican dwarf crayfish's lifespan isn't long – about 1.5-2 years – though there is some information that these species can live longer. Therefore, if you like crayfish, you'll have to become familiar with their breeding process. Breeding, as a rule, occurs without any participation of the aquarist, except in cases when crayfish are kept in a community tank. Juveniles are small and they shed their skin very often. 
So, even if their tank mates are the most peaceful fish you can find, the juveniles still have no chance to survive. Mating occurs right after the shedding process. After this you will see eggs on the abdomen of the female. The eggs are quite large, not transparent, and easily seen. But the problem here is that the female tries not to leave her shelter without necessity. That's why, if you don't have a species-only crayfish tank, it's better to prepare a separate tank for the juveniles. Keep in mind that adults are indifferent to both their own and anyone else's offspring; they do not take care of their juveniles at all.
We often turn to vitamin C whenever we feel we're coming down with the flu or a cold, because we want to combat the respiratory problem as soon as possible. This is understandable, since vitamin C is known to support the immune system. Since we can't produce or even store vitamin C on our own, we need to get it from outside sources. Vitamin C is vital for various bodily functions such as bone health, immune function, growth, and development, just to name a few. Without it, we wouldn't be able to make collagen, which is essential to our skin. However, the type of vitamin C that we often take doesn't fully get into our system, which is why you may want to check out the best liposomal vitamin C instead. Since vitamin C is a water-soluble vitamin, it easily dissolves in water, with any excess excreted by the body. Most of us take vitamin C in the form of pills, tablets, or capsules. The problem with these options is that we don't really get the full amount of the vitamin, since it must travel through our digestive system, where much of it is broken down before it can be absorbed and often ends up as waste. With the best liposomal vitamin C, on the other hand, you are far more likely to reap the benefits of this vitamin for your overall health. Who Needs Liposomal Vitamin C? For those whose immune system is compromised, taking the best liposomal vitamin C may be just what you need. Vitamin C in this form can be absorbed better by the body because it is delivered to your bloodstream rather than broken down in your digestive system. With liposomal vitamin C, you may be able to boost your immune system so that you can better fight off various health issues such as flu, colds, and coughs. You can also make use of the best liposomal vitamin C if you are constantly feeling low on energy. There are many factors that can lead to fatigue, such as stress, poor diet, lack of sleep, and even underlying illness. 
Taking liposomal vitamin C can be a huge help in making you feel energized, since it helps support other bodily functions too. The best liposomal vitamin C is also useful in keeping your hair, skin, and nails in excellent condition because of the nutrients that your body can absorb quickly. Unlike tablets and pills, liposomal vitamin C is better absorbed by the cells and organs of the body without the need to expend energy in the process. This means that you are getting more of the nutritional benefits of vitamin C, because your body absorbs more of it compared to regular vitamin C products. This is what you need to take into consideration if you are looking for a way to get more vitamin C into your system. What is Liposomal Vitamin C? What exactly is the best liposomal vitamin C all about, anyway? You've probably been taking vitamin C supplements in capsules, pills, and tablets, but are you really getting enough of this vitamin into your body? As mentioned before, vitamin C is water soluble. This means that it dissolves immediately in water. Now imagine the vitamin C supplements that you are taking and how far they must travel before they are absorbed by the body. Truth be told, when you take vitamin C in tablet, pill, or capsule form, most of the nutrients it contains are broken down and excreted from the body in the form of urine. The best liposomal vitamin C is the solution you can opt for if you want to ensure that you are getting a healthy dose of this vitamin into your system without having to worry about eliminating most of it from your body. This is because this type of vitamin C makes use of liposomal encapsulation, so that it is absorbed easily by the cells and even organs and nothing is wasted. 
You will find that you are getting a much higher dosage in this form, which is important because it can help improve your immune system, digestion, and other bodily functions compared to other forms of vitamin C supplements. What Are The Benefits of Liposomal Vitamin C? What exactly can you get out of the best liposomal vitamin C? You're probably wondering why you need to switch to this type of supplement for your vitamin C needs when there are already vitamins in tablets, pills, and capsules. Well, as mentioned before, this type of supplement is better absorbed into the body, since it is designed using liposomal encapsulation technology, which allows the organs and cells to take it up directly. This way, you will get these benefits that are linked to liposomal vitamin C.
- Skips the digestive process – Unlike your usual tablets and pills, which must go through your digestive system, the best liposomal vitamin C bypasses this stage for better bioavailability of the vitamin. This means that your body will be able to absorb more of this vitamin C for better health.
- Prevents stomach problems – Another benefit of taking liposomal vitamin C is that it won't disrupt your digestion, unlike the other vitamin C supplements that we take. Since this type of supplement doesn't have to go through the digestive process, there are no stomach pains, cramps, or even gas to worry about.
- Improves the immune system – The best liposomal vitamin C is known for its ability to support the immune system so that you will be able to fight off common infections such as colds, flu, and the like. Getting sufficient amounts of vitamin C in our system can enhance immune function, so you won't be prone to sickness all the time.
- Fights free radicals – Another advantage of taking liposomal vitamin C is that it can also help stop free radicals from causing damage to your system. 
Free radicals are known to damage skin cells, triggering premature aging and other imbalances in our body. Vitamin C, on the other hand, is an antioxidant, which neutralizes free radicals so your body can stay in good condition.
- Delivers nutrients to cells – The best liposomal vitamin C is absorbed easily by cells and organs because it is designed to do so. This ensures that we are getting the most out of this supplement, as opposed to taking vitamin C supplements in capsule, tablet, or pill form. The cells, when they get this vitamin, will function better, thus boosting our overall health.
- Protects against diseases – Since our body cannot produce or store vitamin C, we have to rely on outside sources to help us out. Liposomal vitamin C is beneficial because it gives us better protection against various diseases, ranging from milder ones such as colds and coughs to more severe ones.
What Are The Precautions of Liposomal Vitamin C? Now that you have an idea of the benefits to be gained from taking the best liposomal vitamin C, you're probably wondering if there are any precautions that you need to consider. This is understandable, especially when you are looking for ways to get the right amount of vitamin C into your system. Well, if you are looking to use one, you should take the time to read more about the product before getting it. This way, you will know whether other ingredients were used during the manufacturing process and whether there are any side effects that you should know of. You will be able to choose the product better when you have these vital details on hand. You also need to know the correct dosage for the best liposomal vitamin C, to ensure that you do not overdose yourself. Too much of a certain type of vitamin in your body is not good for you. 
Even if you think that taking extra vitamin C can protect you from health issues, you are putting yourself in harm’s way when you ingest too much. Ask your doctor for the recommended dosage.

What Are The Best Liposomal Vitamin C Supplements for Better Health?

You’re probably looking for the best liposomal vitamin C to use instead of your regular tablets, but with all the products available, how will you know which one to get? We all need this vitamin for our immune system, metabolism, and other bodily functions, but choosing regular supplements is not enough. What you need is to get your hands on the best type of liposomal vitamin C so you will reap its benefits. If you want to shorten your list of products to choose from, we have put together a list of choices for you to consider.

When it comes to the best liposomal vitamin C, you should check out Zenwise Liposomal Vitamin C with 1000mg Quali-C Extra Strength Vit-C Ascorbic Acid for Daily Antioxidant Immune Health. This vitamin C makes use of liposomal technology, so you will get the highest bioavailability of vitamin C, which means that men and women can get more ascorbic acid into the body without any digestive problems. It makes use of Quali-C, one of the highest grades of vitamin C available. With this as its main ingredient, you will get a decent amount of vitamin C to nourish your body from within. You will find that taking this liposomal vitamin C can improve your immune system, so you will feel well protected against various diseases.

- Liposomal vitamin C formula that can enhance your immune system.
- Contains high-quality vitamin C for more health benefits.
- Supports skin, bone, and immune system health with regular use.

If you want to get your hands on the best liposomal vitamin C, you will find that Lypo-Spheric Vitamin C is a good option to try out. This product makes use of a newer delivery technique, so you will be able to absorb more of the vitamin in the process.
You will find that taking this supplement can help support your immune system, since you are getting more of the vitamin C this way. It can also help repair muscle tissue, especially for those who are active. Taking this vitamin C provides a host of powerful antioxidants that can protect you against the damage caused by free radicals.

- Makes use of liposomal encapsulation technology for faster absorption.
- Supports a stronger immune system when taken regularly.
- Delivers plenty of antioxidants to your system to protect you against free radicals.
- Contains soy, which can disrupt the endocrine process.

You should try Dr. Mercola Liposomal Vitamin C if you need the best liposomal vitamin C. This supplement makes use of liposomal technology to provide better bioavailability of vitamin C while protecting against discomfort in your intestines. You will find that this supplement can deliver higher amounts of vitamin C compared to other oral medications because it is absorbed into the body quickly. It can also save you from having to take vitamin C intravenously, especially when you need higher amounts of this vitamin.

- Makes use of liposomal technology for better absorption.
- Provides protection against intestinal discomfort.
- Delivers a higher dosage of vitamin C to your system compared to other oral vitamin C supplements.
- Can trigger intestinal problems.

Optimized Liposomal Vitamin C 1000mg Softgels is worth considering if you want the best liposomal vitamin C for your diet. This product makes use of high-quality vitamin C made of pure ascorbic acid. In this product, the vitamin C is mixed with non-GMO sunflower so that the vitamin is trapped in tiny phospholipid spheres and is better absorbed by the body without harming the stomach. This product contains no gluten, soy, or salt.
Taking this product regularly will give your immune system a healthy boost so you won’t suffer from various sicknesses such as colds, coughs, and the like.

- Liposomal vitamin C made from high-quality vitamin C produced in Scotland.
- Uses liposomal technology for better absorption of the vitamin.
- Boosts immune system function to protect you against various health issues.
- Can cause an upset stomach.

You can also try Ultimate Liposomal Vitamin C by Board Room Organics if you want to use the best liposomal vitamin C for your health. This product doesn’t contain any preservatives, so you know you are getting the best value for your money. It can help fight off infections in your body, reduce cholesterol levels, improve brain function, and even enhance nutrient absorption. You will find that this vitamin C is easily absorbed and digested, so the nutrients will reach your organs and cells easily enough. You can use it along with other supplements for better absorption of those products, so your body will get the most out of them.

- All-natural liposomal vitamin C for better absorption of this essential vitamin.
- Improves the immune system so you will be protected against diseases.
- Is absorbed and digested easily and can be combined with other nutrients too.
- Has a strong and unpleasant taste.

You’re probably eager to get your hands on the best liposomal vitamin C, but which one in this list should you start with? From what we have gathered, we think your best bet would be Zenwise’s Liposomal Vitamin C, which makes use of a high grade of vitamin C delivered through liposomal encapsulation technology, so you will get a healthy dose of this vitamin into your system. Ingesting this product will help improve your immune system so you will be well protected against diseases like colds, coughs, and the like. The liposomal technology allows better absorption of nutrients, especially when you combine it with other health supplements.
You will find that your overall health will get a serious boost when you add liposomal vitamin C to your diet.
<urn:uuid:d7b79319-ee21-4384-8825-256c63681705>
CC-MAIN-2021-43
https://www.positivehealthwellness.com/product-reviews/whats-the-best-liposomal-vitamin-c-for-better-health/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00350.warc.gz
en
0.929137
2,885
2.53125
3
Sunday, 22 February 2015

The Character Assassination of George Boleyn

Above: Negative depictions of George Boleyn were offered in both The Tudors (left) and The Other Boleyn Girl (right).

My research focuses on the maligning and misrepresentation of high-status women in late medieval and Tudor England, examining the gendered dimension of, and motivations for, attacking prominent women. However, it was not only women who were defamed and maligned, but men too. One outstanding example is George Boleyn, Lord Rochford, whose posthumous reputation has been notorious. He is frequently depicted as a rapist, homosexual, heretic, womaniser, abuser or coward - and, often, all of these are mixed together. This has resulted in what could be termed the black legend of George Boleyn. However, this popular understanding of George has little basis in historical evidence. How did this black legend come about? It is difficult to separate the defamation of George from the character assassination of his entire family. The Boleyns have, in popular culture, become synonymous with greed, treachery, political ambition and ruthlessness, although modern historians recognise that they were typical of the period in which they lived and were no worse than other noble families. But this has not prevented novelists, film-makers and directors from slandering the Boleyns. Thomas Boleyn, a respected diplomat and talented linguist in his day, has been caricatured as an ambitious, grasping egotist. His daughter Mary has become synonymous with bawdiness and prostitution, although another line of thought has cast her as the innocent, vulnerable victim of her merciless family. Queen Anne Boleyn has suffered the most, being reduced to a scheming manipulator, a nymphomaniac, or a homewrecker - and frequently, the three are mixed together. George Boleyn, then, is, like other members of his family, a victim of the abuse directed at the Boleyns.
Above: Anne (left) and Mary (right), the sisters of George.

Modern historians have been fairly appreciative of George Boleyn's significance, particularly in the English Reformation. In 1531, he was one of several crown officials who assisted Henry VIII in his claim to be supreme head of the English Church. Like his sister Anne, George owned a number of French evangelical works. He turned two of them into presentation copies for his sister, based on the works of Jacques Lefevre d'Etaples. These texts stressed the necessity of having a living faith in Christ in order to attain salvation, rather than relying on good works and on the rituals of the established Church. George wrote a dedicatory letter to his sister in one of the texts, "The Epistles and Gospels for the Fifty-Two Weeks of the Year", in which he signed himself as her 'most loving and friendly brother'. He also assured her that he loved her. These texts testify to both George's devout religion and his close relationship with Anne. Later, George's execution speech in 1536 confirmed his prominent involvement in religious reform at Henry VIII's court. He referred to himself as 'a great reader and mighty debater of the Word of God, and one of those who most favoured the Gospel of Jesus Christ.' The Imperial ambassador Eustace Chapuys frequently accused the Boleyns of being 'more Lutheran than Luther himself', and it seems plausible that, when Anne became queen, her brother assisted her in advancing evangelical reform at court. Their father, also passionate about religious reform, was probably involved as well. Joseph S. Block has written that George's passion for religious reform was motivated not only by a desire to assist his sister but was also 'a guiding light in his life'. George's interests, however, extended beyond religion. His biographers Claire Ridgway and Clare Cherry note that he was a talented poet and linguist.
George also enjoyed an excellent diplomatic career, a point often overlooked, especially in popular depictions of him. For example, in 1529 he was knighted and led an embassy to France at an unusually young age (if he was born around 1504, he would have been only twenty-five or so: a point which indicates that there was already considerable confidence in his abilities). Soon after, George became Viscount Rochford. In 1533, he again travelled to France, where he informed the French king of Henry VIII's marriage to Anne and was able to secure Francois I's support in the struggle against the papal denunciation of Henry's annulment of his first marriage. Further embassies followed later that year and in 1534, and in June 1534 George was rewarded for his diplomatic successes with appointment as Lord Warden of the Cinque Ports. Around the end of 1524, he married Jane Parker, the daughter of Henry, Lord Morley. George's marriage to Jane has more often than not been portrayed as a vicious, tempestuous and abusive union. Alison Weir describes their relationship as "unhappy" and asserts that Jane testified against her husband at the time of his downfall in spring 1536 because she was revolted and disgusted by his sexual practices. This idea was originally put forward by the academic historian Retha Warnicke, who suggested that George was not only promiscuous but also guilty of sodomy with several men. Perhaps unsurprisingly, this salacious notion has become enshrined in popular culture and has hugely influenced the prevailing view of George. In Philippa Gregory's novel The Other Boleyn Girl, George Boleyn is a promiscuous, unhinged and sexually disturbed man who not only has an affair with Francis Weston but is strongly suggested to have slept with his sister Anne, resulting in the birth of a deformed child and accusations of witchcraft and incest that send both of them to the scaffold.
In the television series The Tudors, George is portrayed in a darker light as a serial abuser who sexually assaults his innocent wife on their wedding night. He also enjoys a sexual relationship with Mark Smeaton. The Tudors provided the most manipulative, abusive, violent and cruel portrayal of George Boleyn to date. Hilary Mantel's Wolf Hall and Bring Up the Bodies present George more as an arrogant, shallow dandy or fop who shamefully cries at his trial and, during his lifetime, is concerned with nothing more than the pursuit of luxury and decadence. The television series Wolf Hall similarly presents his relationship with Jane as abusive, with George treating her violently and with contempt.

Above: George Boleyn in Wolf Hall.

There is no evidence for any of this, and we cannot know the truth of George and Jane's marriage. Their childlessness has frequently been cited as conclusive evidence of their marital unhappiness, but it is equally possible that either partner suffered from infertility, or perhaps there were other problems that we just do not know about. In the absence of evidence, it is unfair and contrary to historical practice to speculate negatively about their relationship. Contrary to legend, Jane was not the 'principal accuser' of her husband in the wake of his downfall in 1536, and she may have been coerced into providing testimony, in what must have been a terrifying and traumatic experience for her. Jane has traditionally been presented as jealous and hurt by George's close relationship with his sister Anne, with George neglecting and mistreating his wife in favour of spending time with his more attractive and accomplished sister. Again, there is no evidence for this. Perhaps Jane was jealous of Anne, perhaps she hated her, perhaps she did love George and was hurt by his treatment of her. Alternatively, and equally validly, perhaps she enjoyed a good relationship with her sister-in-law and was treated well by George.
We cannot say, and as stated, it is unfair and fruitless to speculate. George was implicated in Anne Boleyn's downfall in the spring of 1536, and he was one of five men sentenced to death for committing adultery with her and plotting to murder Henry VIII (and, in his case, incest). While the majority of modern historians reject these charges entirely, and present all six as innocent of adultery and treason, several writers have speculated that George was guilty of 'unnatural' sexual offences, primarily sodomy and buggery. His supposed sexual partners have been identified as Mark Smeaton and, perhaps, Francis Weston, both of whom were also accused of adultery with the queen and executed. As noted above, the notion of George Boleyn as a homosexual has gained credence in popular culture, and it is almost impossible to read a novel, watch a film, or view a play about him that does not subscribe to this prevailing view of him as a lover of men. However, as with his relationship with his wife, there is no evidence for this. George did provide Smeaton with a manuscript attacking the institution of marriage, but it is reading far too much into this to infer from it alone, in the absence of any other evidence, that George and Smeaton were lovers. George Cavendish, who served Cardinal Wolsey, later put a confession into George's mouth: 'my life not chaste, my living bestial, I forced widows, maidens I did deflower' - in other words, identifying him as a serial womaniser rather than a sodomite. However, Cavendish was a hostile and prejudiced writer with a clear agenda: to blacken the name of the Boleyns, whom he blamed for the downfall of his master. The famed Tudor poet Sir Thomas Wyatt, who knew George personally, lamented that had George 'not been so proud, for thy great wit each man would thee bemoan'. It is disturbing to realise that George Boleyn's posthumous reputation has been more negative than it was in his own lifetime.
Since his death, he has been subjected to vitriolic attack, shameful slander and, in short, character assassination. In his lifetime, observers alleged that his chief vices were pride, arrogance and, from the perspective of religious conservatives, heresy. However, with the exception of Cavendish, none of them accused him of sexual lechery, and his intimate, loving relationship with Anne was cruelly distorted and misrepresented in 1536 as an incestuous relationship in order to get rid of them both. George Boleyn was a talented linguist, a renowned poet, and above all, a principal exponent of religious reform. He occupied a central place in English politics in the late 1520s and most of the 1530s, and was actively involved in several embassies on the Continent. His contemporaries were in awe of his talents and appreciated his significance. Controversy about George centres on his sexual preferences and on his relationship with his wife Jane. In the absence of any evidence that their relationship was unhappy, or that his sexual behaviour was anything but conventional, George should be given the benefit of the doubt, and should instead be admired and respected for his talents and skills.
<urn:uuid:e6078374-f712-46fa-8043-5fdf16160783>
CC-MAIN-2021-43
https://conorbyrnex.blogspot.com/2015/02/the-character-assassination-of-george.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00390.warc.gz
en
0.98901
2,270
2.953125
3
Color psychology is the study of how color influences decisions. Color influences your perceptions in subtle ways. You won’t notice it happening, but colors can influence what you buy, how food tastes, and how you feel about certain things. The influence of color can differ between individuals: age, gender, and culture all affect how a person perceives colors. Plus, too much of a particular color can draw out the wrong emotions.

How Color Influences Decisions

Today we will cover the colors red, orange, yellow, green, blue, indigo, and purple. The goal is to understand how color influences decisions. Depending on the person, colors can have positive or negative effects on the mind. But before we dive in, let us look at some real-world experiments that were conducted to see how colors influence us. Most scientific studies of color are specific to certain topics. Here are some confirmed experiments, gathered by study.com; all credit goes to them. These experiments show that colors have a huge impact on how we perceive things and the actions that we take. After reading the rest of this resource, you will understand why certain businesses operate with certain colors.

The 7 Colors of The Rainbow

1. Red

The color red comes with many meanings. You must have seen red used in many places and businesses. Like other colors, red can draw out negative and positive emotions. Red is a warm and positive color associated with our most physical needs, including our will to survive. Red can trigger our fight-or-flight response. Red is energizing; it promotes ambition, determination and action. Red is strong-willed and can inspire confidence in those who lack it. Red is often associated with love, desire, and sex.
Although red is often used to represent love, love is best represented by pink. There is often a misconception that red represents anger; however, anger is only one emotion that red can trigger within you. The paler the color red becomes, the more feminine it is. When you add black or grey to red, you create a warm red shade and tone; the resulting reddish brown is very earthy and masculine.

All Positive Meanings of Red: action, energy, speed, attention-getting, assertive, confident, energizing, stimulating, exciting, powerful, passionate, driven, courageous, strong, spontaneous and determined.

All the Negative Meanings of Red: aggressive, domineering, over-bearing, tiring, angry, quick-tempered, ruthless, fearful, intolerant, rebellious, obstinate, resentful, violent and brutal.

2. Orange

The color orange combines the cheerfulness of yellow and the physical energy of red. Orange radiates warmth and happiness. Yellow relates to mental reactions and red relates to physical reactions; orange relates to your gut reactions or gut feelings. Orange is enthusiastic, adventurous and risk-taking; it radiates confidence and independence. The color psychology of orange is rejuvenating, optimistic, and uplifting. Orange can help us bounce back from despair and life’s disappointments, assisting us with recovery from grief. The optimistic nature of orange makes it an ideal color to use during tough economic times because it promotes motivation and a better outlook on life. Orange promotes social communication and stimulates two-way conversation. Orange can also increase your appetite: many restaurants use orange in their decor because, much like red, it stimulates the appetite, though in a subtler way.

All the Positive Meanings of Orange: sociable, optimistic, enthusiastic, cheerful, self-confident, independent, flamboyant, extroverted, uninhibited, adventurous, risk-taker, creative flair, warm-hearted, agreeable and informal.
All the Negative Meanings of Orange: superficial, insincere, dependent, overbearing, self-indulgent, the exhibitionist, pessimistic, inexpensive, unsociable and overly proud.

3. Yellow

The color yellow relates to acquired knowledge. Yellow stimulates our mental faculties by resonating with the left, or logical, side of the brain. Yellow promotes mental agility and perception. Yellow is uplifting and illuminating, bringing hope, happiness, cheerfulness, and fun. Yellow is the color of creativity and new ideas. Yellow is the most visible of all the colors; we use it in crosswalks and pedestrian crossings because it is so highly noticeable. Yellow can also produce anxiety in us because it is fast-moving. In design, too much yellow can cause agitation, so it is always better to use yellow sparingly.

Positive Meanings of Yellow: optimism, cheerfulness, enthusiasm, fun, good-humored, confidence, originality, creativity, challenging, academic, analytical, wisdom and logic.

Negative Meanings of Yellow: being critical, judgmental, overly analytical, impatient, impulsive, egotistical, pessimistic, an inferiority complex, spiteful, cowardly, deceitful, non-emotional and lacking compassion.

4. Green

Green is the color of balance and harmony. It balances the mind and the heart. Green is a relaxing color with strong associations of growth, spring, renewal, and rebirth. If you go to a place with many green trees or plants, you will often feel renewed or replenished. Green is an emotionally positive color that restores life energy and gives us the ability to love and nurture others and ourselves unconditionally. As a combination of yellow and blue, green carries the mental clarity and optimism of yellow with the emotional calm and insight of blue. Green inspires hope and generosity of spirit. In a negative light, green can be judgmental and overly cautious. However, green is best known for signifying prosperity and abundance.
In finance, green is used to represent safety.

Positive Meanings of Green: growth, vitality, renewal, restoration, self-reliance, reliability, dependability, being tactful, emotionally balanced, calm, nature lover, family oriented, practical, down to earth, sympathetic, compassionate, nurturing, generous, kind, loyal with a high moral sense, adaptable, encourages ‘social joining’ of clubs and other groups, a need to belong.

Negative Meanings of Green: being possessive, materialistic, indifferent, over-cautious, envious, selfish, greedy, miserly and devious with money, inconsiderate, inexperienced, a hypochondriac and a do-gooder.

5. Blue

Blue is the color of trust and responsibility. Blue is often used to represent trustworthiness, honesty, and loyalty. Blue promotes physical and mental relaxation, seeking peace and tranquility above everything else. The paler blue becomes, the more freedom we feel in our lives. In the meaning of colors, blue is one-to-one communication, especially voice communication. Blue is a strong representation of self-expression, higher ideals, and the ability to communicate our needs and wants. Blue also represents strong, trusting and lasting relationships. From a negative perspective, blue can signify a conservative nature or resistance to change. Blue also relates the present and the future to experiences in the past.

Positive Meanings of Blue: loyalty, trust, integrity, tactful, reliability, responsibility, conservatism, perseverance, caring, concern, idealistic, orderly, authority, devotion, contemplation, peaceful and calm.

Negative Meanings of Blue: being rigid, deceitful, spiteful, depressed, sad, too passive, self-righteous, superstitious, emotionally unstable, too conservative, old-fashioned, predictable, weak, unforgiving, aloof, and frigid. It can also indicate manipulation, unfaithfulness, and untrustworthiness.

6. Indigo

Indigo is the color of intuition, perception and the higher mind.
Indigo represents integrity and deep sincerity, making service to humanity one of its strengths. Indigo relies on intuition, rather than on gut feelings like the color orange. The color meaning of indigo conveys great devotion, wisdom, justice, fairness, and impartiality. Indigo is a defender of people’s rights to the end. Indigo stimulates the right brain, giving way to creative activities and helping with spatial skills. From a negative perspective, indigo relates to fanaticism and addiction.

All the Positive Meanings of Indigo: integrity, sincerity, structure, regulations, highly responsible, idealism, obedience, highly intuitive, practical visionary, faithful, devotion to the truth and selflessness.

All the Negative Meanings of Indigo: being fanatical, judgmental, impractical, intolerant, inconsiderate, depressed, fearful, self-righteous, a conformist, an addict, bigoted and avoiding conflict.

7. Violet & Purple

These colors relate to imagination, spirituality, and royalty. They are introspective colors, allowing us to get in touch with our inner selves and deeper thoughts. Purple is a mix of red and blue, while violet is part of the rainbow spectrum; this is the main difference between the two colors. In color meaning, both contain the energy and strength of red, as well as the spirituality and integrity of blue. Keeping us grounded, these colors inspire spiritual enlightenment. Violet and purple support meditation and the practice of understanding one’s inner self. Violet promotes unconditional love and selflessness devoid of ego. Violet encourages creative pursuits, inspiration, and originality through real-life creative experiences. Violet encourages positive thinking, such as using your better judgment to help other people.

All the Positive Meanings of Violet and Purple: unusual, individual, creative, inventive, intuitive, humanitarian, selfless, unlimited, mystery, fantasy and futuristic.
All the Negative Meanings of Violet and Purple: immaturity, being impractical, cynical, aloof, pompous, arrogant, fraudulent, corrupt, delusions of grandeur and the social climber.

Good web designers understand the influence of color. How people feel about the colors that you have chosen to represent your business or brand is extremely important. As an entrepreneur, designer or marketer, you should always think about colors and the emotions that you are trying to draw out. Everyone buys emotionally, so it makes sense that colors can influence buying decisions.
<urn:uuid:031c2fb5-1b5c-4745-a82c-92fc00a09358>
CC-MAIN-2021-43
https://techhelp.ca/how-color-influences-decisions/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00390.warc.gz
en
0.905816
2,162
3.28125
3
Infobox on Starch
- Stowage factor (in m3/t): 2,12/2,26 (cases)
- Humidity / moisture: -
- Risk factors: see text

[Image: Example of Starch]

Pure starch is a white, tasteless and odourless powder that is insoluble in cold water or alcohol. Starch is a highly organized mixture of two carbohydrate polymers, amylose and amylopectin, which are synthesized by plant enzymes and simultaneously packed into dense, water-insoluble granules. Starch granules vary in size (1 to 100 microns [μm] in diameter) and shape, which are characteristic of their specific plant origin. Starch is the major energy reserve for plants; it is located mainly in the seeds, roots or tubers, stem pith, and fruit.

Starch amylose is primarily a linear chain of glucose units. Amylose chains can coil into double helices and become insoluble in cold water. Amylopectin also is composed of chains of glucose units, but the chains are branched. This branched structure renders amylopectin soluble in cold water. The molecular architecture of the amylopectin and amylose within the granules is not entirely understood, but the granules are insoluble in cold water. The functional properties of native starch are determined by the granule structure. Both the appearance of the granules and their functional properties vary with the plant source.

Physical and Functional Properties

In home cooking and in commercial food processing, native starches are used for their thickening properties. Starch granules, when heated in water, gradually absorb water and swell in size, causing the mixture to thicken. With continued heating, however, the swollen granules fragment, the mixture becomes less thick, and the amylose and amylopectin become soluble in the hot mixture. This process of granule swelling and fragmenting is called gelatinization. Once gelatinized, the granules cannot be recreated and the starch merely behaves as a mixture of amylose and amylopectin.
Because of the larger size of the swollen granules compared to the size of amylose and amylopectin, the viscosity of the swollen granule mixture is much higher than the viscosity (the resistance to flow of a liquid or semi-liquid mixture) of the amylose/amylopectin mixture. Starches from different plant sources vary in their gelatinization temperatures, rate of gelatinization, maximum viscosity, clarity of the gelatinized mixture, and ability to form a solid gel on cooling. The texture of heat-gelatinized starch mixtures is variable. Some gelatinized starch mixtures have a smooth, creamy texture, while others are more pastelike. Some starches form gels after cooking and cooling. These starch gels may lack stability and slowly exude water through the gel surface. A similar breakdown of the gelatinized starch occurs in some frozen foods during thawing and refreezing. Although amylose is soluble in the hot gelatinized starch mixture, it tends to become insoluble in the cooled mixture. This phenomenon is called retrogradation, and it occurs when the amylose chains bind together in helical and double-helical coils. Retrogradation affects the texture of the food product and also lowers its digestibility. The proper starches must be employed for different food products to minimize these problems. Certain starches are good film formers and can be used in coatings or as film barriers that protect food from oil absorption during frying.

Native and Modified Starches

The predominant commercial starches are those from field corn (maize), potato, cassava (tapioca), wheat, rice, and arrowroot. Field cornstarch (27% amylose and 73% amylopectin) is the major commercial starch worldwide. Genetic variants of field corn include waxy maize, which produces a starch with 98 to 100% amylopectin, and high-amylose starches, which have amylose contents of 55%, 70%, and higher. Waxy starch does not form gels and does not retrograde readily.
High-amylose starches retrograde more extensively than normal starches and are less digestible. Their linear structure enables them to form films. From the 1940s on, the demand for convenience foods, dry mixes, and various processed foods has led to the modification of starches for food use and for other commercial products. These modified starches improve the textural properties of food products and may be more suitable for use in modern processing equipment. The Food and Drug Administration regulates use of the various modified food starches by stipulating the types of modification allowed, the degree of modification, and the reagents used in chemical modification. However, the food label is required only to state that "modified starch" is present. Only a small fraction of the sites available for modification of the food starches are actually modified. Although the degree of modification is small, the properties of the starches are significantly improved. This small degree of modification is sufficient to give a more soluble and stable starch after cooking. The clarity of the gelatinized starch as well as the stability of the cooked starch and starch gels are improved. The modification procedures are carried out under mild conditions that do not cause gelatinization of the native starch granules, and therefore the functional properties of the granule are preserved. The emulsifying properties of starch also may be improved by proper modification, improving the stability of salad dressings and certain beverages. Physically modified starches include a pregelatinized starch that is prepared by heat-gelatinization and then dried to a powder. This instant starch is water-soluble and does not require further cooking. Because of its lower viscosity resulting from loss of granule structure, the starch can be used at higher concentrations. Certain confectioneries require high levels of starch to give structure to their products.
These gelatinized instant starches serve this role. Cold water swelling starches represent a different type of instant starch. They are made by a proprietary process that retains the granule structure but lowers the granule strength. These cold water swelling starches give higher viscosities than the other instant starches. They are used in instant food mixes and for products such as low-fat salad dressings and mayonnaise. Plant breeding has led to specialty starches with atypical proportions of amylose and amylopectin. Waxy maize starch with nearly 100% amylopectin is inherently stable to retrogradation. Chemically cross-linked waxy maize starch is a very high-quality modified starch. High-amylose starches have become available more recently and have led to lower caloric starches. Because of the crystallinity of these starches they are partially resistant to digestion by intestinal amylases and behave as dietary fiber when analyzed by the official methods of analysis for dietary fiber. Some of these high-amylose starches contain as much as 60% dietary fiber when analyzed. The nutritional value of uncooked (ungelatinized) starchy foods (cereal grains, potato, peas, and beans) is relatively poor. The digestive enzymes do not readily convert the native granular starch of uncooked fruits and vegetables into glucose that would be absorbed in the small intestine. Undigested starch passes into the large intestine where, along with dietary fiber, it is broken down to glucose and fermented to short-chain fatty acids. Some of these short-chain acids are absorbed from the large intestine, resulting in recovery of some of the caloric value of the native starch.

Starch-Derived Dextrins and Corn Syrups

Modified starches as described above were developed to improve starch functionality in foods as well as their ability to withstand the physical forces of modern food processing systems.
In addition to the food applications of starches and modified starches, the native starches are also converted into other products that serve food and other industries. These products do not require the granular character of native starches, which is lost by chemical or enzymic action during processing of the starch. Dextrinization, a process requiring high temperatures and acid that has been in use since the early 1800s, converts native starch into dextrins that are composed of amylose and amylopectin chains of smaller sizes and altered structure. Consequently, food and nonfood industries have access to a range of dextrins of varying molecular sizes, solubility, and viscosity, but without the granular characteristics described above. Corn syrups are made in the same way as the dextrins, but they are converted to a higher degree such that glucose is a major ingredient. The more recent availability of an enzyme that converts glucose into fructose has led to a new industry in high-fructose corn syrups, which have found a strong market in beverages.

Grade: commercial, powdered, pearl, laundry, technical, reagent, edible, USP.

Use: adhesive (gummed paper and tapes, cartons, bags, etc.), machine-coated paper, textile filler and sizing agent, beater additive in papermaking, gelling agent and thickener in food products (gravies, custards, confectionery), oil-well drilling fluids, filler in baking powders (cornstarch), fabric stiffener in laundering, urea-formaldehyde resin adhesives for particle board and fibreboard, explosives (nitrostarch), dextrin (starch gum), chelating and sequestering agent in foods, indicator in analytical chemistry, anticaking agent in sugar, face powders, adherent and mold-release agent, polymer base.

Shipment / Storage / Risk factors

Starch is usually packed in bags or cases. Should be stowed apart from odorous, wet or oily goods. Liable to take taint, heat and cake when wet.

See also: Flour
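The stowage factor in the infobox translates directly into the hold space a consignment occupies: space in m3 equals cargo weight in tonnes times the stowage factor in m3/t. A minimal sketch of that arithmetic (the function name and the 1,000 t consignment are illustrative, not from the original):

```python
def stowage_space_m3(cargo_tonnes: float, stowage_factor: float) -> float:
    """Hold space needed: stowage factor (m3/t) times cargo weight (t)."""
    return cargo_tonnes * stowage_factor

# Starch in cases, per the infobox: stowage factor 2,12-2,26 m3/t.
# A hypothetical 1,000 t consignment would therefore need roughly:
print(stowage_space_m3(1000, 2.12))  # 2120.0 m3 (lower bound)
print(stowage_space_m3(1000, 2.26))  # 2260.0 m3 (upper bound)
```

The range reflects packaging: broken stowage between cases pushes the effective factor toward the upper figure.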
Written by Greg Seitz This article originally appeared on St. Croix 360’s website. You can read it on the site here. A prickly antenna at Carpenter Nature Center along the lower St. Croix River represents a big step in better understanding the wildlife that travel through this area. The device is the first addition in the St. Croix Valley to a global network that lets scientists track birds — or even large insects. The antenna can detect signals more than 12 miles away from animals fitted with “nanotags,” tiny transmitters that scientists affix to small birds, monarchs, and even dragonflies. Carpenter’s new antenna is part of the Motus network, a Canadian effort which boasts nearly 1,000 receivers around the world, providing valuable insights into wildlife and conservation. This web of antennas is letting researchers follow animals in ways that have never before been possible. Any researcher can place nanotags on their study creatures and let them loose again. They pay small fees to the network to register the tags. The animals are then automatically detected if they pass near a Motus tower, and the times and locations are shared. If it passes enough receivers, a detailed picture of a bird’s life can be observed. Carpenter sits in the center of North America next to two major migration routes, meaning the new tower could be a very important part of the network. “Our tower, strategically located at the confluence of the St. Croix River and Mississippi Rivers, is listening for nanotags along one of the busiest migration flyways on our continent,” says Jen Vieth, Carpenter’s executive director. The Motus network is changing the possibilities of tracking wildlife, and is being used by many researchers already. This international network hopes to someday blanket large swaths of the Western Hemisphere with antennae, ensuring almost all tagged creatures that pass through will be detected. 
With enough of these towers, a bird tagged in its breeding habitat in Canada, for example, could be tracked all the way through the United States and on to its wintering grounds in Costa Rica. This provides invaluable information about migration strategies, what kinds of habitat it depends on throughout its life, and threats to its survival wherever it roams.

Obstacles and opportunities

Putting a Motus tower at Carpenter took time, technology, funding, and a few headaches. The Minnesota Ornithologists' Union, a statewide organization dedicated to studying birds, provided a grant to fund much of the project, excited by the possibility of Motus eventually covering the state. "I thought funding would be the biggest challenge," Vieth says. "But I was very wrong." While she stresses that any committed group could add an antenna to the network, the technology and components can be daunting. It was only thanks to a dedicated intern (and his father), partners with the Minnesota Department of Natural Resources, the Natural Resources Research Institute, volunteers, and others that the tower was finally finished. The series of challenges began with the complex technology. The intern and his dad were essential to figuring all that out. Once they knew what they needed to buy, they had to order some of it from the United Kingdom — but the charge on the nature center's credit card was repeatedly misidentified as theft, and rejected. The obstacles were only beginning. When Carpenter finally acquired all the materials, the roof where they wanted to install it started leaking. It needed to be replaced, and then asbestos was discovered in the shingles. An abatement company was called in to safely dispose of the harmful material. Finally, just when the roofing company got started working, the workers were called away to another project. The replacement ultimately took six months. This March, the roof was finally finished and the frame that would support the Motus tower was installed.
The team was ready to hook up the antenna — then came the coronavirus pandemic, halting progress. Finally in June, after restrictions were eased, the tower was installed with the help of volunteer Ben Douglas. Douglas then spent many hours troubleshooting the system until July, when the tower was confirmed to be operational. Despite the difficulties Carpenter encountered, Vieth says almost any group of committed people can deploy a new antenna and start feeding detections into the database. "What's amazing about Motus is that even small groups can get involved in installing 'listening' towers. You don't need to have a bird banding license to do it," Vieth says. "The tech is tricky, but more and more volunteers are learning how to support the system. Schools, nature centers, even individual homeowners could put up a tower if they had the funds and tech support."

Building on banding

Volunteers have been banding birds at Carpenter for almost four decades, capturing them with large stationary nets and affixing tiny numbered bands to their legs, then setting them free again. This widespread practice by licensed bird banders has contributed enormous insights to ornithology for more than 50 years. But banding takes a lot of time, and it's limited in what it can tell researchers. Only a fraction of banded birds are ever documented again. Learning anything about a banded bird's travels requires recapturing the bird or otherwise observing the band — by another bander, by a citizen who knows how to report the band, or by capturing the bird again at the original banding site. The band number is then reported to a central database. Even then, only a few dates and locations for an individual bird can be recorded over its lifetime. "The data gleaned from one bird's [Motus] track provides much more detail than we'd get from one band recovery," Vieth says. Nonetheless, banding has revealed fascinating information.
Birds banded at Carpenter have been recovered in Alaska, Canada's eastern Maritime region, Arkansas, and Central and South America. Banding over long time periods can also help track changes in local bird populations. The long history of banding at Carpenter and elsewhere makes Motus even more valuable, as it will build on previous knowledge.

Motus in the Midwest

The Motus network has the potential to be a revolution in ornithology and other wildlife studies. "The technological advances involved in nanotech will allow very small creatures to be tagged and passively monitored as they pass by towers that are listening 24 hours a day, seven days a week, 365 days a year," Vieth says. While Motus is already widespread on the East Coast, covering most migration routes, it's just becoming feasible in the Midwest. As more researchers in the region start using nanotags, and more antennas are installed, more and more migrators will pass by and be detected. The Carpenter tower has detected three passing birds so far. One of the passerine passersby is a good example of the power of Motus, Vieth says. On Sept. 18, the antenna picked up the signal from a blue jay that had been tagged in Grand Marais, along the North Shore of Minnesota, just two weeks earlier. The bird was tagged by the University of Minnesota Duluth's Natural Resources Research Institute. Dr. Alexis Grinde deployed 10 towers along Lake Superior's western end in 2018 and one more this spring. Now the blue jay was headed south for the winter. Motus is the only way researchers could have known when and where it went. "This bird did not stop at Carpenter, as it simply flew over the site while en route to its winter destination," Vieth says.
"That blue jay wouldn't have been recaptured in our nets at all, meaning it would have passed by undetected." Another blue jay tagged in the same study two years ago was tracked for almost the entire month of March by a receiver in Indiana, showing where this bird depended on safety and food to survive between breeding seasons. "Now we know that this jay relies on specific breeding grounds up north, and has a specific wintering site in Indiana," Vieth says. "If this were an endangered species, we'd now know about two particular sites to focus on in protecting the species." Vieth hopes that someday, Carpenter will deploy its own nanotags on birds, monarchs, and migratory dragonflies. For now, they're excited to join the detection effort.

A growing network

More towers are soon to come in the upper Midwest. Groups across the region, like the U.S. Fish and Wildlife Service, Midwest Migration Network, and Minnesota's and Wisconsin's DNRs, are currently deciding how towers should be prioritized in the region. "For example, we know most migrating birds move from north to south and back again, so this group has a goal of a virtual 'detection wall' across Minnesota," Vieth says. "That way migratory species moving through Minnesota would hopefully be detected along this latitudinal line." Carpenter is currently raising funds to install a second tower along the lower St. Croix, at its Wisconsin campus south of Hudson. The project has been partially funded by Tropical Wings, but needs about $2,000 more. They hope to deploy it by next spring migration. Vieth hopes it will add a lot of detail to how birds move along and around the river — and where they spend the winter. "Adding the Wisconsin tower will help elucidate information on the St. Croix River bird movements, which is an exciting prospect when you think big picture," Vieth says.
The Saint Croix National Scenic Riverway formed a sister park relationship in 2013 with Costa Rican national parks, based primarily on the connections created by migrating birds that breed along the river and spend the winter in Central America. Vieth says the Motus towers could finally show this relationship in detail. "It would be amazing to be able to detect migratory connectivity between the two sister parks!" Vieth says. That possibility is also why Tropical Wings is supportive, as the organization exists to study, celebrate, and protect the migrants that travel between the St. Croix region and Costa Rica. The more towers, the more scientists will be able to learn about the incredible journeys many wild animals undertake. For now, the antenna on top of Carpenter's offices will be listening all day and all night, waiting for another signal from creatures carrying tiny transmitters. To learn more, visit the Motus website.
Standalone Audio Packs are also available, and the series is suitable for English Language Learners. The Online Teacher Resource Pack is designed to accompany the Student Book and is available as an online annual subscription. It contains exam practice papers with mark schemes, exam sample answers with commentary, and topic tests with answers. These resources have been written to support the Pearson Edexcel International GCSE (9–1) Further Pure Mathematics specification, a linear qualification which consists of two examinations available at Higher Tier only, targeted at grades 9–4 (with grade 3 allowed). Both examinations must be taken in the same series at the end of the course of study. The resources are available in print and digital formats.

Is it for me? The series is specifically developed for international learners, with appropriate international content, and uses the globally recognised 9–1 grading scale, which allows learners to achieve their full potential and make more informed decisions about their options for progression. Each Student Book provides 3-year access to an ActiveBook, a digital version of the Student Book which can be accessed online, anytime, anywhere, supporting learning beyond the classroom. The Online Teacher Resource Pack provides further planning, teaching and assessment support. Each book has been reviewed by a language specialist to ensure it is written in a clear and accessible style, and includes a glossary of specialist vocabulary.
International GCSE is a globally recognised qualification for international learners aged 14–16. It delivers a consistent learning journey, with world-class support services, for students and teachers everywhere in the world. The published resources have been specifically written to support the Pearson Edexcel International GCSE (9–1) specification, a linear qualification with academic content and assessment designed specifically for international learners aged 14–16, which consists of examinations at the end of the course of study. These resources and qualifications have been written for the new International GCSE (9–1), with progression, international relevance, exam practice and support at their core.

Together, Mathematics A Books 1 and 2 provide comprehensive coverage of the Higher Tier specification. Student Books include 3-year access to an ActiveBook, and an endorsed Revision Guide with App is also available. The ActiveLearn Digital Service brings together planning, teaching and assessment in one service, saving you valuable time; together with its interactive activities, it creates a personalised teaching and independent learning experience both in and outside the classroom.
Assessment consists of two tiers of entry, Foundation and Higher, which allow students to be entered at the appropriate level, with questions designed to be accessible to students of all abilities in that tier and papers that are balanced for topics and difficulty. To order a digital subscription, download and complete the International digital subscription order form and send it with your order via your usual ordering method.

The qualification supports progression to further study, with up-to-date content reflecting the latest thinking in the subject. To support effective classroom delivery, a range of published resources has been developed for the new Pearson Edexcel International GCSE (9–1), with progression, relevance and support at their core. There's more than one qualification for this subject, so please choose the one you're interested in. Register your interest to find out more about Pearson Edexcel International qualifications and sign up to receive the latest news.
Training sessions Results support New grading scale explained. Useful documents. Published resources To support effective classroom delivery, we've developed a range of published resources for the new Pearson Edexcel International GCSE 9—1with progression, relevance and support at their core. Contact us. Twitter :.Laurenzside anime Are you sure you want to exit this session? Yes No.It is important to remain flexible as daily plans may have to be altered due to weather (blizzard)!. Hinrik was exceptional in his quality of service.Meatoplasty ear Very knowledgeable and helpful, and always cheerful, despite a family bereavement, which he didn't disclose until after the tour, obviously as it might have affected our enjoyment of the trip. We enjoyed the holiday so much that we want to return to Iceland to see more of the wonderful country. Nordic Visitors could not have done enough for us. Pearson Edexcel International GCSE (9–1) Mathematics A We arrived in the middle of a storm and our whole holiday had to be re-scheduled but amazingly we only missed off seeing one thing on the itinerary. A fantastic and memorable holiday. Kolbrun was extremely helpful and efficient, answering my queries promptly including a question about our vouchers which arose whilst we were in Norway. The trip was a 40th birthday present for me and my husband, with our children, and we all thoroughly enjoyed it - even more than we had expected to. Hotels and guest houses an interesting mixture of styles. All good in their own way. We liked best the ones "in the middle of no-where". We were delighted to have found your company on the web and were very impressed by the quality of your service. Our hotels and meals were excellent in each instance. We found the hotel staffs to be extremely helpful and welcoming and we all thoroughly enjoyed the accommodations. 
We had two day tours in and around Reykjavik prior to our self-guided itinerary, and both were very informative, well organized, well guided and timely. Our arrangements all went flawlessly. We communicated with Larus several times prior to our arrival, and his timely responses were much appreciated. We thoroughly enjoyed our trip and found it very rewarding. My husband and I had a very pleasant experience with Nordic Visitor: our enquiries were responded to in a prompt manner, we were able to tailor-make our own itinerary according to our needs, and the hotels were great quality and in great locations as well. We would also like to express our gratitude to Bjarni and Helena for their assistance while we had some little issues in Norway. Bjarni was not our travel consultant, but he was extremely helpful while Helena was away and managed to organise our luggage transfer in just a few hours' time when we had no idea what we could do in Oslo. Our Norway in a Nutshell trip was cancelled due to a freight train being stuck on the track, and we were able to get our refund in a very timely manner. We think the service we received from the beginning to the end of our trip was excellent; Nordic Visitor and their consultants make sure their customers get the best out of it and are well looked after during their trip. I booked everything through Gudrun at Nordic Visitor for Iceland and it was super easy; they were really flexible with me, and even when the weather and my schedule changed, it was never an issue. Always prompt, always polite and always a good price. All the tours that were booked for us were exceptional. All the pick-ups were on time and all the guides were very knowledgeable. If I ever go back to Iceland, or any other Nordic country, I will certainly be using Nordic Visitor again. It was an amazing experience and I enjoyed it all. The time went too fast. We'll have to come back to see it all in a different season.
It was great to have the support of the amazing people of Nordic Visitor in the background, especially as I was on my own. Great service, great value. Sigfus was really helpful. I would definitely recommend Nordic Visitor to friends. Thanks so much for a wonderful trip.

The bottom line is that statistics education can be tailored to your unique path. Read more profiles of statisticians and data scientists, and some of the cool jobs they do. Change the World: Statisticians contribute to society in many ways, from protecting endangered species and managing the impacts of climate change to making medicines more effective and reducing hunger and disease. Have Fun: Careers in statistics are fun. Satisfy Curiosity: Statistics is a science. Make Money: Demand for statisticians is growing, and so are their salaries.

Statisticians Making a Difference (October 28, 2016): Statisticians are making the world a better place.
Statistician Megan Price Promotes Social Justice and Human Rights (December 21, 2015): Megan Price uses statistics to answer important questions about social justice and human rights.
Deepak Kumar, LinkedIn Principal Data Scientist (July 9, 2015): This video features Deepak Kumar, a principal data scientist at LinkedIn.
Census Bureau (December 8, 2014): Chandra had her pick of prestigious positions when she graduated with a PhD in statistics from Yale.
Roger Peng, Johns Hopkins University (July 2, 2014): What impact will extreme weather events, such as droughts, floods and heat waves, have on human health?

In November 2017, China's manufacturing purchasing managers index (PMI) was 51. The manufacturing industry continued its momentum of steady growth. Ning Jizhe Visited the German Federal Statistical Office and Signed a Sino-German Statistical Cooperation Agreement: From November 18 to 19, Mr. Ning Jizhe, Commissioner of the National Bureau of Statistics (NBS) of China, led a delegation to visit the German Federal Statistical Office in Wiesbaden, Germany.
Ning Jizhe Attended the 50th Anniversary of UNIDO and Visited the UNIDO: Mr. Ning Jizhe, Commissioner of the National Bureau of Statistics (NBS) of China, attended the 50th anniversary of the United Nations Industrial Development Organization (UNIDO) in Vienna, Austria. Xie Fuzhan Met with FSO Delegation on Price Statistics: Mr. Xie Fuzhan, Commissioner of the National Bureau of Statistics, met with the delegation on price statistics headed by Ms. Irmtraud Beuerlein, Head of Division VA (Prices), Federal Statistical Office.

There is no disputing the importance of statistical analysis in biological research, but too often it is considered only after an experiment is completed, when it may be too late. This collection highlights important statistical issues that biologists should be aware of and provides practical advice to help them improve the rigor of their work. Nature Methods' Points of Significance column on statistics explains many key statistical and experimental design concepts. Other resources include an online plotting tool and links to statistics guides from other publishers. This video is an introduction to statistics; it describes data with a peak at 60 and two gaps, one between 56 and 58 and the other between 62 and 64 (the units would be seconds).

Starting Winter 2018, the Department of Statistics will be offering a new course: STAT 180 Introduction to Data Science. The Department of Statistics at the University of Washington is also offering a full-time Tenure-Track Assistant (0116) or Tenured Associate (0102) Professor position.
In the early hours of April 26, 1986, the world witnessed the worst nuclear catastrophe in history. A reactor at the Chernobyl nuclear plant in northern Ukraine exploded, spreading radioactive clouds all over Europe and a large part of the globe. In all, 50 million curies of radiation were released into the atmosphere—the equivalent of 500 Hiroshima bombs. Here, the author of Chernobyl: The History of a Nuclear Catastrophe describes the dramatic exodus from Prypiat, a city of 50,000 located a few miles from the damaged reactor. The call came around 5:00 a.m. on April 26, awakening the most powerful man in the land, the general secretary of the Communist Party of the Soviet Union, Mikhail Gorbachev. The message: There had been an explosion and fire at the Chernobyl nuclear power plant, but the reactor was intact. “In the first hours and even the first day after the accident there was no understanding that the reactor had exploded and that there had been a huge nuclear emission into the atmosphere,” remembered Gorbachev later. He saw no need to awaken other members of the Soviet leadership or interrupt the weekend by calling an emergency session of the Politburo. Instead, Gorbachev approved the creation of a state commission to look into the causes of the explosion and deal with its consequences. Boris Shcherbina, deputy head of the Soviet government and chairman of the high commission, was summoned from a business trip to Siberia and sent to Ukraine. He arrived in Prypiat, the town that housed the construction workers and operators of the nuclear plant, around 8:00 p.m. on April 26, more than 18 hours after the explosion. By that time very little had been done to deal with the consequences of the disaster, as no one in the local Soviet hierarchy dared to take responsibility for declaring the reactor dead. Shcherbina began a brainstorming session. 
Only then did everyone accept what had been unthinkable only hours earlier: A meltdown had occurred, and the reactor’s core was damaged, spreading radioactivity all over the place. Radiation levels rose. Bigger blasts loomed. Officials dithered. The question was how to stop it from burning and producing ever more radioactivity. They bounced ideas off one another. Shcherbina wanted to use water, but they explained to him that dousing a nuclear fire with water could actually intensify the blaze. Someone suggested using sand. But how to bring it to the reactor? Shcherbina had already called military helicopter and chemical units into the area. Their commanders were en route to Prypiat. Soon after 9:00 p.m., while the members of the commission were brainstorming, the reactor suddenly awakened. Three powerful explosions illuminated the dark red sky above the damaged reactor, sending red-hot pieces of fuel rods and graphite into the air. “It was a striking spectacle,” remembered one of the commission’s experts who observed the scene from the third floor of the Prypiat party headquarters, where the high commission was housed. It looked as if the worst-case scenario was now coming to pass. Earlier in the day, experts had predicted a possible chain reaction starting as soon as the reactor emerged from the temporarily disabling iodine well. The explosion might be the first indication of a much bigger blast to come: They had no choice but to wait and see. But even without further explosions, the newest ones put Prypiat citizens in greater danger. The wind suddenly picked up, driving radioactive clouds northward from the damaged reactor and covering parts of the city. Radiation levels increased on the city plaza in front of party headquarters in downtown Prypiat, rising from 40 to 320-330 microroentgens (a legacy unit measuring exposure to electromagnetic radiation) per second, or 1.2 roentgens per hour. 
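As a quick sanity check on the figures above (this arithmetic is an editorial addition, not from the article itself): 1 roentgen is 1,000,000 microroentgens and 1 hour is 3,600 seconds, so a reading of about 330 microroentgens per second does indeed correspond to roughly 1.2 roentgens per hour:

```python
# Convert the plaza reading from microroentgens/second to roentgens/hour.
micro_r_per_second = 330          # upper end of the reported 320-330 range
seconds_per_hour = 3600
micro_r_per_roentgen = 1_000_000

r_per_hour = micro_r_per_second * seconds_per_hour / micro_r_per_roentgen
# 330 * 3600 / 1,000,000 = 1.188, i.e. about 1.2 roentgens per hour
```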
Armen Abagian, the director of one of the Moscow nuclear-power research institutes who had been dispatched to Prypiat as a member of the government commission, approached Shcherbina and demanded the city be evacuated. Abagian had just returned from the plant, where the explosions in the reactor had caught him unawares—he and his colleagues had had to seek shelter under a metal bridge. “I told him that children were running in the streets; people were hanging laundered linen out to dry. And the atmosphere was radioactive,” remembered Abagian. But according to government regulations adopted in the Soviet Union back in 1963, evacuation of the civilian population was not necessary unless the radiation dose accumulated by individuals reached the 75-roentgen mark. Calculations had shown that with the existing level of radioactivity, the intake might be about 4.5 roentgens per day. With the official threshold not yet met, Yevgenii Vorobev, the commission’s senior medical officer, was reluctant to take responsibility for ordering an evacuation. Police wore gas masks, but residents only heard rumors. As the commission dithered, people began leaving the city. Intercity telephone networks had been cut and the engineers and workers at the nuclear plant had been prohibited from sharing news of what had happened with their friends or relatives. But the family and informal networks that had always served Soviet citizens better than state-controlled media quickly activated in Prypiat, circulating rumors about the power-plant accident within hours after the explosion. Lidia Romanchenko, an employee of a Chernobyl construction firm, recalled: “Some time around eight in the morning [of April 26] a neighbor called me and said that her neighbor had not returned from the station; an accident had taken place there.” That information was soon confirmed by another source. 
“Our dentist friend said that they had all been awakened at night because of an emergency and summoned to the clinic, to which people from the station were taken all night.” Romanchenko decided to share the news with her own friends and family. “I got in touch with my neighbors and close friends right away, but they had already ‘packed their bags’ that night: A close friend had called and told them about the accident.” The city of Prypiat was slowly awakening to the reality of the disastrous accident in its backyard. Liudmila Kharitonova, a senior engineer in a construction firm, was on her way to her country house nearby when she and her family were stopped by police. They had to turn back to the city, where Liudmila saw foam on the streets—roads were being treated with a special solution by water trucks. In the afternoon military personnel carriers appeared on the streets, and military planes and helicopters filled the sky. The police and military were wearing respirators and gas masks. Children returned from school, where they had been given iodine tablets, and were advised to stay indoors. “We began to be more alarmed in the evening,” remembered Kharitonova. “It’s hard to say where the alarm came from, perhaps from inside ourselves, perhaps from the air, which by then was beginning to take on a metallic smell.” A rumor began to circulate that those who wanted to leave could do so. Still, there was no official information on what had happened and what to expect. Liudmila and her family went to the Yaniv railway station and got on a train to Moscow. “Soldiers were patrolling the Yaniv station,” she recalled. “There were lots of women with small children. They all looked a bit confused, but they behaved calmly.… But I felt nonetheless that a new age had dawned. And when the train pulled in, it seemed to me so different, as if it had just come from the old, clean world we used to know, into our new poisoned age, the age of Chernobyl.” The exodus had begun. 
Evacuation occurred with 50 minutes’ notice. It was close to midnight on April 26 by the time Abagian and other scientists managed to convince Shcherbina to order an evacuation. But Shcherbina’s decision needed approval from above. “They told one [party] secretary, and he said: ‘I can’t give you my agreement to this,’” recalled one of the participants of the meeting. “They got through to another who also expressed sympathy but said that he could not give his assent.” Eventually Shcherbina phoned his boss, Premier Nikolai Ryzhkov. “Shcherbina called me on Saturday evening,” recalled Ryzhkov, “and reported on the situation. ‘We’ve measured the radiation… Prypiat has to be evacuated. Immediately. The station is close by, and it’s emitting radioactive contagion. And people in the city are living it up full blast; weddings are going on…’ I decided: ‘Evacuation tomorrow. Prepare trains and buses today and tell the people to take only the bare necessities.’” By 1:00 a.m. on April 27, local officials in Prypiat had received an urgent order from Shcherbina to prepare lists of citizens for evacuation. They were given two hours to do the job. The columns of buses that had been waiting on the roads between Chernobyl and Prypiat for hours, absorbing high levels of radiation, began to move at 1:30 a.m. on the morning of April 27. Levels of radioactivity in the city were rising quickly. On April 26 it registered in the range of 14-140 milliroentgens per hour, but by about 7:00 a.m. on April 27 it had risen to between 180 and 300 milliroentgens; in some areas close to the nuclear plant, it approached 600. The original plan was to begin evacuation on the morning of April 27, but officials decided too late to meet the deadline. They pushed the evacuation to the early afternoon. To some Prypiat citizens, the evacuation came as a long-awaited relief, to others as a surprise. 
Prypiat city radio transmitted the announcement soon after 1:00 p.m. “Attention! Attention!” came the calm voice of a female announcer speaking Russian with a strong Ukrainian accent. “In connection with the accident at the Chernobyl atomic power station, unfavorable radiation conditions are developing in the city of Prypiat. In order to ensure complete safety for residents, children first and foremost, it has become necessary to carry out a temporary evacuation of the city’s residents to nearby settlements of Kyiv oblast [province]. For that purpose, buses will be provided to every residence today, April 27, beginning at 14:00 hours, under the supervision of police officers and representatives of the city executive committee. It is recommended that people take documents, absolutely necessary items and food products to meet immediate needs. Comrades, on leaving your dwellings, please do not forget to close windows, switch off electrical and gas appliances and turn off water taps. Please remain calm, organized and orderly.” The radio repeated more or less the same announcement four times, but many still did not understand the seriousness of the situation. “Just imagine,” recalled Aneliia Perkovskaia, a city official, “it was only an hour and a half before the evacuation. Our children’s cafeteria in a large shopping center was full of parents and children eating ice cream. It was a weekend day; everything was nice and quiet.” For 36 hours after the explosion, people were given no reliable information about it and left virtually on their own. They never received instructions on how to protect themselves and their children. Radiation levels that according to Soviet laws were supposed to trigger an automatic public warning about the dangers of radiation exposure had already been recorded in the early hours of April 26—but were ignored by one official after another. 
Finally, people were asked to gather their belongings and wait on the street a mere 50 minutes before the start of the evacuation. They were good citizens and did exactly what they were told to do. Liubov Kovalevska, a local journalist and the author of the recent article about quality-control problems at the construction site of the Chernobyl nuclear plant—which was ignored by the authorities—was among the thousands who boarded buses that afternoon, never to return to their homes. She had spent a good part of the previous night calming her elderly mother, who could not sleep after hearing rumors about imminent evacuation. Now their whole family was ready to leave. They were told that it would only be for three days. “There were already buses at every entrance,” recalled Kovalevska. “Everyone was dressed as if to go camping, people were joking, and everything was rather quiet all around. There was a policeman beside every bus, checking residents according to a list, helping people bring in their belongings, and probably thinking of his family, whom he had not even managed to see in the course of those 24 hours.” A film shot on April 26 and 27 by local filmmakers preserves images of a wedding taking place in the city attacked by radionuclides. It shows young men and women dressed in light summer clothes with their small children, walking the streets, playing soccer on sports grounds and eating ice cream in the open air. These scenes look surreal when juxtaposed with others shot by the same filmmakers: water trucks cleaning the streets, policemen and soldiers in protective gear atop troop carriers patrolling the streets of Prypiat and people waiting for buses that would take them away from their homes. One frame shows a doll on the window sill of an apartment building, seemingly waiting for its owner to return. 
Sparks and white flashes in the film frames reveal the true meaning of what we see on the screen. These are scars left by radioactive particles attacking the film through the thick lenses of the camera. The Chernobyl filmmakers kept shooting, their last frames coming from the windows of departing buses. Theirs turned out to be the last images of the city still full of people. By 4:30 p.m., the evacuation was all but complete. The authorities were eager to report their first success to Moscow. “Shcherbina called at lunchtime on Sunday,” recalled Premier Nikolai Ryzhkov. He told the premier: “There are no people left in Prypiat. There are only dogs running around.” (People had not been allowed to take their pets.) A few days later, the police would create special squads to kill stray dogs. But canines were not the only ones remaining in Prypiat. Close to 5,000 workers of the nuclear plant stayed to ensure that the shutdown of the other reactors proceeded as planned. Young lovers took advantage of their parents’ departure to have their apartments to themselves. Finally, there were the elderly who decided to stay behind. They could not understand why they had to leave when the evacuation was for three days only. They did not know they would be leaving forever. The evacuees brought not only their irradiated bodies, but also their contaminated clothes and personal belongings to their temporary homes. The next day, KGB officials informed Ukrainian party authorities that of the nearly 1,000 evacuees who had moved to towns and villages of nearby Chernihiv oblast on their own, 26 had been admitted to hospitals with symptoms of radiation sickness. The KGB was busy curbing the “spread of panicky rumors and unreliable information,” but could do nothing about the diffusion of radiation. With the evacuation of Prypiat and nearby villages complete, the buses returned to Kyiv. 
They were assigned to their regular routes, where they spread high levels of radiation around the city of 2 million. That was just the beginning. After the central party leadership refused to cancel a huge May Day parade in Kyiv, despite evidence of increasing levels of radiation, hundreds of thousands of schoolchildren were subsequently evacuated from that city as well. Unlike the citizens of Prypiat, they would be allowed to return home in the fall of 1986. While only a few dozen people died as a direct consequence of the explosion and radiation poisoning, the World Health Organization puts the number of cancer deaths related to Chernobyl at 5,000. More than 50,000 square miles of territory were contaminated in Ukraine, Belarus and Russia. In April 2016, when the world marked the 30th anniversary of the disaster, there was a temptation to breathe a sigh of relief. The half-life of cesium-137, one of the most harmful nuclides released during the accident, is approximately 30 years. It is the longest “living” isotope of cesium that can affect the human body through external exposure and ingestion. Other deadly isotopes present in the disaster have long passed their half-life stages. But the accident’s harmful impact is still far from over. With tests revealing that the cesium-137 around Chernobyl isn’t decaying as quickly as predicted, scholars believe the isotope will continue to harm the environment for at least 180 years—the time required for the cesium to be eliminated. Other radionuclides will perhaps remain in the region forever. The half-life of plutonium-239, traces of which were found as far away as Sweden, is 24,000 years. Serhii Plokhy is a professor of history at Harvard University and the director of the university’s Ukrainian Research Institute. He is the author of numerous books, most recently, Lost Kingdom: The Quest for Empire and the Making of the Russian Nation and Chernobyl: The History of a Nuclear Catastrophe. 
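The half-life figures quoted above imply simple exponential decay. As an illustrative back-of-the-envelope calculation (an editorial addition using the article's rounded 30-year value for cesium-137, not a precise decay model), the 180-year horizon corresponds to six half-lives, after which only about 1/64, or roughly 1.6%, of the original cesium-137 remains:

```python
def fraction_remaining(years: float, half_life_years: float) -> float:
    """Fraction of a radioactive isotope left after a given time."""
    return 0.5 ** (years / half_life_years)

cs137_left = fraction_remaining(180, 30)      # six half-lives of cesium-137
pu239_left = fraction_remaining(180, 24_000)  # plutonium-239 barely decays at all
# cs137_left ≈ 0.0156 (about 1.6%); pu239_left ≈ 0.995 (essentially all of it)
```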
History Reads features the work of prominent authors and historians.
<urn:uuid:10ed2f45-cd4b-41e6-8fe3-3cdb8d955fc7>
CC-MAIN-2021-43
https://www.history.com/news/chernobyl-disaster-coverup
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00590.warc.gz
en
0.979559
3,734
3.375
3
The Need for Stronger Gun Control Essay In the US, the debate over gun control has ebbed over the years, but it has been stirred anew by a series of episodes involving mass killings by gunmen shooting at random in civilian settings. In December 2012, a man took the lives of 20 innocent children in a mass shooting with two semiautomatic guns at an elementary school in Connecticut. This tragic incident shook the whole world and reawakened the long-standing gun-control debate (El-Ghobashy & Barrett, 2012). It prompted the administration of President Obama to take firm action to control the availability of military-style weapons. In January 2013, President Barack Obama proposed a gun control bill to prevent the violent use of guns, which called for a total ban on such weapons and large-capacity ammunition magazines, improved background-check systems, and stricter trafficking laws. A large number of people believe that restricting gun purchases through strict gun-control laws can be an effective solution to the gun violence problem. However, despite massive public support, the bill was rejected by the Senate in April 2013. Gun ownership in the US far surpasses that of other countries, making the US rank first in firearms per capita in the world (“11 Facts About Guns”, 2012). Such high gun ownership has played an important role in increasing gun violence, crime rates, homicides, and suicides in the US. Many studies, surveys, and official reports have confirmed that though gun control cannot curb the overall violence rate, it is an effective means of reducing the gun violence rate in the country. The current essay analyzes the effectiveness of gun control and the necessity of stronger gun control laws for reducing gun violence in the US. Relying on various statistics and scholarly reports, the paper presents the impact of gun control policies on the gun violence rate in the country. Almost 40 to 45% of American families possess guns in their homes. 
According to a survey by the Harvard School of Public Health, this is the highest rate of gun ownership among the developed countries of the world. Nearly one third of American adults own some type of firearm. Around 60% of Republican voters reported in an Election Day poll of 2008 that they had a gun in their homes; among Democrats, the figure was 25%. Though gun ownership rates have declined since the 1960s, the most significant decline has been recorded among Democrats (“Guns, Violence, and Gun Control”, 2013). According to the reports of Mayors Against Illegal Guns, an association of more than 1,000 US mayors, there were 93 mass shootings in 35 states from January 2009 to September 2013. The 35-page report, which is based on FBI and media data, further stated that assault weapons were used in 14 of the 93 shootings, resulting in an average of 63% more deaths (Moya-Smith, 2013). In 2011, the number of people killed with guns was 32,163, including 15,953 homicides, 19,766 suicides, and 851 unintended gun deaths. The number of homicides using military-style firearms was 679 in 2011 (Alpers, Rossetti, Wilson, & Royet, 2013). Reports suggest that almost four times as many African-American males as white males die from gun violence. The highest rate of gun violence is among 15-24 year olds. The majority of gun homicides, 79%, involve handguns, and in most cases these incidents are assaults without any purpose of theft, robbery, or rape. The number of people becoming victims of gun violence is increasing by almost 200,000 each year. A study of gun-related deaths in 23 developed nations carried out in 2003 showed that 80% of the deaths occurred in the US, even though the other countries had a combined population almost twice that of the US (“Guns, Violence, and Gun Control”, 2013). Guns in the US are regulated by state, local, and federal authorities. The country's gun laws and regulations can be characterized as permissive. 
The Second Amendment, adopted in 1791, states: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Though the Second Amendment protects the right to carry firearms, in the circumstances of the 18th century there was a genuine need to carry weapons to protect families and civilians from the violence of colonial war. In its historical context, the Second Amendment was an important law; in the current scenario, however, it can no longer be justified and needs to be repealed. In 1934, the National Firearms Act imposed an excise tax on the transfer and manufacture of certain firearms and mandated their registration. Under the Brady Handgun Violence Prevention Act, a new National Instant Criminal Background Check System, run by the FBI (Federal Bureau of Investigation), was introduced, restraining the illegal and unlicensed purchase of firearms. In 1994, the “Assault Weapons Ban” prohibited the manufacture, ownership, and importation of military-style semiautomatic assault weapons and large-capacity ammunition feeding devices for civilian use (Gettings & McNiff, 2013). Correlation between Guns and Gun Violence Many opponents of gun control deny any relation between guns and crime. One of the leading opponents, Zach Fulkerson, criticizes gun control in his recent article “Gun Control 2013: Guns and Crime is a False Correlation” by citing statistics from the research of Dr. John Lott. However, in 2005 the National Research Council, including experts like James Q. Wilson, Charles Wellford, Joel Waldfogel, Steven Levitt, and Joel Horowitz, published a wide-ranging report concluding that the data in Lott's research were not reliable and had numerous flaws. Although gun control laws may not be efficient enough to curb crime overall, they can effectively restrain crimes committed with guns. 
Guns and gun violence go hand in hand, and various facts and figures have convincingly proved it (McElwee, 2013). Figure 1: A Graph of the Rate of Suicide and the Proportion of Households Owning Firearms (McElwee, 2013) A public-health report in 2006 found that the number of firearm suicides in the US decreased in step with the number of households owning guns. This finding points to a well-established relationship between the availability of guns at home and the risk of suicide attempts. Also, research by Mark Dugan shows that as gun ownership in a society decreases, the homicide rate decreases with it (McElwee, 2013). Correlation between High Levels of Gun Violence and Weak Gun Laws Many factors significantly influence the rate of gun violence in any society. In the US, though federal law governs a few aspects of firearm regulation, such as licensing and monitoring of certain categories of firearm ownership, most gun policies are created by each state. As a result, different states have different approaches to gun control, gun sales, licensing, and gun-carrying laws. Recently, the Center for American Progress studied gun ownership laws in all 50 states and observed the impact of the adopted laws and policies on each state's crime rate (Gerney, Parsons, & Posner, 2013). Figure 2: A Graph of the Correlation between State Gun Laws and Gun-Violence Outcomes (Gerney, Parsons, & Posner, 2013) The graph shows that gun laws and gun-violence outcomes are directly related: the states with the strictest gun laws have the lowest gun-violence rates, while the states with the weakest gun laws show higher rates of gun violence. According to this report, states like South Dakota, Arizona, and Mississippi, where gun laws are weak, are among the three states with the highest rates of gun violence in the country. 
On the other hand, states like California, New Jersey, and Massachusetts, where gun laws are strict, are among the three states with the lowest rates of gun violence in the country. These results show that effective gun laws and restrictions on gun ownership can be useful measures for controlling gun violence (Gerney, Parsons, & Posner, 2013). Correlation between Gun Violence and Mental Health Mental health is one of the serious issues in modern America. According to the National Institute of Mental Health, 26.2% of adults in the US live with some kind of mental illness, ranging from depression to schizophrenia and PTSD. Such patients can pose a danger to the people around them. The reports of Mayors Against Illegal Guns reinforce this point, stating that in ten of the mass shootings, mental illness was identified in the shooter, and that in 40 of the incidents the shooter committed suicide after the killings. There are serious problems with screening and access to care for mental illness in the US; though some offenders receive a screening after a crime, by then the damage has already been done (Kee, 2013). Besides mental illness, violent video games and media content stimulate disturbing violent behavior among users. Research evidence collected over the past few decades consistently warns of the risk of increased violent behavior in youths and adults who are regularly exposed to violence through television, news, movies, and video games. Various psychological theories explain the long-term and short-term effects of media violence on an individual. 
According to these theories, continuous exposure to violence in entertainment and the mass media stimulates specific kinds of aggressive behavior in viewers; they become less sensitive to the suffering and pain of others and are more prone to psychological disorders, which in turn stimulate criminal behavior (Huesmann, 2007, pp. 7-11). Effect of Gun Control on Various Societies According to the reports, countries like Australia, England, Norway, India, and Canada have adopted strict gun control laws, which have markedly affected the rate of gun violence in those societies. The homicide rate per 100,000 people in these countries is far lower than in the US. Like the US, European societies have not been able to avoid mass shooting incidents entirely. Unlike the US, however, most of these countries have tightened their gun laws, and their rates of gun violence have dropped significantly as a result (Squires, 2013). The US (10.2 per 100,000 population) and South Africa (9.4 per 100,000 population) have extremely high rates of firearm-related deaths, whereas countries like Japan and the UK, where gun laws are strict, have extremely low rates of 0.06 per 100,000 and 0.25 per 100,000, respectively (Boseley, 2013). Future Gun Control Policies Various facts and statistics have shown that gun control leads to a reduction in gun violence. Gun violence is a major issue in the US that continuously threatens the stability and peace of society. Various countries have adopted strict gun control laws and have thereby managed to control gun-related crime in their regions. Though the US federal government has enacted gun control laws, these have various flaws and limitations that need to be addressed. In September 2013, Democrats Angela Giron and John Morse demonstrated their support for recently enacted gun-control laws that mandate background checks on private gun sales and restrict magazines to 15 rounds. 
Though President Obama's proposals to tighten gun-control laws were rejected by the Senate, they are gaining massive support from citizens. His future policy plan for gun control includes universal background checks for gun sales, limiting magazines to a 10-round capacity, strengthening law enforcement against gun violence and trafficking, imposing a strict ban on assault weapons, and ending the prohibition on gun violence research (Gettings & McNiff, 2013). Though many factors are associated with the gun-violence issue in the country, various scholarly and expert studies have demonstrated a direct relationship between high gun ownership and increased gun violence. The research shows that when easy access to guns is restricted by strict gun control policies, gun-related violence is significantly reduced. Also, the issue of increasing media violence cannot be ignored; in order to control violent behavior, it is necessary to take strong action against growing media violence. Gun control leads to a reduction in gun violence. Therefore, it is important for the US government to promote stronger gun control policies and strengthen law enforcement in order to reduce the country's massive gun-violence rate and create a peaceful society.
References
11 Facts About Guns. (2012). dosomething.org. Retrieved from https://www.dosomething.org/us/facts/11-facts-about-guns
Alpers, P., Rossetti, A., Wilson, M., & Royet, Q. (2013). United States: Gun facts, figures, and the law. Sydney School of Public Health, The University of Sydney. GunPolicy.org. Retrieved from https://www.gunpolicy.org/firearms/region/united-states
Boseley, S. (2013). High gun ownership makes countries less safe, US study finds. The Guardian. Retrieved from https://www.theguardian.com/world/2013/sep/18/gun-ownership-gun-deaths-study
El-Ghobashy, T., & Barrett, D. (2012). Dozens killed in Conn. school shooting. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/SB10001424127887323297104578179271453737596
Gerney, A., Parsons, C., & Posner, C. (2013). America under the gun. Center for American Progress. Retrieved from https://www.americanprogress.org/wp-content/uploads/2013/04/AmericaUnderTheGun-4.pdf
Gettings, J., & McNiff, C. (2013). Milestones in federal gun control legislation. Infoplease. Retrieved from https://www.infoplease.com/us/crime/milestones-federal-gun-control-legislation
Guns, Violence, and Gun Control. (2013). News-Basics.
Huesmann, R. L. (2007). The impact of electronic media violence: Scientific theory and research. Journal of Adolescent Health, 41, S6-S13. Retrieved from https://rcgd.isr.umich.edu/aggr/articles/Huesmann/2007.Huesmann.ImpactOfElectronicMediaViol.JofAdolesHealth.pdf
Kee, J. (2013). Mental health support, not gun control. The Daily Caller. Retrieved from https://dailycaller.com/2013/10/04/mental-health-support-not-gun-control/
McElwee, S. (2013). Gun control debate 2013: Guns and gun violence go hand in hand. PolicyMic.
Moya-Smith, S. (2013). Nearly two mass shootings per month since 2009, study finds. NBC News Investigation.
<urn:uuid:9f8fb00d-e01b-4ae3-8799-93c09d988134>
CC-MAIN-2021-43
https://essay-writer-help.com/free-essays/gun-control/the-need-for-stronger-gun-control-essay
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00350.warc.gz
en
0.94409
3,128
2.625
3
Authors: R. Prasanna, A. Sood, A. Suresh, S. Nayak, and B. Kaushik A wide variety of pigments, such as chlorophyll, carotenoids and phycobiliproteins, exhibiting colours ranging from green, yellow and brown to red, are present in algae. Increasing awareness of the harmful effects of synthetic dyes and society's inclination towards natural products, such as plant- or microbial-based colours in food and cosmetics, has led to the exploitation of microalgae as a source of natural colours. Algal pigments have great commercial value as natural colorants in the nutraceutical, cosmetics and pharmaceutical industries, besides their health benefits. capsules are now commonly prescribed health foods for improving vitality and longevity of human beings. This review describes the distribution and structure of these pigments in algae, with emphasis on specific techniques for their extraction and purification, along with different methods of biomass production and commercially feasible techniques documented in the literature. An overview of the industrial applications of these natural colouring agents in the diagnostics, food and cosmetics industries is also provided. Authors: G. Kemény, K. Penksza, Z. Nagy, et al. A neighbouring-quadrate transect study was conducted in order to examine the possible relationship between small-scale topography and coenotaxa occurrence and cover in subassociations of the Festucetum vaginatae Rapaics ex Soó 1929 sandy grassland plant community near Fülöpháza. These investigations served as a starting point for later soil seed bank studies. Cover of species was recorded in three transects of different exposition, starting on the tops of different dunes and ending in the depressions. Subassociation- and facies-forming species of the community occurred in all investigated transects. Parts of the transects could not be classified unambiguously into any of the coenotaxa mentioned in the literature. 
In these zones the characteristic species of the different subassociations and facies were occurring together. These patches are probably also the ones where changes in dominance relations and simultaneous spread of a species can relatively easily happen, as is the case with Cleistogenes serotina. Annual vegetation of the open sandy grassland, on the other hand, has occurred only in the transition zones between the subassociations or facies. In these transects moss-lichen synusia were usually present in the subassociation Festucetum vaginatae pennatae Kerner 1863. Authors: G. Amori, S. Gippoliti, L. Luiselli, and C. Battisti. Changes in taxa composition among different communities in a landscape or along an environmental gradient are defined as β-diversity. From a biogeographic point of view, it is interesting to analyse patterns of β-turnover across latitudinal bands, and to understand whether β-diversity is significantly associated with endemism at lower latitudes, as predicted by theory. We inspected these issues by using squirrels (Rodentia, Sciuridae) as a study case. Distribution data for each genus were obtained from the literature and mapped. The two hemispheres were subdivided into 23 latitudinal bands of equal area, and we calculated a β-turnover index between latitudinal bands with two formulae: Wilson and Shmida’s (1984) and Lennon et al.’s (2001) indices. We found that the peak in the number of Sciuridae genera significantly corresponded to the peak in β-turnover scores at the same latitudes (25–31°N) with Wilson and Shmida’s (1984) index, but not with Lennon et al.’s (2001) index.
We also found that the turnover between ground and tree squirrels corresponded to the grassland vegetation latitudinal bands (around 40°N), and that the beginning of the latitudinal bands characterized by tropical and subtropical forests coincides with the occurrence of tree and flying squirrels. An intense debate is underway on the different approaches to measuring the importance of neighbour interaction. Both the ecological meaning and the statistical suitability of one of the most popular indices have been seriously questioned, but no simpler and more practical alternative tools have been proposed up to now. This paper proposes a novel approach based on the use of new normalized indices which scale the effects of neighbours and environment to the maximum target-plant potential. Two indices related to environmental suitability and size-asymmetry are suggested as tools to stratify data into homogeneous subsets before analysis, and an index of normalized neighbour effect (Nn) is proposed to integrate the measurement of neighbour importance and intensity. When tested on literature data, the Nn index proves to be very highly correlated with the most currently used importance index. At the same time, it is moderately but significantly correlated with the intensity index. Yet, an accurate reanalysis of three published datasets proves that several detected trends are predictable on the basis of the inherent properties of the indices used. This is inextricably linked to the use of the same phytometers at different productivity levels. Thus, we suggest the opportunity to use groups of equivalent competitors, each one working at a different point of the gradient, but all in a comparable range of environmental suitability and potential size-asymmetry relative to neighbours. Once these equivalence conditions are defined, the normalized Nn metric is suited to measuring how the relative weight of neighbour impact changes along the productivity gradient.
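The two pairwise turnover formulae named in the squirrel abstract above can be sketched in Python. The presence/absence forms below are the ones commonly attributed to Wilson and Shmida (1984) and Lennon et al. (2001); check the original papers before relying on them, and note that the genus sets in the example are invented for illustration, not the study's data.

```python
def turnover_indices(band1, band2):
    """Pairwise beta-turnover between two latitudinal bands.

    band1, band2: sets of genus names recorded in each band.
    Returns (wilson_shmida, lennon) as floats.
    """
    a = len(band1 & band2)   # genera shared by both bands
    b = len(band1 - band2)   # genera found only in band 1
    c = len(band2 - band1)   # genera found only in band 2

    # Wilson & Shmida (1984): beta_T = (b + c) / (2a + b + c)
    wilson_shmida = (b + c) / (2 * a + b + c) if (a + b + c) else 0.0

    # Lennon et al. (2001): beta_sim = min(b, c) / (min(b, c) + a)
    m = min(b, c)
    lennon = m / (m + a) if (m + a) else 0.0

    return wilson_shmida, lennon

# Example with hypothetical genus lists (placeholders, not real data):
north = {"Sciurus", "Tamias", "Marmota", "Glaucomys"}
south = {"Sciurus", "Callosciurus", "Ratufa"}
ws, ln = turnover_indices(north, south)
```

Because Lennon et al.'s form uses min(b, c), it discounts pure richness differences between the two bands, which is one way the two indices can disagree, as they did in the study.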
A preliminary checklist of Tamaricaceae in the Indian subcontinent has been prepared on the basis of primary observations of different taxa belonging to this family in wild habitats and on secondary observations based on examining herbarium specimens and taxonomic literature. On the Indian subcontinent (comprising Bangladesh, Bhutan, Myanmar, Nepal, Pakistan, Sri Lanka and India), the family Tamaricaceae is poorly represented (20% of all species). The present paper gives a brief review of distribution, endemism, possible fossil ancestry, economic potential, survival threats to existing taxa, etc. The present status of endemism of Tamaricaceae in the Indian subcontinent (22.5% in 2002–2007) has been compared with the data of previous investigations (50% in 1939–1940) done in the twentieth century. The decreasing rate of endemism indicates either a decreasing number of endemic taxa or an increasing span of distribution of pan-endemic taxa belonging to this family. For a better understanding of the functional aspects of species dynamics, the rate of endemism (in percent) of a particular group of plants has been used as a key index here. Authors: A. Wiater, J. Szczodrak, and M. Pleszczyńska. Conidia of Trichoderma harzianum F-340, an active producer of fungal mutanase, were mutagenized with physical and chemical mutagens used separately or in combination. After mutagenesis, the drop in conidia viability ranged from 0.004% to 71%. Among the applied mutagens, nitrosoguanidine gave the highest frequency of cultures with enhanced mutanase activity (98%). In total, 400 clones were isolated and preliminarily evaluated for mutanase activity in flask microcultures. The eight most productive mutants were then quantified for mutanase production in shake flask cultures. The obtained results fully confirmed a great propensity of all the tested mutants to synthesize mutanase, the activity of which increased by 59 to 107% in relation to the parental T. harzianum culture.
The best mutanase-overproducing mutant (T. harzianum F-340-48), obtained with nitrosoguanidine, produced an enzyme activity of 1.36 U/ml (4.5 U/mg protein) after 4 days of incubation in shake flask culture. This productivity was almost twice as high as that achieved by the initial strain F-340 and, at present, is the best reported in the literature. The potential application of mutanase in dentistry is also discussed. This paper describes aspects of the leaf anatomy of two Salvia taxa, Salvia nemorosa L. subsp. tesquicola (Klokov et Pobed.) Soó and Salvia nutans L., as well as their hybrid, Salvia ×dobrogensis Negrean, aiming to highlight common anatomical characteristics and the superiority of the hybrid compared with its parental taxa, aspects little addressed for these plants in the literature. Differences were found both in the structure of the petiole and of the blade. For the petiole, differences arise concerning the degree of development of the external (collenchyma and chlorenchyma) and inner cortex. The vascular system in all considered taxa comprises a great number of vascular bundles, with different levels of development of the conductive tissues. The mesophyll is heterogeneous, bifacial in S. nemorosa subsp. tesquicola and the hybrid, and equifacial in S. nutans. The presence and anatomy of numerous glandular and non-glandular trichomes (hairs), different in structure, shape and size, were investigated and evaluated. Stomata of the diacytic type are present on both the upper and lower epidermis of the blade, giving it an amphistomatic character. The vascular system of the midrib of the studied Salvia taxa is well developed, in particular that of the hybrid.
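As a quick sanity check on the mutanase figures quoted above: dividing the volumetric activity (U/ml) by the specific activity (U/mg protein) recovers the protein concentration of the culture fluid. The parental-strain estimate below is a rough inference from "almost twice as high", not a number from the paper.

```python
# Units: volumetric activity in U/ml, specific activity in U/mg protein.
volumetric_u_per_ml = 1.36   # mutant F-340-48 after 4 days
specific_u_per_mg = 4.5      # specific activity, U/mg protein

# Protein concentration follows from the ratio of the two quantities:
protein_mg_per_ml = volumetric_u_per_ml / specific_u_per_mg  # ~0.30 mg/ml

# "Almost twice as high" as the parent implies the initial strain F-340
# produced on the order of half the mutant's activity (rough inference):
approx_parent_u_per_ml = volumetric_u_per_ml / 2
```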
The analysis of petiole and blade anatomy of the two Salvia taxa and their hybrid reveals common and specific features, from which we conclude that although the hybrid leaf is anatomically more developed than those of its parental taxa, its petiole has many features similar to that of Salvia nutans and its blade is almost similar to that of Salvia nemorosa. The importance of accurate species databases is debated in the recent literature on biodiversity assessment, considering that limited resources for conservation could be better allocated to assessments based on cost-effective biodiversity features. I aimed to provide an understanding of sampling bias and to give practical advice on minimizing bias either before or after data collection. I used 10×10 km UTM grid data for 121 land snail species to account for geographic and taxonomic sampling bias in Hungary. Sampling intensity corrected for species richness varied significantly among regions, although regions were not good predictors of sampling intensity. Residuals were significantly autocorrelated within a 15 km distance, indicating small-scale heterogeneity in sampling intensity compared to species richness. Sampling coverage and intensity were higher close to human settlements, and sampling intensity was higher within protected areas than outside. Commonness of species was positively associated with sampling intensity, while some rare species were over-represented in the records. Sampling intensity of microsnails (<3 mm) was significantly lower than that of the more detectable large species (>15 mm). Systematic effects of the collecting methods used in malacological research may be responsible for these differences. Understanding the causes of sampling bias may help to reduce its effects in ecological, biogeographical and conservation biological applications, and help to guide future research. Authors: R. Okada, H. Ikeno, T. Kimura, Mizue Ohashi, H. Aonuma, and E.
Ito. A honeybee informs her nestmates of the location of a flower by doing a waggle dance. The waggle dance encodes both the direction of and the distance to the flower from the hive. To reveal how the waggle dance benefits the colony, we created a Markov model of bee foraging behavior and performed simulation experiments, incorporating biological parameters that we obtained from our own observations of real bees as well as from the literature. When two feeders were each placed 400 m away from the hive in different directions, a virtual colony in which honeybees danced and correctly transferred information (a normal, real bee colony) made significantly more successful visits to the feeders than a colony with inaccurate information transfer. However, when five feeders were each located 400 m from the hive, the inaccurate-information-transfer colony performed better than the normal colony. These results suggest that the benefit of communicating accurate information by dancing depends on the number of feeders. Furthermore, because non-dancing colonies always made significantly fewer visits than those two colonies, we concluded that dancing behavior is beneficial for the hive's ability to visit food sources.
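A minimal Markov-style foraging simulation in the spirit of the abstract above might look like the following. The states, transition probabilities, and success rates are invented placeholders; the study's real parameters came from observations of live bees and from the literature.

```python
import random

# States of a single simulated forager (assumed, not from the paper).
STATES = ("in_hive", "following_dance", "foraging")

def step(state, feeders, rng, p_dance=0.3, p_success_informed=0.8,
         p_success_search=0.16):
    """One Markov transition; returns (next_state, visited_feeder_or_None)."""
    if state == "in_hive":
        # A bee in the hive either follows a dance or leaves to search.
        nxt = "following_dance" if rng.random() < p_dance else "foraging"
        return nxt, None
    if state == "following_dance":
        # Dance information points the bee at one advertised feeder.
        ok = rng.random() < p_success_informed
        return "in_hive", (rng.choice(feeders) if ok else None)
    # Uninformed random search: a lower success probability stands in for
    # the cost of finding a feeder without directional information.
    ok = rng.random() < p_success_search
    return "in_hive", (rng.choice(feeders) if ok else None)

def simulate(n_bees=100, n_steps=200, n_feeders=2, seed=1):
    """Count successful feeder visits over a fixed number of steps."""
    rng = random.Random(seed)
    feeders = [f"feeder_{i}" for i in range(n_feeders)]
    states = ["in_hive"] * n_bees
    visits = 0
    for _ in range(n_steps):
        for i, s in enumerate(states):
            states[i], hit = step(s, feeders, rng)
            visits += hit is not None
    return visits

total_visits = simulate(n_feeders=2, seed=1)
```

Varying `p_success_informed` and `n_feeders` in such a sketch is one way to probe the abstract's claim that the payoff of accurate dance information depends on how many feeders are available.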
<urn:uuid:33bd5bbd-c8c6-406d-8699-91fedaf86830>
CC-MAIN-2021-43
https://akjournals.com/search?access=all&page=4&pageSize=10&q=%22literature%22&sort=relevance&t=Biology
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00630.warc.gz
en
0.942418
2,616
2.671875
3
Over the years, it has become fashionable for Western media to "bash" Indonesia, mainly because of its natural disasters and terrorist attacks, which have damaged the image and general perception of the country. However, little is known about the great opportunities and potential the country has to offer, which are overshadowed by the occasional disasters. With more than 17,000 islands, the Republic of Indonesia, located in Southeast Asia, is the world's largest archipelago and is ranked as the fourth most populous country in the world, with a population of more than 237 million people. The country shares land borders with Papua New Guinea, East Timor and Malaysia. Neighboring countries include Singapore, the Philippines, Australia, and the Indian territory of the Andaman and Nicobar Islands. Indonesia is also the most populous Muslim-majority nation; however, no reference to Islam is made in the Indonesian constitution and most Indonesian Muslims consider themselves moderate. Indonesia is a republic, with an elected legislature and President. The current President, Susilo Bambang Yudhoyono (called SBY for short by the local population), has recently been re-elected to serve the country for another five-year term. The capital city of Indonesia, where most economic activity takes place, is Jakarta, located on the island of Java. The Indonesian archipelago has a great history as an important trade region since the seventh century, when the Srivijaya Kingdom started having economic exchanges with China and India. After three and a half centuries of Dutch colonialism, Indonesia secured its independence after World War II. Since then, Indonesia has had to face challenges posed by natural disasters, corruption, separatism, a democratization process, and periods of rapid economic change.
Nowadays, Indonesia is Southeast Asia's biggest economy; it is experiencing strong economic growth of around 6% annually and offers many business opportunities for foreign investors. Indonesia has a market-based economy in which the government plays a significant role. There are 164 state-owned enterprises, and the government administers prices on several basic goods including fuel, rice, and electricity. Jakarta is the country's largest commercial center. The services sector is the largest and represents almost half of GDP, followed by industry at 40.7% and agriculture at 14.0%. However, agriculture employs more people than the other sectors, accounting for 44.3% of the 95-million-strong workforce, followed by the services sector (36.9%) and industry (18.8%). Major industries include petroleum and natural gas, textiles and mining. Major agricultural products include palm oil, rice, tea, coffee, spices and rubber. Indonesia's main export market is Japan, followed by the United States, China and Singapore. The major suppliers of imported goods are Japan, China and Singapore. The country has extensive natural resources, including crude oil, natural gas, tin, copper, and gold. Indonesia's major imports include machinery and equipment, chemicals, fuels, and foodstuffs. Agriculture, livestock and fisheries: According to Mr. Achmad Mangga Barani, Director General for Estate Crops Production of the Ministry of Agriculture, Indonesia has some of the best land in the world for plantation crops such as palm oil and rubber. Since 2007, Indonesia has been the world's leading producer of palm oil, while other crops like rubber, sugar and cotton have also benefitted from rising commodity prices and increased international interest. "Indonesia is a country that is profitable to invest in, in the estate crops sector. In the future, estate crops will offer great opportunities, especially for palm oil," he mentioned in the interview.
Agriculture supports the livelihood of millions of Indonesians. Three out of five Indonesians live in rural areas, and farming is their main occupation. While Indonesian agriculture has performed well historically and contributed to significant growth, with increased employment and reduced poverty, productivity gains for most crops have now slowed significantly and the majority of farmers today work less than half a hectare. Revitalizing the agricultural sector is necessary to underpin renewed and robust growth of the economy and is a key component of the Government's rural development strategy. Energy & Mining: The energy sector is a major source of foreign exchange and one of the most important sectors in Indonesia, generating nearly 30% of the government's total revenues. The country has the largest natural gas reserves in the Asia-Pacific region and produces about 1.38 million barrels of oil and 190.2 billion cubic feet of natural gas per day. Indonesia was the only Asian member of the Organization of Petroleum Exporting Countries (OPEC) outside of the Middle East until 2008 and is currently a net oil importer. The state owns all petroleum and mineral rights. Foreign firms participate through production-sharing and work contracts. Oil and gas contractors are required to finance all exploration, production and development costs in their contract areas, and they are entitled to recover operating, exploration, and development costs out of the oil and gas produced. Indonesia's fuel production has declined significantly over the years because of the aging of the oil fields and the lack of investment in new equipment. As a result, companies are looking for opportunities in alternative energy resources, such as geothermal. According to the Director General for Oil and Gas, Ms. Evita Legowo, Indonesia has about 40% of the world's geothermal energy resources, which offer great potential.
Indonesia has some of the world's largest deposits of coal, copper, tin, nickel and gold, and wants to earn more from the sector, especially because strong demand from China and India is pushing prices to record levels. In the last ten years there has been very little foreign investment in the mining sector, especially for hard-rock mining. All the new projects undertaken by foreign mining companies in the last ten years have been under the work contract legislation issued before 1998. The Indonesian mining sector is in full production and is rapidly expanding. For this reason, there are significant investment opportunities in the supply of mining equipment and technology. The most well-known tourist attraction of Indonesia is the island of Bali, which attracts thousands of tourists yearly from Australia, Asia, Europe, the US, the Middle East, etc. However, only a few people are aware of the as yet untapped and unexplored sights. "Indonesia has more than 17,000 islands and a great variety of cultures and languages. It is our job to improve the accessibility of these destinations and to create awareness," says Mr. Firmansyah, Director General of Destination Development. About 5 million foreign tourists have visited Indonesia annually since 2000. Tourism in Indonesia is currently overseen by the Ministry of Culture and Tourism. International tourist campaigns have been focusing largely on the diversity of the country, such as tropical destinations with diving, cultural and eco-tourism possibilities. Cultural tourism is a growing segment. Yogyakarta, Minangkabau and Toraja, and the Prambanan and Borobudur temples are popular destinations for cultural tourism, as are many Hindu festivities in Bali. The fact that Indonesia is a booming country has also been noticed by world-class universities. Many universities from overseas come to Indonesia to look for possible partnerships, such as exchange programs, joint studies and other collaborations.
Indonesia has almost 3,000 higher education institutions, of which 83 are public institutions controlled by the government. Indonesia has some very high-quality universities, says Dr. Fasli Jalal, the Director General of Higher Education. Some of the best universities include the Bandung Institute of Technology, the Gadjah Mada University and the University of Indonesia. Even though Indonesia has a large Muslim population, there are several universities based on different religions, such as the Parahyangan Catholic University and the Maranatha Christian University, both located in Bandung, West Java. Students and lecturers are fully accepted and welcomed at all universities, regardless of their religion. Universities from all over the world are not the only ones to recognize the opportunities the Indonesian education sector has to offer. When US Secretary of State Mrs. Hillary Clinton visited Indonesia in February 2009, she emphasized the need for more cooperation between American and Indonesian universities by stimulating partnerships and increasing Fulbright scholarships. Just as in the education sector, more and more hospitals and clinics are looking for partnerships and collaborations overseas. The government is making great efforts to make health care more accessible in remote areas. "Of course we want new partnerships that benefit Indonesia," stated Endang Rahayu Sedyaningsih, the new Minister of Health. Foreign investment is fairly common in major Indonesian cities and regions like Jakarta, Surabaya, Java, and Bali. Jakarta has been the center of foreign investment in hospitals, as it is the center of economic activity in Indonesia. At least five hospitals are owned or managed by foreign firms and individuals within Jakarta, Indonesia's capital city. Surabaya and Medan have also attracted foreign investors, as these cities have huge upper-class market potential.
Bali and West Nusa Tenggara are also of interest to investors, as those are areas where tourism plays a great role in the economy. USA–Indonesia Relations: The relationship between the US and Indonesia is often considered an example of how the West can have good relations with the Muslim world. Both governments are very keen on improving relations, as shown by President Obama's promise to visit Indonesia in the first half of 2010. The United States has important economic and commercial interests in Indonesia. Relations between Indonesia and the U.S. are positive and have advanced since the election of President Yudhoyono in October 2004. The U.S. played a role in Indonesian independence in the late 1940s and appreciated Indonesia's role as an anti-communist bulwark during the Cold War. Cooperative relations are maintained today, although no formal security treaties bind the two countries. The United States and Indonesia share the common goals of combating terrorism and maintaining peace and stability in the region. Cooperation between the U.S. and Indonesia on counter-terrorism has increased steadily since 2002, as terrorist attacks in Bali (October 2002 and October 2005), Jakarta (August 2003 and September 2004) and other regional locations demonstrated the presence of terrorist organizations, principally Jemaah Islamiyah, in Indonesia. The United States has welcomed Indonesia's contributions to regional security, especially its leading role in helping restore democracy in Cambodia and in mediating territorial disputes in the South China Sea. What is new in the Asian policy of U.S. President Barack Obama is the recognition of Indonesia as a fresh player in the U.S. global strategy. This democratic nation with the world's largest Muslim population will play a crucial role in the continued stability and prosperity of the ASEAN region.
<urn:uuid:645881e6-9469-4a15-bfca-8a95d5863755>
CC-MAIN-2021-43
https://www.winne.com/id/publications/indonesia-asia-s-best-kept-secret
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585280.84/warc/CC-MAIN-20211019171139-20211019201139-00470.warc.gz
en
0.955241
2,222
2.609375
3
Suffering from Cramps, Gas, Diarrhea, or Constipation? You just had a stressful day at work. You need to relax, so you go out with your friends for dinner. You eat too much of "the wrong thing" and you end up rushing to the restroom. While your friends are having fun, you sit on the toilet clutching your stomach in pain, having terrible diarrhea. How embarrassing... Living with Irritable Bowel Syndrome (IBS) is no fun! Read this page in detail and discover how to get rid of IBS symptoms - THE NATURAL WAY! What is Irritable Bowel Syndrome (IBS)? Irritable Bowel Syndrome (IBS) is a functional disorder usually associated with stress or anxiety. IBS develops because the nerves and muscles in your intestines become extra sensitive and contract faster or slower than normal. This causes stomach pain, cramping, gas, sudden bouts of diarrhea, or constipation. There are three types of IBS: - IBS-D - Dominant symptom is diarrhea - IBS-C - Dominant symptom is constipation - IBS-A - Symptoms alternate between diarrhea and constipation Roughly 15% of the U.S. population has IBS at some point in their lives. It often starts in adolescence or young adulthood. It affects almost twice as many women as men. IBS is usually associated with stress. IBS can be painful, but it does not damage the bowel or cause cancer. However, long-term IBS can cause depression or hemorrhoids. What Causes Irritable Bowel Syndrome? Doctors think that IBS might be caused by a neurological problem. Signals are sent between the brain and the intestines. If you are under stress, your brain fires either too many or too few signals to the intestinal muscles. This causes the intestinal muscles to move food through the intestines too quickly or too slowly. That results in painful cramps and diarrhea, or bloating and constipation. IBS is strongly associated with anxiety, stress, and sleeping disorders.
IBS attacks are triggered by one or more factors: - Stress and pressure at school, work, or home - Stressful events or major changes in your life - Anxiety, depression, or panic disorder - Large meals - Consumption of certain trigger foods (this varies from person to person) - Medications such as antibiotics - Alcohol and caffeinated drinks - Hormonal changes in women during their periods Can IBS be Cured? The first thing people usually do is get over-the-counter medications for diarrhea or constipation. These medications provide only short-term relief; they do not solve the underlying problem. Since IBS is strongly related to stress and anxiety, a doctor may prescribe an anti-anxiety drug. These drugs may relieve IBS, but there's a risk of becoming dependent on them. Also, these drugs contain harsh synthetic chemicals that may have harmful side effects if taken for long periods. Natural IBS Symptom Relief The key ingredient in Bavolex is a highly concentrated ginger extract. Recent studies confirmed what our grandparents already knew hundreds of years ago: ginger has a calming effect on the bowels. Keep reading and discover more wonderful ingredients in Bavolex.* Bavolex IBS Relief Formula is a dietary supplement formulated with herbs, plants, and enzymes to help reduce IBS symptoms, support healthy digestion and reduce feelings of stress.* - Stop painful cramps and gas * - Stop diarrhea and constipation * - Improve digestion * - Normalize contractions of intestines * - Calm down the nervous system * - Reduce feelings of stress and anxiety * Read about important product limitations before ordering. Heather from Wisconsin "My name is Heather and Bavolex is a miracle cure for me. Three months ago I had extreme abdominal pain every day. My doctor wanted to prescribe antidepressants and steroids. I am not one for prescription drugs. I did not want to take either, so I did my research online and found Bavolex.
I ordered it right away and within 2 days I felt great, and I still do. Bavolex is a miracle for me. I will be taking it for the rest of my life. I would recommend Bavolex to anyone who has any intestinal problems." DISCLAIMER: Repeat customer received a free bottle in exchange for an honest review. Important average results information. What Can You Realistically Expect? When taken as directed and following the advice in our eBook, you should notice an improvement within several days. A small percentage of our customers do not respond to our product; for this case we offer a 60-day full money-back guarantee. Our guarantee is simple: if you don't see an improvement, we do not want your money! Bavolex Side Effects Product safety is our #1 priority. Bavolex™ contains only natural extracts from herbs and plants that are generally considered safe. Our customers have been using Bavolex every day since 2009, and so far no adverse events (side effects) have been reported. As is the case with other products, you should not use Bavolex if you are pregnant or nursing, because the effects on the fetus have not been evaluated. You should not take Bavolex if you are taking antidepressants or blood-thinning medication. "Bavolex IBS Relief Formula is a dietary supplement formulated with natural extracts and enzymes to help reduce IBS symptoms.*" Order Today and Receive the "20 Proven Tips for Treating IBS" e-book, a $19.99 value, yours at no cost. In this eBook, you'll learn 20 key tips to stop IBS attacks forever. These tips cover diet, stress reduction, and ways to prevent an IBS attack. This no-nonsense eBook is a lesson in simple lifestyle changes. Valuable advice on - Stop symptoms with diet and exercise - Get rid of stress and anxiety - Discover your IBS triggers - Prepare home remedies for IBS - Know which foods to eat and avoid - Small lifestyle changes that make a big difference A must-read for those who want to put an end to this painful inconvenience today.
Order now and receive this free eBook INSTANTLY by email, so that you can start applying the advice and START FEELING BETTER TODAY! Read about important product limitations before ordering. Ingredients in Bavolex IBS Relief Formula Bavolex combines plant extracts and enzymes into a unique proprietary blend to address IBS symptoms in several ways:* - Reducing Stress, Anxiety, and Nervous Stomach* - lemon balm, 5-HTP, and chamomile - Regulating Gut Functions, Bloating, and Gas* - ginger, peppermint, and caraway seed - Improving Digestion with Enzymes* - papain, bromelain, and pancreatin. LEMON BALM is an herb from the mint family, often taken after a meal. It helps reduce indigestion and gas. Lemon balm is used in Europe as a mild sedative and for nervous tension and insomnia. The German Commission E recognizes lemon balm for treating nervous disturbances of sleep and functional gastrointestinal disorders. Studies suggest that lemon balm extract protects the gastrointestinal tract against ulcers. 5-HTP is a natural amino acid extracted from the seeds of the African plant Griffonia simplicifolia. The body converts 5-HTP into the neurotransmitter serotonin. Serotonin is responsible for regulating the speed with which food travels through the intestines and is also known for regulating mood. 5-HTP helps reduce anxiety. It's also used for premenstrual syndrome (PMS). Three clinical studies suggest that 5-HTP helps reduce depression.* CHAMOMILE has a calming and regulatory effect on the digestive system. It is helpful for nervous digestive upsets and diarrhea. It helps relieve stress and anxiety. Research suggests that chamomile regulates contractions of the smooth muscles of the small intestine, which regulates the speed of digestion. GINGER is an herb from Asia that has been used for medicinal purposes for more than 2,500 years. It contains compounds called oleoresins that have anti-inflammatory properties. They are known to have a positive effect on the muscles in the digestive system.
Ginger is helpful in stimulating digestion and reducing painful cramps. The British Journal of Anaesthesia reviewed six clinical trials and concluded that ginger is beneficial in preventing nausea and vomiting. Ginger also helps stop heartburn.* PEPPERMINT helps relieve abdominal pain, diarrhea, and urgency from IBS. Menthol and methyl salicylate are the main active ingredients of peppermint and have a calming effect on the stomach and intestinal tract. Peppermint reduces gas production and intestinal cramping, and soothes irritation. Peppermint has been reported to relieve symptoms of IBS in two controlled trials. Three double-blind clinical trials further confirmed peppermint is beneficial for IBS.* CARAWAY is commonly used as the seed in rye breads. The combination of peppermint and caraway led to a significant reduction in IBS symptoms in two double-blind trials. FENNEL relaxes the smooth muscle lining of the digestive tract and helps digestion. It relieves indigestion, gas, dyspepsia, and colic, as well as reducing intestinal spasms. Fennel seed oil has been shown to reduce intestinal spasms and increase motility of the small intestine. Clinical trials showed fennel to be beneficial for reducing abdominal pain in infants. PAPAIN is found in the papaya plant. It aids digestion by converting proteins into amino acids. It also helps against heartburn and chronic diarrhea. BROMELAIN is found in pineapples. It breaks down proteins. Bromelain helps promote and maintain proper digestion of meat and may relieve symptoms of stomach upset or heartburn. Bromelain helps counteract inflammation of the intestinal lining. PANCREATIN is a digestive enzyme complex that is made up of the enzymes trypsin, amylase, and lipase. Trypsin aids in the digestion of proteins. Amylase helps digest carbohydrates. Lipase helps digest fats. The pancreatin enzyme complex is very helpful in the proper digestion of many foods. 100% Satisfaction Guarantee We stand behind our products!
All Bavolex products are manufactured in FDA-registered facilities that exceed the highest standards for quality control, safety, and ingredient purity.

Money Back Guarantee

Try our product for 60 days. If you don't see a significant improvement, simply send us your unused portion and we'll promptly return EVERY PENNY, including original shipping costs.
- 60 Day Money Back Guarantee.
- We refund both opened and unopened bottles.
- Your order ships the same or next business day.
- We never sell your personal information to anyone.

Order Bavolex with Confidence with a 60 Day Money Back Guarantee

Choose Your Package
<urn:uuid:752eb83b-4862-4238-9b8d-86394fddc7bc>
CC-MAIN-2021-43
http://bavolex.com/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00470.warc.gz
en
0.920101
2,334
2.546875
3
The Bible is the sacred text of all Christians. Although there are differences between the bibles of some Christian denominations, essentially all Bibles are divided into two parts – the Old Testament and the New Testament. The Old Testament gives the history of the Israelites, God’s chosen people. It is filled with myths, stories of love and hate, peace and war, adultery, murder, victory and loss. It also includes stories of Prophets, messengers of God, who came to remind the people of how God expected them to act, but more importantly to foretell the coming of a Messiah who would be a savior to the people. After years of compilation these stories and messages of prophets now make up the Old Testament. The New Testament is the story of the growth of Christianity, and the coming of the long awaited Messiah. This covers the time shortly before this coming, the birth of the Messiah, Jesus of Nazareth, as well as his life and the lessons he taught during his time on earth. The New Testament also recounts his death, resurrection, and ascension into heaven. The rest of the New Testament tells how his followers dealt with his absence, how they carried on his work and spread his message, and waited for the promised “Second Coming” of the Lord. The New Testament begins with four books called Gospels, which means “Good News”. They are (in order as in the Bible): Matthew, Mark, Luke, and John. Although all four gospels recount events of Jesus’ life, the Gospel according to Mark is unique among these four. It is the shortest of all four gospels; however, one of its most important features is that (according to the Two-Source Hypothesis) it is thought that the gospels of Matthew and Luke took much of their information from Mark (as well as another hypothetical source, “Q”). There are large sections from these two gospels that are word-for-word exactly the same as sections in Mark.
This is significant because Mark was believed to be written first; therefore, it is considered to be a “cornerstone” upon which the other gospels were built. Although the book does not officially have an assigned author, and it is officially labeled the “Gospel According to Mark”, the author is traditionally thought to be John Mark, a follower of Jesus some time after Jesus’ death and resurrection (most likely between A.D. 55 and 70, since this is the date that the book is thought to be written). John Mark traveled with Jesus’ apostle Peter as well as worked by his side in Rome. It is because of John Mark’s relationship with the apostle Peter that the gospel of Mark is categorized as having apostolic origins, meaning that it was written by either an apostle of Jesus or someone who had a close connection with an apostle. John Mark is also mentioned in some of Paul’s epistles, because he traveled with Paul and Barnabas (who was his cousin). Because of his close relationship with these influential figures in Christian history, particularly Peter the apostle, it is no wonder that the gospel of Mark is a narrative, and even a lot like a biography of Jesus, recounting very detailed events of his life and exact lessons that he taught. While working with Peter he must have been privy to all kinds of stories of the man to whom he was so devoted and for whom he and all other Christians sacrificed so much. He, of course, also heard many stories of Jesus’ teachings, which he and other apostles, disciples, and missionaries were teaching others. One of those stories of Jesus’ message, recounted in the Gospel According to Mark 12:28-34, is commonly referred to as “The First and Greatest Commandment”. In this narrative gospel of Mark, Jesus is preaching when a scribe ventured to ask him which commandment was the first, or in other words, which one was most important to follow.
His response to “love the Lord with all your heart, with all your soul, with all your mind, and with all your strength” (Mk. 12:30) and to “love your neighbor as yourself” (Mk. 12:31) is what this passage centers around. At hearing Jesus’ response, the scribe who initially questioned him responded by stating that he knew these things were important above all other things, particularly “burnt offerings and sacrifices required by the law” (Mk. 12:33). The importance that Jesus sees in understanding and abiding by these commandments is emphasized by the author, John Mark, by writing that when Jesus saw that the scribe understood, he told him, “You are not far from the kingdom of God”. The location of this passage in the Bible is not surprising because it is surrounded by passages (particularly in chapters 11 and 12) which are similar in that Jesus’ authority to teach and his beliefs are being challenged by the authority figures in the Jewish faith, particularly those who run the Jewish Temple. It is important to notice that Jesus answered by stating not one, but two commandments that had been given to Moses and the Israelites many years ago – found in Deuteronomy 6:5, as well as in Leviticus 19:18. These passages are not only in what Christians refer to as the Old Testament, but also in the Torah (the sacred scripture of the Jewish faith), which Jesus would have been very familiar with as a practicing Jew. Equally important is the fact that these passages are based on the core idea of love. As a result of these two details which cannot be overlooked, I think that the message of “The First and Greatest Commandment” is to establish Jesus as the new lawgiver with the message to love God and to love others. We must know and understand these commandments, as well as apply them to our lives, and it is when we are able to do these things that we may fully enter into the kingdom of God.
When reflecting on Jesus’ answer to the scribe, one must notice that Jesus states two passages from the Old Testament. This may seem insignificant; however, it is highly significant. Also notice that in the surrounding passages, as well as in the gospels of Matthew and Luke, the books surrounding the Gospel of Mark, Jesus’ authority is constantly being questioned and he is being put to the test by scribes and Pharisees. The Jewish leaders were uncomfortable with Jesus’ practices because he did not follow the Mosaic Law, or Covenant (the set of rules and regulations that strictly “guided the Jews’ religious and community life and acted as their ‘constitution’”, which also includes the Ten Commandments), as strictly as they believed he should. Jesus healed the sick on the Sabbath and ate with sinners and lepers, things that the scribes and Pharisees would never dream of doing. In quoting the sacred texts of the Jews, it was established that Jesus was a devoted and practicing Jew, something the scribes may have been confused by, because with his teachings Jesus made a statement to the Jews that he was the new covenant, the new lawgiver. The thought of something with more authority than the Mosaic Law of the Old Testament was highly disturbing to the Jewish leaders because they neither knew, nor wanted, another way. The Old Testament can also be referred to as the “Law of Fear and Servitude” because it focuses primarily on rules, laws, and punishments. Jesus came to preach a very different message – one of hope and love, which he summed up in two sentences. That is why the New Testament is referred to as the New Law, or the “Law of Love and Liberty”. As Sullivan explains, this is why St. Thomas Aquinas considered the New Law to be infused, to come from within. The Old Testament was about outward appearance, while the New Testament was about individual intimate relationships.
Although Jesus certainly taught the importance of obedience to God, he taught that it is better to obey the Lord out of love, not fear of punishment. As a result of that love for the Lord, we are inclined from within ourselves to follow the law of God because we love him (thus, the title “Law of Liberty”). And with that same love, it is only logical that we would treat our neighbors with that same love, as we would want to be treated. Because the New Testament is a reflection on Jesus and his teachings, this passage in Mark is a perfect model of Jesus’ different form of teaching, and how he established himself as the new lawgiver, or new covenant to the people, with his message to love God and to love others. As previously mentioned, the surrounding Gospels of Matthew and Luke also include this same passage; however, they differ greatly, as Agnes Norfleet notes in Between Text and Sermon. In the other gospels, the environment in which Jesus is questioned is very tense, accusatory, and unreceptive. The individuals questioning (more so challenging) Jesus are not questioning in order to receive answers; they are searching for a way to catch Jesus saying something that could be taken in an offensive way to the Jewish faith and tradition, in hopes of convicting him on a charge of blasphemy or another related crime. After hearing Jesus’ response, his questioners are merely more aggravated and set on his conviction than before. The same passage, but in Mark, is a great contrast! The environment in Mark is pleasant and accepting. More importantly, the scribe who questions Jesus reflects on the answer he is given and finds that he agrees. When he states he thinks these commandments must be “more important than the burnt offerings and sacrifices required by the law”, he expresses understanding because he is able to apply Jesus’ message to his own life.
Unlike the Jewish leaders in the surrounding books and passages, he is able to see the big picture and look past the “Law of Fear and Punishment” and see the message of “Love and Liberty” that Jesus preaches. This is exactly what Jesus wants all of his followers to do! He wants his followers to take his message and not merely accept it, but to judge for themselves and, if in accord, to apply it to their lives! The importance Jesus places on this, as well as the desire he has for us to understand and act on his love, is sealed when he tells the wise scribe, “You are not far from the kingdom of God”. The Gospel of Mark 12:28-34 can be interpreted and debated hundreds of ways, but I believe that the theological message of the passage was to establish Jesus as the new lawgiver, as well as to preach his message: to love God and to love others. Once able to do this, his followers would be able to realize that they could live out his message by understanding and applying it to their everyday lives. When his followers could fully live out this “First and Greatest Commandment” they, like the scribe, would be in a place in which they longed to be, and Jesus longs for all of humanity to be, and that is “not far from the kingdom of God” (Mark 12:34). Cory, Catherine A. and David Landry. The Christian Theological Tradition. 2nd ed. New Jersey: Pearson Education, Inc., 2003. The International Student Bible for Catholics: New American Bible. Nashville: Thomas Nelson, Inc., 1987. Norfleet, Agnes W. “Mark 12:28-34.” Interpretation: Between Text and Sermon 51, no. 4 (October 1997): 403-406. ATLA Religion Database with ATLASerials, EBSCOhost (accessed March 8, 2008). Sullivan, S.J., John J. The Commandment of Love: The First and Greatest of the Commandments Explained According to the Teachings of St. Thomas Aquinas. First ed. New York: Vantage Press, 1956.
<urn:uuid:2260d37b-3154-4717-b5da-56f8746816e5>
CC-MAIN-2021-43
https://www.freeonlineresearchpapers.com/greatest-commandment/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00470.warc.gz
en
0.980286
2,503
3.609375
4
Cavities most often do not hurt. However, removing a cavity and placing a dental filling may lead to some sensitivity, according to your dentist in Lincoln, NE. This is associated with an inflammatory process that is part of healing, which is entirely normal. The deeper the original cavity, the higher the chance of developing postoperative sensitivity with any dental filling. In short, a “toothache after fillings” or even a “throbbing tooth pain after fillings” are both common. Many times, the gum around the tooth can be a little tender, particularly when decay has gone between the teeth, necessitating the use of strips or bands during the procedure. Also, it’s not uncommon for a tooth to be sensitive to cold immediately after a restoration, especially the placement of a filling for a cavity that is very deep or large. As long as the discomfort is brief and lessens in severity over a few weeks, the tooth should return to normal without any further concern. Throbbing tooth pain after fillings is nothing to sweat about. While we understand it is uncomfortable, your body simply needs time. Still wondering about a “toothache after fillings” or “throbbing tooth pain after fillings”… why does this happen? As a rule, when the pulp of the tooth becomes inflamed, pressure begins to build up within the pulp cavity. Ultimately, this exerts pressure on the surrounding tissues and on the nerve of the tooth. It is pressure from inflammation that can cause discomfort anywhere from mild to extreme. The amount of discomfort an individual will experience depends upon the severity of the swelling and inflammation as well as the body’s response to pain. Generally, when we have pressure within other areas of the body, it can diffuse and lessen by moving into the surrounding soft tissue. Unfortunately, this is not the situation with inflammation that occurs in the pulp cavity. Dentin, which is a hard tissue, surrounds the pulp of a tooth.
For this reason, discomfort (more specifically, pressure) is not allowed to disperse itself amongst other tissues, leading to increased blood flow. This increase in blood flow is a sure sign of inflammation, and it will ultimately cause discomfort. Pulpitis, or inflammation of the nerve of a tooth, can create such a tremendous amount of pressure on the tooth nerve that an individual will often have trouble locating the actual source of their discomfort. Usually, it can be confused with neighboring teeth; this is often called referred pain. The pulp cavity is a system that is closed off and will undoubtedly provide the body with a challenge, an immune system response challenge to be exact. What this means is that with any disruption, the pulp cavity can make it very difficult for our body to rid itself of any inflammation. Because the system is closed off, our body’s immune system is unable to enter the area to fight and/or eliminate any inflammation and swelling. Simply stated, this is why patients can experience a throbbing toothache or even have throbbing tooth pain after fillings are placed. Patients who have pre-existing inflammatory medical conditions or autoimmune diseases, unfortunately, will have higher levels of inflammatory chemicals found within the blood. After restorative dental work such as a filling, there is inflammation, part of the tooth’s natural healing process. Furthermore, for patients with chronic medical conditions, this additional inflammation can result in more severe discomfort: the “throbbing tooth pain” or “toothache after fillings.” These same individuals may also experience prolonged post-operative sensitivity. After having a filling placed, it will take time to adjust to the feel of a new bite (this is how your teeth come together). If and when the bite is changed, it can take several days for the brain to pick up on the new position of your teeth and recognize it as normal.
Patients who grind their teeth might experience more post-operative sensitivity, including aching and even a bruised feeling around the roots of the teeth after dental treatment. Teeth grinding and clenching cause stress to the nerve. If the nerve is already stressed from dental treatment, this added stress from grinding or clenching will increase the inflammation of that nerve. Wearing a nightguard after you complete treatment will help significantly in reducing the amount of pressure and stress on the nerve and prolong the life of the dental restorations. If I experience postoperative sensitivity, a “toothache after fillings,” or a “throbbing tooth pain after fillings,” does it mean I may need a root canal? Not always. On occasion, after the placement of a restoration, a tooth can become non-vital and require root canal treatment. Every time a filling is placed or replaced, there is trauma to the pulp (nerve and blood supply). A combination of many things can cause this trauma. Generally, trauma can be caused by drilling, by the toxins released by the bacteria that are responsible for the decay, and/or by the reaction of the pulp of the tooth to the filling materials. Other factors such as tooth grinding and fracture lines within the teeth can also affect the health of the pulp. Again, any patients with poorly controlled chronic diseases or autoimmune disease are at a much higher risk of developing complications or postoperative sensitivity after dental treatment. Heightened post-operative sensitivity after dental treatment is due to the over-reactive nature of their immune response. Accumulation and repeated trauma of this nature over time can result in a ‘stressed pulp’ that is in a chronic state of near-death.
As a result, a tooth that was seemingly fine before dental treatment may end up requiring root canal treatment because the pulp of the tooth, which had previously been compromised, is now unable to withstand any additional stress. Teeth that are currently non-vital (those that no longer have access to nutrients and/or blood flow) or are becoming non-vital generally tend to become sensitive to hot and cold. These same teeth can also become tender to bite on. Discomfort often tends to come on spontaneously and last for long periods. This discomfort can even be constant and will usually be throbbing in nature. If this occurs, please contact us immediately. Patients with compromised immune function should seek immediate care to avoid further health complications. FAQs about a toothache after fillings or a throbbing tooth pain after fillings: I have a slight toothache after dental work; is it normal? Completing dental work of any kind on a tooth is essentially a mini surgery on that tooth. Sensitivity is a typical response to the pain experienced by many patients due to the natural process of healing. Every patient is different, and a toothache or sensitivity can last a few days up to a few weeks. So, a “toothache after fillings,” for example, is entirely legitimate. This toothache can be helped by alternating Tylenol and Ibuprofen. Do this until the tooth has fully healed. I now have a severe toothache after my recent dental work. My tooth did not hurt before fixing my cavities. Now what? Any time work, in this case, dental work, is performed on a tooth, the nerve of the tooth becomes irritated and inflamed. While it may seem bizarre, this is an entirely normal and natural part of the healing process. As previously discussed, dental work is just like a micro-surgery on a tooth. It is common that after any surgery, including dental restorations, patients can expect some tenderness and sensitivity.
To help minimize discomfort, take alternating doses of Tylenol and Ibuprofen. Taking either of these medications will help reduce inflammation of the nerve while the tooth is healing. If you find that the discomfort is increasing, becomes severe, and is best described as a “throbbing tooth pain after fillings,” you should contact your emergency dentist in Lincoln, NE. Be sure to do this as soon as possible. Unfortunately, this is a sure sign that the nerve did not heal normally after recent dental work. It is possible a root canal may be needed. If your Lincoln dentist doesn’t provide emergency services, you can always do an internet search for “emergency dentist near me” to locate a trained professional who may be able to help you. How normal is it for my teeth to be sensitive to hot and cold after new fillings? It is entirely reasonable to have sensitive teeth after recent dental work. Sensitivity is normal. After your Lincoln dentist places a filling, the nerve becomes irritated, making the tooth sensitive. Over time, this sensitivity will subside. You can take Tylenol and Ibuprofen to help reduce sensitivity during healing. If you feel that your sensitivity has become worse and it is for sure a “toothache after fillings,” be sure to look up an “emergency dentist near me” to schedule an appointment. They can provide ideas of how to help manage your pain or get you in if necessary to examine the area. If you are unable to locate an emergency dentist, be sure to schedule an appointment with your primary dentist. He or she can then make sure your tooth is healing correctly. If you are experiencing extreme sensitivity, and it is after hours, please be sure to call an emergency dentist in Lincoln, NE. It is important that you do not wait. It’s possible that they will have suggestions for things that you can try at home or, depending upon the situation, they may require that you come in so they can take a closer look.
I have extreme sensitivity to biting after recent dental work. Is this normal or common? Generally speaking, patients are numb while they have dental work done. For this reason, they are unable to bite and chew normally until after the anesthesia wears off. Sometimes this causes the tooth to feel bruised and sore when a patient occludes, or bites down. If your bite feels off, this is something that can be adjusted by your Lincoln, NE dentist after the anesthesia has worn off. What can I do to ease a persistent toothache after fillings? You may have experienced a throbbing tooth pain after fillings, or your tooth may be sensitive to hot and cold temperatures after recent dental work. Sensitive teeth after dental work are normal and are the body’s way of healing itself. The discomfort you are feeling is temporary. It will eventually go away. Until the pain has completely subsided, you can take over-the-counter pain relievers and/or use a sensitivity toothpaste to help manage the inflammation. Patients with a compromised immune system, for example, those who may be in treatment for cancer, often suffer from chronic inflammatory medical conditions or autoimmune diseases. Unfortunately, these patients are more susceptible to heightened post-operative sensitivity symptoms. Therefore, these patients may require more time to heal after treatment. If you feel your discomfort is increasing, you should call your Lincoln, NE dentist for an appointment. They will want to make sure the tooth is healing correctly. I just had a filling placed. How long will my tooth hurt afterward, or is a “toothache after fillings” or “throbbing tooth pain after fillings” normal? Because everyone is different, there is no clear-cut answer. If the tooth required any form of extensive treatment and had a large, deep cavity, your tooth may be sensitive longer.
This extended sensitivity contrasts with having only minor dental work completed or a small filling placed, for example, where sensitivity is usually short-lived (maybe a day or two). Of course, the extent of the sensitivity can differ between individuals. A delay in healing time for patients who have chronic inflammatory medical conditions or autoimmune diseases is not uncommon. Therefore, if several months have passed and you feel that your discomfort has increased, be sure to talk to an emergency dentist in Lincoln, NE. It is possible that the nerve of your tooth did not recover properly, and this is something that can happen after treatment. Consequently, you may need a root canal. Having a “toothache after fillings” or a “throbbing tooth pain after fillings” is no fun. If you read through all of this information and are still feeling frustrated and still have discomfort, don’t be discouraged! Play it safe, and give your dentist a call. They can at least keep you on their radar. If they feel it has been long enough and by now the discomfort should have subsided, they can get you scheduled. If you are unsure of where to locate an emergency dentist, do an internet search for “emergency dentist near me.” That will help you narrow down a list of qualified providers. Doing this will ensure you are able to see someone who is conveniently located. But more importantly, a dentist who is willing to see you when you are in serious discomfort and need relief! Looking for Dental Payment Plans? Apply Online. We have 7 Locations in Lincoln, NE!
- Lincoln Family Dentistry – Central Lincoln - South Lincoln Family Dentistry – South Lincoln - Northeast Lincoln Family Dentistry – Northeast Lincoln - Preserve Family Dentistry – East Lincoln - Coddington Dental – West Lincoln - NorthStar Dental – North Lincoln - SouthPointe Dental – Southwest Lincoln - Emergency Dentist NE – Emergency Dental Care - Lincoln Dental Plans – Affordable Dental Options Nebraska Family Dentistry has Lincoln Dental clinics in all parts of Lincoln. Choose a “near me dentist” location that is convenient for you.
<urn:uuid:bd584f56-d394-462e-aab2-03b4fbe617f1>
CC-MAIN-2021-43
https://preservefamilydentistry.com/2019/04/26/toothache-after-filling/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00670.warc.gz
en
0.951778
2,836
2.5625
3
Encrygma Encrypted Phones for sale: Security Through Obscurity is Dangerous

Updated: Jul 6

Hiding security vulnerabilities in algorithms, software, and/or hardware decreases the likelihood that they will be repaired and increases the likelihood that they can and will be exploited by evil-doers. The long history of cryptography and cryptanalysis has shown time and time again that open discussion and analysis of algorithms exposes weaknesses not thought of by the original authors, and thereby leads to better and more secure algorithms. As Kerckhoffs noted about cipher systems in 1883, "the system must not require secrecy and can be stolen by the enemy without causing trouble."

Cryptography is the science of secrets. In the distant past, it was simply about scrambling messages so adversaries couldn’t read them. In the modern computing era (a span of time that stretches less than 50 years), cryptography has become a keystone of computer security, encompassing all the ways we hide data, verify identities, communicate privately, and prevent message tampering.

“Every secret creates a potential failure point.” — Bruce Schneier

One of the most dangerous security mistakes a programmer can make (other than rolling their own crypto) is trusting that the things that are secret during development can stay secret forever. Imagine you write an algorithm to verify promotional codes. As soon as someone discovers its rules of logic — by research, reverse engineering, trial-and-error, or just asking questions — it ceases to be a reliable test for finding fakes. No secret lasts forever, and every secret is just one exploit away from being compromised. This concept can seem confusing at first because computer security does rely on secret ingredients like passwords and keys. But if you look more carefully, you’ll find that these are the exact weak points of a system, to be minimized, managed, or avoided wherever possible.
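The promotional-code scenario above can be sketched in a few lines of Python. Everything here is hypothetical: the "secret" validation rule, the code values, and the server-side set are invented purely for illustration of the principle.

```python
# Security through obscurity: validity is decided by a hidden rule.
# (Hypothetical rule: the code is all digits and they sum to a multiple of 7.)
def secret_check(code: str) -> bool:
    return code.isdigit() and sum(map(int, code)) % 7 == 0

# Once the rule leaks (or is reverse-engineered by trial and error),
# anyone can mint unlimited "valid" codes:
forged = "7000000"          # digit sum is 7, so it passes the secret rule
assert secret_check(forged)

# Sturdier design: validity is a fact the issuer stores, not a rule it hides.
ISSUED_CODES = {"4821973", "1150426"}   # hypothetical server-side database

def lookup_check(code: str) -> bool:
    return code in ISSUED_CODES

assert not lookup_check(forged)  # forging now means guessing a stored entry
```

The point is not that the lookup is clever; it is that its security no longer depends on keeping the checking logic secret, which is exactly the property Kerckhoffs asked for.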
Passwords are a notorious failure point — all it takes is one email spoofing attack or improperly discarded hard drive to pinch one. (Biometric data, which isn’t secret but isn’t easy to acquire, is far more secure.)

“A cryptographic system should be secure even if everything about the system, except the key, is public knowledge.” — Auguste Kerckhoffs

This applies the same philosophy (there is no security through obscurity) to the cryptographic algorithms we use. Time and time again, it’s been shown that the most reliable encryption comes from heavily explored public algorithms. The least reliable encryption is from secret algorithms that haven’t been tested by the broader community and are almost certainly full of undiscovered vulnerabilities.

“Cryptography is typically bypassed, not penetrated.” — Adi Shamir

Most cryptography is never broken, and most attacks don’t even try. Instead, cryptography is like a dead-bolted door on a house — once it establishes a moderately high threshold of protection, it simply moves an attack elsewhere (say, to a side window or a neighbor with a spare key). There are many ways to attack a system. Relying on known flaws in hardware or unpatched software is common. But without a doubt, the weakest links in every security system are the human ones.

“Cryptography without system integrity is like investing in an armored car to carry money between a customer living in a cardboard box and a person doing business on a park bench.” — Gene Spafford

Good programmers already know that if they want to optimize the performance of their code, they need to focus on the bottlenecks. Improvements in other places won’t yield results. The same is true of security systems. You need to improve the weakest areas, and if there’s a backdoor that can evade your security measures, it doesn’t matter how fantastic your cryptographic algorithms are.
“Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin.” — John von Neumann

As you already know, ordinary attackers rarely bother to attack the cryptography of a system. But there are exceptions. The most common cases are when the value of the encrypted data is very high — for example, when it’s protecting trade secrets or the ownership of a block of cryptocurrency. When hackers do attack cryptography, they usually go after the implementation — particularly, the way the cryptography is integrated into the rest of the system. Often, there are gaps or outright sloppiness: information leaking out of overly detailed error messages, defective hardware, or buggy software. But if that doesn’t work, another common way to break encryption is by exploiting poor randomness. It sounds like an edge case, but it’s actually a common tactic behind plenty of legendary exploits, including attacks on slot machines, lotteries, internet games, bitcoin wallets, and the digital signing system used by the PlayStation 3. The problem is well known — computers create random-seeming numbers using algorithms, and if you know the inputs to these algorithms you can regenerate the same “random” numbers. What’s less obvious is that you can choose random-seeming inputs and still be wide open to attacks. For example, if you seed an ordinary random number generator using the current millisecond of the computer clock, you’ve narrowed down the possible random values enough that they can easily be guessed. Even using multiple inputs with one guessable value compromises the whole system, opening the door to relatively easy brute-force attacks. And if you can figure out the random numbers that someone else has used, you’re well on your way to decrypting the messages they’ve sent, or even figuring out the private key that they used.
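The millisecond-seeding weakness can be demonstrated in a short, self-contained Python sketch. The token function and names below are invented for illustration; real attacks apply the same principle to whatever seeded PRNG the target system uses.

```python
import random
import time

def weak_token(seed_ms: int) -> int:
    """A 'random' 64-bit token seeded with a millisecond timestamp."""
    return random.Random(seed_ms).getrandbits(64)  # fully deterministic

# The victim generates a token, seeding with the current millisecond.
victim_ms = int(time.time() * 1000)
token = weak_token(victim_ms)

# An attacker who knows the time only to the nearest *second* has just
# 1000 candidate millisecond seeds to try: a trivial brute-force search.
base = (victim_ms // 1000) * 1000
recovered = next(s for s in range(base, base + 1000)
                 if weak_token(s) == token)
assert recovered == victim_ms   # the "random" token is fully reproduced
```

Cryptographic code should instead draw from the operating system's entropy source (in Python, the `secrets` module or `os.urandom`), which cannot be reproduced from a guessable seed.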
“Random numbers should not be generated with a method chosen at random.” — Donald Knuth

Humans confuse themselves about randomness all the time because the way we use it in casual conversation (to mean something arbitrary) is different from the way we use it in solid cryptographic programming (to mean something non-deterministic). Here, computer pioneer Donald Knuth plays with this double meaning.

“All the magic crypto fairy dust in the world won’t make you secure.” — Gary McGraw

The math, science, and computing power that goes into modern-day encryption is dazzling. It’s hard not to be impressed by shiny things like quantum cryptography. But there is one time that high-grade cryptography can be dangerous to the people using it. That’s when it gives them a false sense of security, and an excuse to ignore more likely attack vectors. The advice is obvious — but often overlooked.

“If you think cryptography will solve your problem, either you don’t understand cryptography, or you don’t understand your problem.” — Peter G. Neumann

It’s sometimes said that cryptography doesn’t fix problems, it changes them. You start with a data privacy problem, and cryptography replaces it with a key management problem. This quote from Peter G. Neumann has been repeated in slightly different versions by nearly a dozen famous cryptography researchers. The bottom line stays the same. Proper security is not tied up with any one technology. Instead, it’s a process that encompasses the design of an entire system.

Personal Anti Espionage Communication Systems for CEOs, VIPs, Celebs & Business Oligarchs. The Most Advanced Quantum Encrypted Communication System in the World. Disruptive Offline Communication Tech (No Internet or Cellular Connection). Without any server involvement. Based on the Secret Tech "White Fog". No data ever registered on the device or elsewhere.
Forensic Data Extraction

You have two options: either you can buy the “Encrygma” SuperEncrypted Phone (full details: www.Encrygma.com) at € 18,000 Euros per device, or create your own encryption device by installing our SuperEncryption systems on regular Android and Windows devices at € 5,000 Euros per license.

DigitalBank Vault advantages vs. SKY ECC, BlackBerry, Phantom Secure, Encrochat and other 'secure communication devices':

1. One-time lifetime fee of € 5,000 Euros. No annual subscription fees.
2. Encryption keys generated by the user only. Encryption keys never stored on the device used or anywhere else. Encryption keys never exchanged with the communicating parties.
3. No SIM card needed.
4. Unlimited text messaging, audio and video messaging, audio calls, file transfers, file storage.
5. "Air-gapped" offline encryption system, not connected to the Internet.
6. No servers involved at any given time; a completely autonomous system. No third parties involved.
7. No registration of any kind - 100% anonymous, without username/password. No online platform or interfaces.
8. Unique, personal, dedicated set of encryption algorithms for each individual client. A totally private encryption system.
9. Air Gap Defense Technology: the only offline communication system in the world.
10. Works cross-platform on Android smartphones (no SIM cards needed) and Windows PCs (for office work).

For additional information: [email protected], Telegram: @timothyweiss, WhatsApp: +37257347873.

You can buy any Android device or Windows laptop and transform it into a powerful encryption device by installing our set of software. The process is simple: you buy your own phones and laptops, choose your most trusted brand (we always advise Samsung phones and Asus laptops), then you buy from us the DigitalBank Vault SuperEncryption System and install it on the devices you bought.
If you need the encryption system just for storing and transferring classified files and data, you may need just one license (it will work on both Windows and Android). If you need to communicate between two people, you will of course need to buy two licenses. If the network of people you need to communicate with is larger, you will of course have to buy more licenses. Each client receives a dedicated set of encryption algorithms; that means each company (client) has a different encryption system, thereby creating a closed private internal network. Each license costs € 5,000 Euros. No recurring payments are required. It’s a one-time fee. No monthly payments. Remember that our mission is to help companies achieve total, absolute secrecy over their sensitive data storage, critical file transfers and confidential communications. Feel free to contact us. For more in-depth information we can have a voice call or video meeting. Our SuperEncryption systems are needed in case you really need the highest level of secrecy. Our technology is above government level; it’s the highest level of anti-interception/anti-espionage tech available to the private sector. We sell only and exclusively to reputable companies and individuals that pass our due diligence and KYC procedures. Try the DigitalBank Vault SuperEncryption System for 30 days, free of charge. Transform any Android device or Windows laptop into an unbreakable encryption machine. More information? Visit our website at www.DigitalBankVault.com or email us at [email protected]. We will be happy to assist you in achieving total secrecy over your communications. How to buy a DigitalBank Vault SuperEncryption system? https://www.digitalbankvault.com/order-the-digitalbank-vault Why is the DBV SuperEncryption system safer than any other solution available in the market? How does the DigitalBank Vault SuperEncryption technology work?
Gandhiji calling his associate Bibi Amtus Salam in Bombay from the office hut at Satyagraha Ashram, Sevagram, 1940. The Jews [September 1938] Several letters have been received by me asking me to declare my views about the Arab–Jew question in Palestine and the persecution of the Jews in Germany. It is not without hesitation that I venture to offer my views on this very difficult question. My sympathies are all with the Jews. I have known them intimately in South Africa. Some of them became life-long companions. Through these friends I came to learn much of their age-long persecution. They have been the untouchables of Christianity. The parallel between their treatment by Christians and the treatment of untouchables by Hindus is very close. Religious sanction has been invoked in both cases for the justification of the inhuman treatment meted out to them. Apart from the friendships, therefore, there is the more common universal reason for my sympathy for the Jews. But my sympathy does not blind me to the requirements of justice. The cry for the national home for the Jews does not make much appeal to me. The sanction for it is sought in the Bible and the tenacity with which the Jews have hankered after return to Palestine. Why should they not, like other peoples of the earth, make that country their home where they are born and where they earn their livelihood? Palestine belongs to the Arabs in the same sense that England belongs to the English or France to the French. It is wrong and inhuman to impose the Jews on the Arabs. What is going on in Palestine today cannot be justified by any moral code of conduct. The mandates have no sanction but that of the last war. Surely it would be a crime against humanity to reduce the proud Arabs so that Palestine can be restored to the Jews partly or wholly as their national home. The nobler course would be to insist on a just treatment of the Jews wherever they are born and bred. 
The Jews born in France are French in precisely the same sense that Christians born in France are French. If the Jews have no home but Palestine, will they relish the idea of being forced to leave the other parts of the world in which they are settled? Or do they want a double home where they can remain at will? This cry for the national home affords a colourable justification for the German expulsion of the Jews. But the German persecution of the Jews seems to have no parallel in history. The tyrants of old never went so mad as Hitler seems to have gone. And he is doing it with religious zeal. For he is propounding a new religion of exclusive and militant nationalism in the name of which any inhumanity becomes an act of humanity to be rewarded here and hereafter. The crime of an obviously mad but intrepid youth is being visited upon his whole race with unbelievable ferocity. If there ever could be a justifiable war in the name of and for humanity, a war against Germany, to prevent the wanton persecution of a whole race, would be completely justified. But I do not believe in any war. A discussion of the pros and cons of such a war is therefore outside my horizon or province. But if there can be no war against Germany, even for such a crime as is being committed against the Jews, surely there can be no alliance with Germany. How can there be alliance between a nation which claims to stand for justice and democracy and one which is the declared enemy of both? Or is England drifting towards armed dictatorship and all it means? Germany is showing to the world how efficiently violence can be worked when it is not hampered by any hypocrisy or weakness masquerading as humanitarianism. It is also showing how hideous, terrible and terrifying it looks in its nakedness. Can the Jews resist this organized and shameless persecution? Is there a way to preserve their self-respect, and not to feel helpless, neglected and forlorn?
I submit there is. No person who has faith in a living God need feel helpless or forlorn. Jehovah of the Jews is a God more personal than the God of the Christians, the Mussalmans or the Hindus, though, as a matter of fact in essence, He is common to all and one without a second and beyond description. But as the Jews attribute personality to God and believe that He rules every action of theirs, they ought not to feel helpless. If I were a Jew and were born in Germany and earned my livelihood there, I would claim Germany as my home even as the tallest gentile German may, and challenge him to shoot me or cast me in the dungeon; I would refuse to be expelled or to submit to discriminating treatment. And for doing this, I should not wait for the fellow Jews to join me in civil resistance but would have confidence that in the end the rest are bound to follow my example. If one Jew or all the Jews were to accept the prescription here offered, he or they cannot be worse off than now. And suffering voluntarily undergone will bring them an inner strength and joy which no number of resolutions of sympathy passed in the world outside Germany can. Indeed, even if Britain, France and America were to declare hostilities against Germany, they can bring no inner joy, no inner strength. The calculated violence of Hitler may even result in a general massacre of the Jews by way of his first answer to the declaration of such hostilities. But if the Jewish mind could be prepared for voluntary suffering, even the massacre I have imagined could be turned into a day of thanksgiving and joy that Jehovah had wrought deliverance of the race even at the hands of the tyrant. For to the godfearing, death has no terror. It is a joyful sleep to be followed by a waking that would be all the more refreshing for the long sleep. It is hardly necessary for me to point out that it is easier for the Jews than for the Czechs to follow my prescription. 
And they have in the Indian satyagraha campaign in South Africa an exact parallel. There the Indians occupied precisely the same place that the Jews occupy in Germany. The persecution had also a religious tinge. President Kruger used to say that the white Christians were the chosen of God and Indians were inferior beings created to serve the whites. A fundamental clause in the Transvaal constitution was that there should be no equality between the whites and coloured races including Asiatics. There too the Indians were consigned to ghettos described as locations. The other disabilities were almost of the same type as those of the Jews in Germany. The Indians, a mere handful, resorted to satyagraha without any backing from the world outside or the Indian Government. Indeed the British officials tried to dissuade the satyagrahis from their contemplated step. World opinion and the Indian Government came to their aid after eight years of fighting. And that too was by way of diplomatic pressure not of a threat of war. But the Jews of Germany can offer satyagraha under infinitely better auspices than the Indians of South Africa. The Jews are a compact, homogeneous community in Germany. They are far more gifted than the Indians of South Africa. And they have organized world opinion behind them. I am convinced that if someone with courage and vision can arise among them to lead them in non-violent action, the winter of their despair can in the twinkling of an eye be turned into the summer of hope. And what has today become a degrading man-hunt can be turned into a calm and determined stand offered by unarmed men and women possessing the strength of suffering given to them by Jehovah. It will be then a truly religious resistance offered against the godless fury of dehumanized man. The German Jews will score a lasting victory over the German gentiles in the sense that they will have converted the latter to an appreciation of human dignity. 
They will have rendered service to fellow-Germans and proved their title to be the real Germans as against those who are today dragging, however unknowingly, the German name into the mire. And now a word to the Jews in Palestine. I have no doubt that they are going about it the wrong way. The Palestine of the Biblical conception is not a geographical tract. It is in their hearts. But if they must look to the Palestine of geography as their national home, it is wrong to enter it under the shadow of the British gun. A religious act cannot be performed with the aid of the bayonet or the bomb. They can settle in Palestine only by the goodwill of the Arabs. They should seek to convert the Arab heart. The same God rules the Arab heart who rules the Jewish heart. They can offer satyagraha in front of the Arabs and offer themselves to be shot or thrown into the Dead Sea without raising a little finger against them. They will find the world opinion in their favour in their religious aspiration. There are hundreds of ways of reasoning with the Arabs, if they will only discard the help of the British bayonet. As it is, they are co-sharers with the British in despoiling a people who have done no wrong to them. I am not defending the Arab excesses. I wish they had chosen the way of non-violence in resisting what they rightly regarded as an unwarrantable encroachment upon their country. But according to the accepted canons of right and wrong, nothing can be said against the Arab resistance in the face of overwhelming odds. Let the Jews who claim to be the chosen race prove their title by choosing the way of non-violence for vindicating their position on earth. Every country is their home including Palestine not by aggression but by loving service. A Jewish friend has sent me a book called The Jewish Contribution to Civilization by Cecil Roth. It gives a record of what the Jews have done to enrich the world’s literature, art, music, drama, science, medicine, agriculture, etc. 
Given the will, the Jew can refuse to be treated as the outcaste of the West, to be despised or patronized. He can command the attention and respect of the world by being man, the chosen creation of God, instead of being man who is fast sinking to the brute and forsaken by God. They can add to their many contributions the surpassing contribution of non-violent action. Jews and Palestine [May 1946] Hitherto I have refrained practically from saying anything in public regarding the Jew–Arab controversy. I have done so for good reasons. That does not mean any want of interest in the question, but it does mean that I do not consider myself sufficiently equipped with knowledge for the purpose. For the same reason I have tried to evade many world events. Without airing my views on them, I have enough irons in the fire. But four lines of a newspaper column have done the trick and evoked a letter from a friend who has sent me a cutting which I would have missed but for the friend drawing my attention to it. It is true that I did say some such thing in the course of a long conversation with Mr. Louis Fischer on the subject. I do believe that the Jews have been cruelly wronged by the world. “Ghetto” is, so far as I am aware, the name given to Jewish locations in many parts of Europe. But for their heartless persecution, probably no question of return to Palestine would ever have arisen. The world should have been their home, if only for the sake of their distinguished contribution to it. But, in my opinion, they have erred grievously in seeking to impose themselves on Palestine with the aid of America and Britain and now with the aid of naked terrorism. Their citizenship of the world should have and would have made them honoured guests of any country. Their thrift, their varied talent, their great industry should have made them welcome anywhere. 
It is a blot on the Christian world that they have been singled out, owing to a wrong reading of the New Testament, for prejudice against them: “If an individual Jew does a wrong, the whole Jewish world is to blame for it.” If an individual Jew like Einstein makes a great discovery or another composes unsurpassable music, the merit goes to the authors and not to the community to which they belong. No wonder that my sympathy goes out to the Jews in their unenviably sad plight. But one would have thought adversity would teach them lessons of peace. Why should they depend upon American money or British arms for forcing themselves on an unwelcome land? Why should they resort to terrorism to make good their forcible landing in Palestine? If they were to adopt the matchless weapon of non-violence whose use their best Prophets have taught and which Jesus the Jew who gladly wore the crown of thorns bequeathed to a groaning world, their case would be the world’s, and I have no doubt that among the many things that the Jews have given to the world, this would be the best and the brightest. It is twice blessed. It will make them happy and rich in the true sense of the word and it will be a soothing balm to the aching world.
Ques 1: Choose the correct option: (i) A ball rolling along the ground gradually slows down and finally comes to rest is an example of (A) Muscular force (B) Magnetic force (C) Frictional force (D) Electrostatic force Ques 2: Choose the correct option: (ii) Sound can travel through (A) Solids only (B) liquids only (C) Gases only (D) solids, liquids and gases. Ques 3: Choose the correct option: (iii) Which of the following is not a way to conserve water? Ques 4: Choose the correct option: (iv) The use of manure (which is not correct): (A) Enhances the water holding capacity of the soil (B) Improves soil texture (C) Increases the number of friendly microbes (D) Also becomes a source of water pollution Ques 5: Choose the correct option: (v) Which is the plant disease caused by micro-organism? (B) Small pox (C) Citrus Canker Ques 6: Choose the correct option: (vi) Rayon is obtained by (A) Petroleum products (B) Fully synthetic method (C) Chemical treatment of wood pulp (D) All methods Ques 7: Choose the correct option: (vii) Sodium metal is stored in Ques 8: Choose the correct option: (viii) The world's first oil well was drilled in: Ques 9: Choose the correct option: (ix) Which among the following is considered as the cleanest fuel? (A) Cow dung cake (D) Hydrogen gas Ques 10: Choose the correct option: (x) Which of the following is not a cell? Ques 11: Give a suitable word for each of the following statements: (i) Chemicals which control changes at adolescence stage. Ans: (i) Hormones Ques 12: Give a suitable word for each of the following statements: (ii) Force exerted by a magnet on a piece of iron. Ans: (ii) Magnetic force Ques 13: Give a suitable word for each of the following statements: (iii) The substance that reduces friction. Ans: (iii) Lubricants Ques 14: Give a suitable word for each of the following statements: (iv) The characteristic of sound that determines loudness. 
Ans: (iv) Amplitude Ques 15: Give a suitable word for each of the following statements: (v) Device used to check current. Ans: (v) Tester Ques 16: Give a suitable word for each of the following statements: (vi) The brightest star in the sky located close to Orion. Ans: (vi) Sirius. Ques 17: Who discovered the first antibiotic? Name any two antibiotics. Ans: Alexander Fleming discovered the first antibiotic, penicillin. Two other common antibiotics are streptomycin and tetracycline. Ques 18: Write any two properties of nylon. Ans: (i) Nylon fibres are strong, elastic and light. (ii) They are easy to wash and lustrous. Ques 19: What is carbonization? Ans: As coal contains mainly carbon, the slow process of conversion of dead vegetation into coal is called carbonization. Ques 20: When kerosene oil is heated a little, it will catch fire. But when wood is heated a little, it does not catch fire. Why? Ans: If kerosene oil is heated a little, it catches fire, but if wood is heated a little, it does not catch fire, because the ignition temperature of kerosene oil is lower than that of wood. Ques 21: What is desertification? Ans: Removal of the top layer of soil exposes the lower, hard and rocky layers. This soil has less humus and is less fertile. Gradually the fertile land gets converted into desert. This is called desertification. Ques 22: Write the difference between viviparous and oviparous animals. Ans: Difference between viviparous and oviparous animals:

| Viviparous Animals | Oviparous Animals |
| The animals that give birth to young ones are called viviparous animals, e.g., human beings, cats, dogs. | The animals that lay eggs are called oviparous animals, e.g., frogs, fishes. |

Ques 23: Name the disease or side effects caused by deficiency of the following hormones: Ans: (a) Thyroxine is produced by the thyroid gland. Its deficiency causes the disease 'goiter'. (b) A person suffers from diabetes if the pancreas does not produce the hormone insulin in sufficient quantities.
(c) The adrenal glands produce the hormone adrenaline, which helps the body adjust to stress when a person is very angry, embarrassed or worried. Ques 24: What does the frictional force exerted on an object in a fluid depend on? Ans: The frictional force on an object in a fluid depends on its speed with respect to the fluid and the nature of the fluid. It also depends on the shape of the object; e.g., all vehicles are designed to have shapes that reduce fluid friction. Ques 25: What happens when electricity is passed through ordinary water? Ans: When the two terminals of a battery are connected to the positive and negative electrodes immersed in water, the water is dissociated into its components, oxygen and hydrogen. Oxygen collects at the positive electrode and hydrogen collects at the negative electrode. Ques 26: The oviducts of a woman are blocked, so the doctor advised her to have IVF. (a) What is the full form of IVF? (b) Why is IVF suggested to the woman? Justify. (c) What do you call the babies born through this technique? Ans: (a) The full form of IVF is in vitro fertilisation (fertilisation outside the body). (b) IVF is suggested because the woman is unable to bear babies: her oviducts are blocked, so sperms cannot reach the egg for fertilisation. In this case the doctor advised IVF, in which a freshly released egg and sperms are kept together for a few hours under in vitro conditions, in a test tube or other apparatus. (c) If fertilisation occurs, the zygote is allowed to develop for about a week and is then placed in the mother's uterus. Complete development takes place in the uterus and the baby is born like any other baby. Babies born through this technique are called test-tube babies. This term is actually misleading, because babies cannot grow in test tubes. Ques 27: What are the advantages of manure? Ans: Advantages of manure: (i) It enhances the water holding capacity of the soil. (ii) It makes the soil porous, due to which exchange of gases becomes easy.
(iii) It increases the number of friendly microbes. (iv) It improves the texture of the soil. Ques 28: Explain the various shapes of bacteria. Ans: Bacteria are classified into three types on the basis of their shape: (a) Rod shaped (Bacillus) (b) Round shaped (Coccus) (c) Spiral shaped (Spirillum) Ques 29: What is rayon? Why is it called artificial silk? What are the uses of rayon? Ans: Rayon is a synthetic fibre having properties similar to those of silk, so it is called artificial silk. It is obtained by chemical treatment of wood pulp. Rayon is a man-made fibre. It resembles silk, but it is cheaper than silk. It is mixed with cotton to make bed sheets, or mixed with wool to make carpets. Ques 30: Saloni took a piece of burning charcoal and collected the gas evolved in a test tube. (i) How will she find the nature of the gas? (ii) Write down word equations of all reactions taking place in this process. Ans: (i) When charcoal is burnt, carbon dioxide gas is produced. This gas turns lime water milky. The nature of the gas can be tested by using moist red and blue litmus paper. It has no effect on red litmus, but it turns blue litmus red, so it is acidic in nature. (ii) The word equations of the reactions are: Carbon + Oxygen → Carbon dioxide; Carbon dioxide + Lime water → Calcium carbonate (milky) + Water. Ques 31: What will happen if: (i) we go on cutting trees? (ii) the habitat of an animal is disturbed? (iii) the top layer of soil is exposed? Ans: (i) If we go on cutting trees, we will face problems of food, wood, shelter etc. The cutting of trees also leads to a decrease in the level of oxygen and causes global warming. (ii) If the habitat of an animal is disturbed, the animal will face extinction and survival becomes very difficult for it. (iii) The exposed lower layer has less humus and is less fertile. Gradually the fertile land gets converted into desert. This is called desertification. Ques 32: Explain traditional ways of purifying water to make it fit for drinking.
Ans: The traditional ways of purifying water to make it fit for drinking are as follows: (i) By filtering: This is a physical method of removing impurities. A popular household filter is a candle type filter. (ii) By boiling: When water is heated, it boils at a temperature of 100°C. At this high temperature, all the harmful micro-organisms or germs present in the water are killed and it becomes absolutely safe for drinking. Many households use boiling as a method for obtaining safe drinking water. (iii) Chlorination: This is a commonly used chemical method for purifying water. It is done by adding chlorine tablets or bleaching powder to the water. Ques 33: Describe the 'Greenhouse Effect' in your own words. Ans: After the Sun's rays pass through the atmosphere, they warm the earth's surface. A part of the radiation that falls on the earth is absorbed by it and a part is reflected. The radiations which are trapped by the atmosphere are not allowed to go out of the earth's atmosphere. These trapped radiations further warm the earth. Just as, in a nursery, the sun's heat is allowed to get in but is not allowed to go out of the greenhouse, the trapping of radiations by the earth's atmosphere performs a similar function. That is why it is called the greenhouse effect. Without this process, life would not have been possible on the earth because of the low temperatures. CO2 is one of the gases responsible for this effect. Ques 34: Draw sketches to show the relative positions of prominent stars in (i) Ursa Major and (ii) Orion. Ans: (i) Ursa Major: There are seven prominent stars in this constellation. They appear like a big ladle or a question mark. There are three stars in the handle of the ladle and four in its bowl. (ii) Orion: Orion is another well-known constellation that can be seen during winter in the late evenings. It is one of the most magnificent constellations in the sky. It also has seven or eight bright stars. Orion is also called the hunter.
The three middle stars represent the belt of the hunter. The four bright stars appear to be arranged in the form of a quadrilateral. Ques 35: Write Do's and Don'ts during a thunderstorm. Ans: Outside the house: (i) Open vehicles, such as motor bikes, tractors, construction machinery and open cars are not safe. (ii) Open fields, tall trees, shelters in park, elevated place do not protect us from lightning strokes. (iii) Carrying an umbrella is also not safe. (iv) Stay away from poles or other metal objects. Inside the house: (i) During a thunderstorm contact with telephone cords, electrical wires and metal pipes should be avoided. (ii) Bathing should be avoided. (iii) Electrical appliances like computers, TVs etc. should be unplugged. Electrical lights can remain on.
Stainless steel tools are made of corrosion-resistant steels. “Corrosion-resistant” does not mean “rust-free forever”, and a certain amount of care is needed to maintain the tools’ original corrosion-resistant layer. Passivation of stainless steel is a process that produces a passive (i.e., non-reactive) oxide surface layer which protects the steel against corrosion. Old instruments with carbon, chrome, and nickel plating can rust when the plating wears off, exposing the steel underneath. You should read and follow the Care Instructions for maximum performance and extension of use. Stainless steel is a low carbon steel which contains chromium at 10% or more by weight. It is this addition of chromium that gives the steel its corrosion-resistant properties. The chromium content of the steel allows the formation of an invisible chromium-oxide film on its surface. If oxygen is present, even in small quantities, this film self-repairs if damaged mechanically or chemically. Corrosion resistance of stainless steel is enhanced by increased chromium content and the addition of other elements such as molybdenum, nickel, and nitrogen. The three main classifications of stainless steel are identified by the alloying elements which form their microstructure. Austenitic steels have austenite (face centered cubic crystals) as their primary phase. These are alloys containing chromium and a major proportion of nickel. Austenitic steels are not thermally hardenable but have excellent corrosion resistance. Ferritic steels have ferrite (body centered cubic crystals) as their main phase. These steels have a low carbon content and contain chromium as the main alloying element, usually between 13% and 17%. Ferritic steel is less ductile than austenitic steel and is not thermally hardenable. Martensitic steels typically contain about 12% chromium, a moderate level of carbon, and very low levels of nickel.
Martensitic steels are distinguished from other stainless steels by their ability to achieve high hardness through a heat treatment that produces martensite (a supersaturated solid solution of iron characterized by a needle-like microstructure). 410 Stainless Steel is a martensitic alloy similar to 405 but with a higher carbon content and no aluminum. It is this increase in carbon and absence of aluminum that improve the mechanical properties and strength of 410 by making it a hardenable steel, comparable in that respect to regular carbon and alloy steels. Typical composition: Carbon 0.15% max; Manganese 1.00% max; Phosphorus 0.040% max; Sulfur 0.030% max; Silicon 1.00% max; Chromium 11.50-13.50%. 420 Stainless Steel is a martensitic alloy that is strengthened by the addition of carbon at a 0.15% minimum (0.30% nominal), compared to the 0.15% maximum for type 410. Along with carbon, chromium content is also slightly increased to offset the tendency of the higher carbon content to lower the alloy's resistance to corrosion. In the hardened and tempered condition, the alloy's yield strengths are substantially greater than those of type 410. Type 420 is used for such applications as surgical and dental instruments, cutlery, scissors, valves, and ball bearings. Typical composition: Carbon 0.15% min; Manganese 1.00% max; Phosphorus 0.040% max; Sulfur 0.030% max; Silicon 1.00% max; Chromium 12.00-14.00%. 440 Stainless Steel is a thermally hardenable, martensitic stainless steel alloy combining corrosion-resistant properties with maximum hardness. Both carbon (0.95-1.2%) and chromium (16-18%) contents are increased substantially to impart hardness. While it is the strongest of all these stainless steel alloys, its high carbon content reduces its corrosion resistance. Typical composition: Carbon 0.95-1.20%; Manganese 1.00% max; Phosphorus 0.040% max; Sulfur 0.030% max; Silicon 1.00% max; Chromium 16.00-18.00%; Molybdenum 0.75% max. Stainless steel is a metal which resists rust, can be ground to a fine point, and retains a sharp edge. Its composition can be altered to enhance certain qualities.
For example, a manufacturer can make a scissor of stainless steel with added carbon to create a harder cutting edge. It is the carbon in the stainless steel that makes the scissor stronger, but the carbon can also cause the instrument to rust and corrode. All stainless steel can stain, pit, and rust if not cared for properly. When a stainless steel instrument is manufactured, it is subjected to a passivation and polishing process in order to make the steel as stainless as possible. Passivation and polishing eliminate carbon molecules from the instrument surface. This forms a layer which acts as a corrosion-resistant seal. Passivation is a chemical process that removes carbon molecules from the surface of the instrument. This chemical process can also occur through repeated exposure to oxidizing agents in chemicals, soaps, and the atmosphere. Polishing is a process used to achieve a smooth surface on the instrument. It is extremely important to polish an instrument because the passivation process leaves microscopic pits where the carbon molecules were removed. Polishing also builds a layer of chromium oxide on the surface of the instrument. Through regular handling and sterilization, the layer of chromium oxide will build up and protect the instrument from corrosion. This is why, in some circumstances, you will notice less corrosion on older instruments than on new ones: the newer instruments have not had the time to build up the chromium oxide layer. However, improper cleaning and sterilization can cause the layer of chromium oxide to disappear or become damaged, thus increasing the possibility of corrosion. That is why it is so important to properly clean, sterilize, and store your instruments. For proper cleaning, sterilizing, and storage of surgical instruments, please consult our web site under Rinse, Cleaning and Sterilizing. Back to Top Tools are shipped in non-sterile condition and should be cleaned and sterilized before and after use.
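The 410/420/440 composition figures above lend themselves to a small worked example. The sketch below is illustrative only: the alloy data is transcribed from the percentages quoted above, the 10%-chromium threshold is the definition of stainless steel given earlier, and the helper names are invented for this example.

```python
# Illustrative sketch only: alloy data transcribed from the composition
# percentages quoted above; helper names are invented for this example.

ALLOYS = {
    # name: (carbon % (min, max), chromium % (min, max))
    "410": ((0.00, 0.15), (11.5, 13.5)),
    "420": ((0.15, 0.30), (12.0, 14.0)),   # 0.30% is the nominal figure
    "440": ((0.95, 1.20), (16.0, 18.0)),
}

def is_stainless(chromium_min_pct: float) -> bool:
    # Per the text: stainless steel contains chromium at 10% or more by weight.
    return chromium_min_pct >= 10.0

def highest_carbon(alloys: dict) -> str:
    # Higher carbon content -> harder and stronger, but (per the text)
    # less corrosion resistant.
    return max(alloys, key=lambda n: alloys[n][0][1])

# Every alloy listed comfortably clears the 10% chromium threshold.
for name, (_, (cr_min, _)) in ALLOYS.items():
    assert is_stainless(cr_min), f"{name} should qualify as stainless"

print(highest_carbon(ALLOYS))  # prints: 440
```

This mirrors the trade-off the text describes: 440 tops the carbon range (hence "maximum hardness") while also being the alloy whose corrosion resistance suffers most from that carbon.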
Distilled water is recommended for cleaning and rinsing, as tap water may contain minerals which can stain or discolor the steel. If using tap water, you should dry immediately to avoid staining. Do not use high-concentration bleach, which may cause pitting, or abrasive cleaners, which may scratch and remove the passive surface layer. Tools should be arranged by metal type to avoid galvanic corrosion (due to contact of different metals), and stainless steel tools should never be stored with carbon steel tools, to avoid ferrous contamination (the transfer of ferrous particles). Use of distilled water and a neutral pH cleaning solution is recommended for all these procedures. Immediately after use, rinse instruments under warm (not hot) water. It may be helpful to use a nylon toothbrush to rinse the lock boxes and joints of the instrument. Be sure to remove all grime, grease and dirt. You should clean instruments immediately after rinsing. Do not place dissimilar metals (stainless, copper, chrome plated etc.) together. - Use stiff plastic cleaning brushes (nylon etc.). Do not use steel wool or wire brushes, except specially recommended stainless steel wire brushes for instruments such as bone files, or on stained areas in knurled handles. - Use only neutral pH (7) detergents. If not rinsed off properly after cleaning, low-pH detergents will break down the stainless protective surface and cause black staining. - High-pH detergent will cause a surface deposit of brown stain (this deposit may look like rust) which will also interfere with smooth operation of the instrument. - Brush delicate instruments carefully and, if possible, handle them totally separate from general instruments. - Make sure all instrument surfaces are visibly clean and free from stains and dirt. This is a good time to inspect each instrument for proper function and condition. - Check the following: Scissor blades glide smoothly from open to closed (they must not be loose when in closed position).
Test scissors by cutting into thin gauze. Three quarters of the length of the blade should cut all the way to the scissor tips, and not hang up. - Blades of all cutting edges should be sharp and undamaged. - After manually scrubbing instruments, rinse them thoroughly under running water (distilled water is best). While rinsing, open and close scissors and other hinged instruments to make sure the hinge areas are rinsed out, as well as the outside of the instruments. If the instruments are to be stored, let them air dry and store them in a clean and dry environment. If instruments are to be reused or autoclaved: Lubricate all instruments which have any metal-to-metal action, such as scissors, nippers and other such instruments. Lubricants such as instrument milk are best. Do not use WD-40 oil or other industrial lubricants. Use disposable paper or plastic pouches to sterilize individual instruments. Make sure you use a wide enough pouch (4” or wider) for instruments with hinges and locks so the instrument can be sterilized in the open and unlocked position. If you are autoclaving instrument sets, unlock all instruments and sterilize them in an open position. Place heavy instruments at the bottom of the set (when two layers are required). Never lock an instrument during autoclaving. It will not be sterile, as the steam cannot reach the metal-to-metal surfaces. The instrument might also develop cracks in hinged areas, caused by the heat expansion during the autoclave cycle. Do not overload the autoclave chamber; pockets may form that do not permit steam penetration. Place a towel on the bottom of the pan to absorb excess moisture during autoclaving. This will reduce the chance of getting “wet packs”. Make sure the towels used in sterilization of the instruments have no detergent residue and are neutral pH (7) if immersed in water. The residue of the high-pH (9-13) detergents used by some laundries to clean the towels could cause stains on some instruments.
CAUTION: At the end of the autoclave cycle, before the drying cycle, unlock the autoclave door and open it no more than a crack, about ¾”. Then run the dry cycle for the period recommended by the autoclave manufacturer. If the autoclave door is opened fully before the drying cycle, cold room air will rush into the chamber, causing condensation on the instruments. This will result in water stains on the instruments and cause “wet packs”. Roll-packs should never be used in an autoclave. If you notice unusual staining on your instruments during rinsing, cleaning or autoclaving, contact us or look on our web site under Staining. Important: For instruments with tungsten carbide inserts, tips or blades, we do not recommend use of solutions containing Benzyl Ammonium Chloride. This will destroy the tungsten carbide inserts. Back to Top Stains can either be plated or deposited on the surface of an instrument. A stain is a discoloration of the metal by material merely added to its surface. Stains are often mistaken for rust, which is an actual change to the metal material. A brown/orange stain is the most common and is often mistaken for rust. The brown/orange stain is usually a phosphate deposit on the instrument. Phosphate can come from traces of minerals in the autoclave water source, a dirty autoclave, high-alkaline or acidic detergents, surgical wrappings, and dried blood or tissue. Hot steam in the autoclave deposits the phosphate and produces the stain on the instrument’s surface. Remove this type of stain from the instrument by rubbing with a pencil eraser (rust cannot be removed by an eraser). A brown/orange stain or a blue-black stain can also occur from plating during the cleaning or autoclaving process. Through electrolysis, when dissimilar metals touch while being autoclaved, ultrasonically cleaned, or sometimes even stored together, plated stains actually bond the stain material to the instrument metal.
They do not often change the metal material except for the discoloration. These stains are very difficult to remove, and the instrument will probably need refinishing. Acid Reaction Stains: Black stains are usually due to an acid reaction. An acidic detergent deposit left on the instrument during autoclaving might cause a black stain. Always use neutral pH detergents and distilled water in your rinsing, cleaning or autoclave process. Excessive Heat Stains: Multi-colored stains, or chromium oxide stains, result from excessive heat. These rainbow-colored stains indicate the instrument may have lost some of its original hardness after being heated. Cutting edges lose their sharpness quickly when hardness is reduced. Flash flame decontamination (an instrument is decontaminated by inserting it into a flame for a few seconds) changes the molecular structure of most materials adversely, shortening the useful life of instruments. Consider another method of decontamination where lower heat levels can be applied, to keep your instruments useful for many more years. Pitting can occur on an instrument when it is improperly autoclaved or cleaned. When an instrument is autoclaved using a solution containing chloride or an acid-based detergent, pitting can result: hydrochloric acid forms in the solution, removing the protective chromium oxide layer of the stainless steel. The acid can then attack the unprotected steel and cause pitting. Avoid the problem of pitting by using only pH-neutral (7.0) detergents and making sure all instruments placed in the autoclave are thoroughly rinsed before being put in the autoclave. Pitting can also occur if dissimilar metals come in contact with each other in an ultrasonic cleaner or autoclave. Electrolysis from dissimilar metals touching in a solution (the steam in the autoclave acts as a conductive solution that allows electrolysis) transfers metal molecules from one instrument to the other, leaving pits in one instrument.
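The staining guidance above amounts to a small diagnostic table. Purely as an illustration, it can be sketched as a lookup; the stain categories and causes come from the text, but the function, its parameters, and its exact return strings are my own invention, not part of the care guide.

```python
# Illustrative decision helper (not part of the source document): maps a stain
# description to the likely cause, following the care guide's categories.

def diagnose_stain(color: str, eraser_removes_it: bool) -> str:
    """Return the likely cause of an instrument stain, per the care guide."""
    if color in ("brown", "orange", "brown/orange"):
        if eraser_removes_it:
            # The pencil-eraser test: deposits rub off, rust does not.
            return "phosphate deposit (mineral residue, not rust)"
        return "plated stain from dissimilar-metal contact; may need refinishing"
    if color == "black":
        return "acid reaction (acidic detergent residue); use neutral pH (7) detergents"
    if color in ("rainbow", "multi-colored"):
        return "excessive heat (chromium oxide); hardness may be reduced"
    if color == "blue-black":
        return "plating via electrolysis between dissimilar metals"
    return "unknown; review cleaning and autoclave procedure"

# A brown stain that an eraser removes is a mineral deposit, not rust.
print(diagnose_stain("brown", eraser_removes_it=True))
```

The eraser test is the pivot of the table: it separates harmless deposits from plated stains and true rust, which is why the guide repeats it in both the staining and rust sections.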
Avoid having any instruments touching during autoclaving, cleaning or storage to eliminate pitting. True rust on a quality stainless steel instrument is very rare. Rusting can be caused by the chromium oxide layer on the instrument coming in contact with very caustic chemicals over a long period of time. Stainless steel may rust if the surface has not been passivated (processed to create a thin oxidation layer) or finished properly. Old instruments with carbon, chrome, and nickel plating can rust when the plating wears off exposing the steel underneath. What is often thought to be rust is actually mineral deposits resulting from improper cleaning or autoclaving procedures. Rubbing the instrument with a pencil eraser will remove mineral deposits but will not remove rust. Back to Top Lubrication is the most important action you can take to extend the life of your instruments. The use of a surgical instrument lubricant, known as "milk" because of the white coloring caused by the emulsion in water, will prevent spotting from mineral deposits left behind by water after cleaning. Corrosion can also be prevented by the application of lubricant. Corrosion starts in the pores of the metal and is often related to improper cleaning. With proper handling and lubrication the surface of your stainless steel instruments will develop a thin hard coating, similar to oxidation, which will help prevent damage from corrosion. Known as the passivation layer, it makes the instrument more resistant to staining and rusting. In addition to stain and corrosion protection, lubrication reduces friction at the joints, keeping the action of the instrument light, delicate and smooth and extending the life of the instrument by reducing wear. Back to Top When storing instruments it is recommended that they never be stacked or piled together. This may cause physical or other damage to instruments, including even the larger ones. 
Instrument edges, points and finish are best protected by individually laying them in a storage container. It is most important that this area be a dry cabinet or drawer. The use of drying agents such as silica packets, or even an open box of baking powder, will aid in controlling moisture. When storing instruments, re-using the tip guard included with many instruments may reduce damage to instrument tips. As a reminder, do not autoclave an instrument with the tip guard on the instrument. The tip guard might retain moisture that could cause staining, or the tip may not be sufficiently sterilized. The use of disposable instrument pouches is an excellent way to store and autoclave instruments. The pouches keep the instruments from touching each other, and the sterilized indicator strip will ensure the instruments are ready for use after autoclaving. Make sure all instruments are properly cleaned, sterilized, and lubricated before storing. This is the best way to prevent water spotting, staining, and more serious damage to instruments. Back to Top
JOB A REAL PERSON.--It has been supposed by some that the book of Job is an allegory, not a real narrative, on account of the artificial character of many of its statements. Thus the sacred numbers, three and seven, often occur. He had seven thousand sheep and seven sons, both before and after his trials; his three friends sit down with him seven days and seven nights; both before and after his trials he had three daughters. So also the number and form of the speeches of the several speakers seem to be artificial. The name of Job, too, is derived from an Arabic word signifying repentance. But Ezekiel 14:14 mentions Job in conjunction with "Noah and Daniel," real persons. St. James (James 5:11) likewise refers to Job, which he would not have been likely to do had Job been only a fictitious person. Also the names of persons and places are specified with a particularity not to be looked for in an allegory. As to the exact doubling of his possessions after his restoration, no doubt the round number is given for the exact number, as the latter approached near the former; this is often done in undoubtedly historical books. As to the studied number and form of the speeches, it seems likely that the arguments were substantially those which appear in the book, but that the studied and poetic form was given by Job himself, guided by the Holy Spirit. He lived one hundred and forty years after his trials, and nothing would be more natural than that he should, at his leisure, mould into a perfect form the arguments used in the momentous debate, for the instruction of the Church in all ages. Probably, too, the debate itself occupied several sittings; and the number of speeches assigned to each was arranged by preconcerted agreement, and each was allowed the interval of a day or more to prepare carefully his speech and replies; this will account for the speakers bringing forward their arguments in regular series, no one speaking out of his turn.
As to the name Job--repentance (supposing the derivation correct)--it was common in old times to give a name from circumstances which occurred at an advanced period of life, and this is no argument against the reality of the person. WHERE JOB LIVED.--"Uz," according to GESENIUS, means a light, sandy soil, and was in the north of Arabia-Deserta, between Palestine and the Euphrates, called by PTOLEMY (Geography, 19) Ausitai or Aisitai. In Genesis 10:23; 22:21; 36:28; 1 Chronicles 1:17; and 1 Chronicles 1:42, Uz is the name of a man; in Jeremiah 25:20 and Lamentations 4:21, of a country. The Uz of Genesis 22:21 is a different person from the one mentioned (Genesis 10:23) among the descendants of Shem. The probability is that the country took its name from the latter of the two; for this one was the son of Aram, from whom the Arameans take their name, and these dwelt in Mesopotamia, between the rivers Euphrates and Tigris. Compare, as to the dwelling of the sons of Shem, Genesis 10:30 with the "men of the East" (Job 1:3). One decipherer of the Assyrian inscriptions states that "Uz is the prevailing name of the country at the mouth of the Euphrates." It is probable that Eliphaz the Temanite and the Sabeans dwelt in that quarter; and we know that the Chaldeans resided there, and not near Idumea, which some identify with Uz. The tornado from "the wilderness" (Job 1:19) accords with the view of it being Arabia-Deserta. Job (Job 1:3) is called the "greatest of the men of the East"; but Idumea was not east, but south of Palestine: therefore in Scripture language, the phrase cannot apply to that country, but probably refers to the north of Arabia-Deserta, between Palestine, Idumea, and the Euphrates. So the Arabs still show in the Houran a place called Uz as the residence of Job. THE AGE WHEN JOB LIVED.--EUSEBIUS fixes it two ages before Moses, that is, about the time of Isaac: eighteen hundred years before Christ, and six hundred after the Deluge. Agreeing with this are the following considerations: 1. Job's length of life is patriarchal, two hundred years. 2.
He alludes only to the earliest form of idolatry, namely, the worship of the sun, moon, and heavenly hosts (called Saba, whence arises the title "Lord of Sabaoth," as opposed to Sabeanism) (Job 31:26-28). 3. The number of his sacrifices, seven, as in the case of Balaam. God would not have sanctioned this after the giving of the Mosaic law, though He might graciously accommodate Himself to existing customs before the law. 4. The language of Job is Hebrew, interspersed occasionally with Syriac and Arabic expressions, implying a time when all the Shemitic tribes spoke one common tongue and had not branched into different dialects, Hebrew, Syriac, and Arabic. 5. He speaks of the most ancient kind of writing, namely, sculpture. Riches also are reckoned by cattle. The Hebrew word translated "a piece of money" ought rather be rendered "a lamb." 6. There is no allusion to the exodus from Egypt and to the miracles that accompanied it; nor to the destruction of Sodom and Gomorrah (PATRICK, however, thinks there is); though there is to the Flood (Job 22:17); and these events, happening in Job's vicinity, would have been striking illustrations of the argument for God's interposition in destroying the wicked and vindicating the righteous, had Job and his friends known of them. Nor is there any undoubted reference to the Jewish law, ritual, and priesthood. 7. The religion of Job is that which prevailed among the patriarchs previous to the law: sacrifices performed by the head of the family; no officiating priesthood, temple, or consecrated altar. THE WRITER.--All the foregoing facts accord with Job himself having been the author. The style of thought, imagery, and manners are such as we should look for in the work of an Arabian emir. There is precisely that degree of knowledge of primitive tradition (see Job 31:33) which might have come down from the days of Noah and Abraham, and which was subsequently embodied in the early chapters of Genesis.
Job, in his speeches, shows that he was much more competent to compose the work than Elihu, to whom LIGHTFOOT attributes it. The style forbids its being attributed to Moses, to whom its composition is by some attributed, "whilst he was among the Midianites, about 1520 B.C." But the fact that it, though not a Jewish book, appears among the Hebrew sacred writings makes it likely that it came to the knowledge of Moses during the forty years which he passed in parts of Arabia, chiefly near Horeb; and that he, by divine guidance, introduced it as a sacred writing to the Israelites, to whom, in their affliction, the patience and restoration of Job were calculated to be a lesson of especial utility. That it is inspired appears from the fact that Paul (1 Corinthians 3:19) quotes it with the formula, "It is written." Compare also James 4:10 and 1 Peter 5:6. It is probably the oldest book in the world. It stands among the Hagiographa in the threefold division of Scripture into the Law, the Prophets, and the Hagiographa ("Psalms," Luke 24:44). DESIGN OF THE BOOK.--It is a public debate in poetic form on an important question concerning the divine government; moreover the prologue and epilogue, which are in prose, shed the interest of a living history over the debate, which would otherwise be but a contest of abstract reasonings. To each speaker of the three friends three speeches are assigned. Job, having no one to stand by him, is allowed to reply to each speech of each of the three. Eliphaz, as the oldest, leads the way. Zophar, at his third turn, failed to speak, thus virtually owning himself overcome; Job therefore continued his reply, which forms three speeches (Job 26:1-14; 27:1-23; 28:1-28; 29:1-31:40). Elihu is allowed four speeches. Jehovah makes three addresses (Job 38:1-41:34). The whole is divided into three parts--the prologue, poem proper, and epilogue. The poem, into three--(1) The dispute of Job and his three friends; (2) The address of Elihu; (3) The address of God.
There are three series in the controversy, and in the same order. The epilogue (Job 42:1-17) records Job's reconciliation with his friends and his restoration. The speakers also in their successive speeches regularly advance from less to greater vehemence. With all this artificial composition, everything seems easy and natural. The question to be solved, as exemplified in the case of Job, is, Why are the righteous afflicted consistently with God's justice? The doctrine of retribution after death, no doubt, is the great solution of the difficulty. And to it Job plainly refers in Job 14:14 and Job 19:25. The objection that such language on the resurrection in Job is inconsistent with the obscurity on the subject in the early books of the Old Testament is answered by the fact that Job enjoyed the divine vision (Job 38:1; 42:5) and, therefore, by inspiration, foretold these truths. Next, the revelations made outside of Israel, being few, needed to be the more explicit; thus Balaam's prophecy (Numbers 24:17) served to lead the wise men of the East by the star (Matthew 2:2). Again, in an age before the written law, it was the more needful for God not to leave Himself without witness of the truth. Still Job evidently did not fully realize the significance designed by the Spirit in his own words (compare 1 Peter 1:11; 1 Peter 1:12), for these truths were not yet plainly revealed or at least understood. Hence he does not mainly refer to this solution. Yes, and even now, we need something in addition to this solution. David, who firmly believed in a future retribution (Psalms 16:10; 17:15), did not find the difficulty entirely solved thereby (Psalms 83:1-18). Nor is the solution found in Job's or in his three friends' speeches. It must, therefore, be in Elihu's. God will hold a final judgment, no doubt, to clear up all that seems dark in His present dealings; but He also now providentially and morally governs the world and all the events of human life. Even the comparatively righteous are not without sin which needs to be corrected. The justice and love of God administer the altogether deserved and merciful correction.
Affliction to the godly is thus mercy and justice in disguise. The afflicted believer on repentance sees this. "Via crucis, via salutis" ["The way of the cross, the way of deliverance"]. Though afflicted, the godly are happier even now than the ungodly, and when affliction has attained its end, it is removed by the Lord. In the Old Testament the consolations are more temporal and outward; in the New Testament, more spiritual; but in neither to the entire exclusion of the other. "Prosperity," says BACON, "is the blessing of the Old Testament; adversity that of the New Testament, which is the mark of God's more especial favor. Yet even in the Old Testament, if you listen to David's harp, you shall hear as many hearse-like airs as carols; and the pencil of the Holy Ghost has labored more in describing the afflictions of Job than the felicities of Solomon. Prosperity is not without many fears and distastes; and adversity is not without comforts and hopes." This solution of Elihu is seconded by the addresses of God, in which it is shown God must be just (because He is God), as Elihu had shown how God can be just, and yet the righteous be afflicted. It is also acquiesced in by Job, who makes no reply. God reprimands the "three" friends, but not Elihu. Job's general course is approved; he is directed to intercede for his friends, and is restored to double his former prosperity. POETRY.--In all countries poetry is the earliest form of composition as being best retained in the memory. In the East especially it was customary for sentiments to be preserved in a terse, proverbial, and poetic form (called maschal). Hebrew poetry is not constituted by the rhythm or meter, but in a form peculiar to itself: 1. In an alphabetical arrangement somewhat like our acrostic. 
For instance, Lamentations 1:1-22. 2. In a refrain or burden repeated at intervals, as in Psalms 42:1-11; 107:1-43. 3. In a gradational form, in which the expression of the previous verse is resumed and carried forward in the next (Psalms 121:1-8). The chief characteristic of Hebrew poetry, however, is parallelism, or the correspondence of the same ideas in the parallel clauses. The earliest instances are Enoch's prophecy (Jude 1:14) and Lamech's speech (Genesis 4:23). There is, first, the synonymous parallelism, in which the second clause is a repetition of the first, with or without increase of force (Psalms 22:27; Isaiah 15:1; compare Isaiah 1:15); next, the antithetic, in which the idea of the second clause is the converse of that in the first (Proverbs 10:1); and the synthetic, where there is a correspondence between different propositions, noun answering to noun, verb to verb, member to member, the sentiment, moreover, being not merely echoed, or put in contrast, but enforced by accessory ideas (Job 3:3-9). Also alternate, where the first line answers to the third and the second to the fourth: "desolation and destruction, and famine and sword," that is, desolation by famine, and destruction by the sword. Introverted, where the fourth answers to the first, and the third to the second (Matthew 7:6). Parallelism often supplies the key to the interpretation. For fuller information, see LOWTH (Introduction to Isaiah, and Lecture on Hebrew Poetry) and HERDER (Spirit of Hebrew Poetry, translated by Marsh). The simpler and less artificial forms of parallelism prevail in Job--a mark of its early age.
By Abby Ramirez Since the beginning of the COVID-19 pandemic and nationwide quarantine, Asian American communities have been under attack. These acts of hatred have become more and more prominent as the destruction of Asian-owned businesses, murders of elderly people, and hate speech continue to grow. In 2020 alone, acts of hate, discrimination, and racism against Asian Americans and Pacific Islanders in the United States rose 150 percent, according to a recent analysis released by the Center for the Study of Hate and Extremism at California State University, San Bernardino. Unfortunately, cases of racism, shunning, and hate against Asian Americans and Pacific Islanders are nothing new in the United States. America’s history of anti-Asian racism has deep roots, beginning in the 1800s with the emergence of the California Gold Rush. After China experienced extreme crop failure in the mid-19th century, a mass exodus of almost 300,000 Chinese immigrants came to the United States in search of work and gold. Upon arrival, they worked as fishermen, miners, railroad builders, farmers, and factory hands for incredibly low rates, and by 1870, they made up 20 percent of San Francisco’s labor force. To protect the wages of Caucasian miners, the California Legislature passed the Foreign Miners Tax of 1850, which taxed non-white Californian miners, mostly Latinos and Chinese immigrants, 20 dollars per month for the right to mine in the state. Within four years, anti-Asian sentiment continued to sink its teeth into the California state government. In the 1854 California Supreme Court case People v. Hall, a white man was accused of murder based on the testimony of a Chinese man who claimed he was a witness. At the time, a law stated that Hispanic Americans, African Americans, and Native Americans did not have the right to testify against a Caucasian American.
After the accused’s attorney used this policy as his defense, California Supreme Court justice John Murray declared the Chinese “a race of people whom nature has marked as inferior, and who are incapable of progress or intellectual development beyond a certain point” and who did not have the right “to swear away the life of a citizen” or participate “with us in administering the affairs of our Government.” Diminished to subhuman status and unable to legally fight the acts of racism against them, the lives of Asian-Americans in the United States continued to grow increasingly difficult as anti-Asian sentiment spread in communities around the state. In 1871, 500 Caucasian men took to the streets to avenge Robert Thomson’s death after he was killed in a shootout between several Chinese men. In the span of one night, the mob killed 18 Chinese residents–10 percent of Los Angeles’ Chinese population. While eight rioters were brought to trial for murder, none of them were sent to jail on a technicality. Six years later in San Francisco, the Workingmen’s Party of California, a white labor union, became fearful of losing their jobs to the increasing number of Chinese immigrants in the country, and launched the “Chinese Must Go” movement. Men and women who participated in the movement attacked and burned down Chinese businesses, homes, and places of work all over the Bay Area. In an attempt to limit any new immigrants from entering San Francisco, they threatened to burn down wharves owned by the Pacific Mail Steamship Company, a line used by Chinese immigrants. While they only burned down one wharf, the movement as a whole was very successful in pushing anti-Asian sentiment throughout the country and to the federal levels of government. After fights began to break out in mines across Rock Springs, Wyoming in 1885, Caucasian miners murdered 28 Chinese coal miners, injured 15, and burned down Chinatown in what would later be known as the Rock Springs Massacre. 
Three years earlier, in 1882, Congress had passed the Chinese Exclusion Act: a federal law limiting all immigration of Chinese laborers. This law not only halted Chinese immigration for ten years, but also declared Chinese immigrants ineligible for naturalization. It was the first significant law that limited any immigration into the United States. Ten years after it was signed into law, the Chinese Exclusion Act was revamped by the Geary Act of 1892, which extended the immigration ban another ten years and forced Chinese residents to carry special documentation with them at all times to avoid deportation or hard manual labor. They would also have to have a credible white witness to bail them out of jail. Another ten years later, the United States extended the ban on Chinese immigration indefinitely. As a new wave of Filipino, Korean, Japanese, and Indian immigrants began flooding into America at the turn of the 20th century, an unfortunately corresponding surge of xenophobia spread across the country as well. Phrases such as “yellow peril,” “Hindu invasion,” and “tide of turbans” were used by anti-Asian groups to incite fear of Asian immigrants all over the country, portraying them as a virus that had to be stopped. The government took this literally. Starting in 1907, the U.S. and Japanese governments signed the Gentlemen’s Agreement: an arrangement under which the United States would not limit Japanese immigration in return for the Japanese government halting Japanese emigration by ceasing to issue passports to the continental U.S. Ten years later, the federal government passed the Immigration Act of 1917, which extended the same immigration limitation to a collective “Barred Zone,” consisting of China, Japan, Korea, India, and other countries across South and Southeast Asia. Soon, immigrants originating from these countries would also be denied citizenship, naturalization, and the rights to marry a Caucasian and own land.
There was, however, one prominent country excluded from the “Barred Zone:” the Philippines. After the end of the Spanish-American War, the United States annexed the Philippines as an American territory. As a result, Filipinos were considered “U.S. nationals,” and were allowed to immigrate into the country. Unfortunately, despite their ability to immigrate and their classification as nationals, Filipinos were not exempt from acts of racism once they arrived. Intimidated Americans identified them as a political and medical threat, claiming that they would bring “unruliness and tropical diseases” to America. After years of discrimination and independence movements in the Philippines, Congress passed the Tydings-McDuffie Act of 1934, which established the country’s independence over a ten-year period but also created an immigration quota of 50 persons per year. After the bombing of Pearl Harbor by the Japanese in December of 1941, Americans across the country became increasingly fearful of another attack on U.S. soil, especially on the West Coast, where businesses and communities were thought to be most vulnerable.
Influenced by his political and military advisors, President Roosevelt addressed the nation’s fear by signing Executive Order 9066: an order which authorized the forced evacuation and incarceration of any American deemed a threat in hopes of creating “every possible protection against espionage and against sabotage to national-defense material, national-defense premises, and national-defense utilities.” With total disregard for their jobs, their families’ histories in the United States, and the lives they had created in their communities, 120,000 Japanese Americans, along with a smaller number of German and Italian Americans, were given six days’ notice to pack as many things as they could carry before being forced out of their homes; some had their homes searched by the FBI, which would seize anything it considered “contraband.” They were then relocated to Assembly Centers, and later to permanent Relocation Centers, where they lived in poor conditions until the end of the war. Concurrently, President Roosevelt signed the repeal of the Chinese Exclusion Act in 1943, letting Chinese citizens immigrate to the United States for the first time in 41 years. Almost 80 years later, the United States as a whole still fails to properly address the unmistakable acts of racism committed against Asian communities all over the country. By coining the terms “China virus” and “Kung Flu,” former President Trump set a precedent of harsh rhetoric blaming Asian Americans for the spread of the virus in all 50 states. “As I walked out of Eagle Rock Plaza, a woman said, ‘Oh my God! China brought the virus here!’ When I crossed her path to walk toward my car and to confirm if that comment was meant for me, she jumped back and nearly yelled, ‘Oh my God!
Please don’t give me the virus!’” an anonymous Los Angeles resident reported to Stop AAPI Hate, a reporting center that tracks acts of hate, discrimination, shunning, child bullying, violence, and harassment against Asian Americans and Pacific Islanders in the United States. Of the 2,583 cases reported from around the country to Stop AAPI Hate between March 19 and October 8, 70.6 percent were said to be acts of verbal harassment or name-calling. Unfortunately, many cases of anti-Asian racism have taken a more extreme turn in the last few months as the anti-Asian climate continues. On January 31, the violent assault of a 91-year-old man in Oakland’s Chinatown was caught on camera and released to the public. It is believed that the same suspect attacked a 60-year-old man and a 60-year-old woman the same day, USA Today reported. Since then, America has continued to witness a rise in violence toward Asian Americans, including the brutal death of Angelo Quinto in the San Francisco East Bay. To break the nearly 171-year-old cycle of neglect toward acts of anti-Asian racism in America, the country as a collective must choose to change. The government must begin to not only speak out against those who bring about violence and hate, but also pass laws that make such behaviors unacceptable. If the last year has taught the country anything, it is that actions speak louder than words. With that in mind, Americans at the community, state, and federal levels must begin to actively take steps toward putting an end to the anti-Asian atmosphere that has been created.
With the current health crisis, methods for boosting one’s immune system are more of an interest than ever. While there are plenty of vitamins and supplements that can help in this endeavor, exercise also plays a big role. High-intensity workouts, such as interval running, have become increasingly popular in curbing weight gain and health concerns. But can increasing intensity through interval running improve immune system function? When it comes to this question, the answer isn’t clear cut. Interval running has long-term effects on the body that can be linked to improved immunity over time. Still, in the short-term, some studies suggest that moderate-intensity training might be better for improving immediate immune system function. Let’s dig in.

What Is Interval Running?

Interval running is a high-intensity workout that, as its name suggests, is broken up into separate parts. The entire point of interval training is to balance the spurts of high-intensity movement with more moderate recovery periods. So, when running, you’d alternate between vigorous sprints and fast-paced walking or jogging. This type of interval training boasts a variety of benefits for those who choose to practice it regularly. Not only does the workout’s increased intensity allow you to burn more calories, but it improves stamina over time. Runners looking to raise their endurance will benefit from interval running as they continue to practice it. By performing these high-intensity regimens regularly, you can strengthen the body and push past limits. Due to its faster speeds, interval training also makes workouts less time-consuming. The ability to burn more calories while dedicating less time to exercise is another reason many are drawn to this type of routine. Of course, there are physical health benefits to consider when it comes to interval training as well.
Numerous studies highlight positive long-term effects on the body, many of which suggest this type of exercise may be worthwhile in the long run.

How Does HIIT Affect the Body?

So how does interval running — and HIIT in general — affect your body? For one, it appears to result in better cardiovascular health over time. A 2015 study from Sports Medicine shows that, for healthy adults, HIIT leads to improved cardiovascular function in the long term. It seems the aerobic nature of interval training results in better blood vessel function, as well as improved VO2 max. According to a study from the British Journal of Sports Medicine, individuals with chronic cardiovascular disease who practiced HIIT improved their cardiovascular health nearly twice as much as counterparts who engaged in more moderate exercise. Beyond heart health, interval running can also help boost your metabolism. HIIT decreases fasting blood glucose levels and lowers insulin resistance, both factors associated with good metabolic health. Since your metabolic health affects just about every bodily function — and has been closely linked to immune function — this could help boost immunity over time. There have also been studies suggesting that HIIT improves cognitive function, specifically cognitive control and memory. With interval training showing long-term effects in both the brain and the body, it’s no wonder it has become a regularly utilized method of exercise.

How Does It Impact Your Immune System?

So, how exactly does all of this impact your immune system? Well, many of the physical effects of interval running have also been linked to better immune health. Specifically, boosted metabolic and cardiovascular health can strengthen your immunity. That’s why, in the long term, interval running could result in a healthier immune system. There is a difference when you look at the short-term effects, however.
Due to the strenuous nature of HIIT, its immediate effects on the body may be less ideal. There’s no denying interval training can take a physical toll on you, especially when done too often. In fact, some studies have shown those partaking in HIIT to have fewer immune cells afterward than those practicing a more moderate workout regimen. One review from the Journal of Applied Physiology even reports that there’s a window following HIIT workouts, during which you’re actually more susceptible to coming down with an illness. This makes sense when you consider the immediate effects HIIT exercise has on the body. Because your body is working at the maximum output during something like interval running, it releases stress hormones during the process. Those hormones can lead to lower immunity, at least until your body recovers. Putting chronic stress on the body has been linked to both decreased cell activity and decreased antibody production. Given the importance of white blood cells and antibodies in fighting unwanted infections, this could be a downside to HIIT. Done too frequently, interval training can also increase inflammation. Chronic inflammation is another physical factor that could impact your immune response negatively. Of course, if you handle HIIT correctly, you can at least avoid some of the fallout. Practicing proper nutrition, giving your body adequate rest, and hydrating regularly should all offset any immediate negative effects of workouts like interval running. Stocking up on carbohydrates may also minimize the impact intense exercise has on immunity. Even with these short-term effects, the long-term benefits of interval running may be worth it. These are, however, details to keep in mind. If you’re counting on HIIT to prevent you from getting sick, it may actually leave you more vulnerable. If that doesn’t sit well with you, a more moderate exercise regimen may be a better match. 
If you do decide interval running is for you, however, these short-term effects serve as a reminder not to overdo it. Experts recommend keeping interval training to two to three days per week. The remainder of your time should be spent allowing your body to recover. You’ll also want to keep an eye out for signs of burnout, like fatigue or moodiness.

How to Decide If Interval Running Is for You

Now that you have a better understanding of how interval running affects the body in the short- and long-term, you can decide if it’s right for you. Due to the heavy output required, HIIT isn’t for everyone. You’ll need a good amount of dedication and stamina in order to keep up with such a routine. You’ll also need to ensure it aligns with your goals. If you’re looking to boost your bodily function in a more long-term way, interval running is a solid choice. The same can be said for those looking to lose weight or decrease exercise time. If you have a health condition, it’s also important to do your research and check with your doctor prior to starting an interval-running regimen. Those with chronic muscle and joint problems may worsen the problem with HIIT. It’s also not always safe to partake in interval running with certain cardiovascular ailments. If you’re concerned, consulting a doctor is always the best bet. If you’re unsure if interval running aligns with your goals, you can also create a pro and con list like the one below.

| Pros | Cons |
| --- | --- |
| Long-term cardiovascular health | Increased stress hormones |
| Improved metabolic health | Immediate vulnerability to sickness |
| Faster calorie burn/less time exercising | Could lead to physical burnout |
| Increased immunity in time | May not be doable with health conditions |

Setting Up Your Regimen

If you’ve done the math and decided interval running is something you’d like to try, here are the steps you should take to set up your regimen.
1. Start with a warm-up. Tip: A light jog for about 10 minutes should prepare you for the rest of the workout.
2. Decide on an interval and run for that amount of time. Tip: When choosing your interval pace and time, start with something that’s not too intense but is a step up from what you’re used to.
3. Following each high-intensity interval, set a rest interval. Tip: Your rest interval should be a steady jog or fast-paced walk.
4. Repeat the high-intensity and rest intervals until you feel you’re unable to continue. Tip: It’s critical to know your limits when repeating intervals. Don’t push too hard or you’ll risk hurting yourself or getting sick.
5. Hydrate and spend time stretching. Tip: Look up stretches that will ease the tension on the muscles you use while exercising.
6. Monitor how your body feels after each of your HIIT workouts. Tip: If you begin feeling fatigued or unwell, take rest days to recover. Also, never work out when you have cold or flu symptoms; always wait for symptoms to subside before returning to interval running.

If you’re hoping to strengthen your immune system, the last two steps are critical. Hydrating and stretching will ease the strain you’re putting on your body. Meanwhile, making sure you don’t overdo your HIIT will decrease the likelihood of getting sick. With luck, it should also help improve your immune function over time. It will take hard work and dedication, but your physical health should improve by leaps and bounds if you stay consistent. In the long run, your body just may thank you for the effort.

Some Of Our Favorite Items To Help Your Workouts

1. Burn One by NutraOne: Fat Burner and Energy Boost

Looking to supercharge your HIIT workouts? Skip Starbucks and the long lines at the coffee shop with a dose of Burn One and its 250mg of clean caffeine.
This supplement is all-natural and made in a GMP-certified factory in Quebec, Canada. Try a bottle risk-free and see how you can get the perfect workout with no jitters before or after.

2. CLA One: Metabolism Booster

If you’re looking to boost your immune system and liver efficiency while shedding fat at the same time, this is the supplement for you. CLA One is a staple in every bodybuilder’s and athlete’s diet; you can promote fat loss rapidly with a few pills between training sessions and meals each day.

3. Detox One

My third pick is Detox One. This supplement will benefit your interval training in a way you might not even realize. A huge part of training and building a strong immune system is the nutrients you consume, and when you add Detox One to your regimen, you can’t lose. Detox One will flush the impurities out of your body in a safe and natural way and help you absorb the nutrients your body thrives on. I noticed a flatter stomach and increased energy after 3-4 weeks; you will have to try it to see what I’m talking about.

My Final Pick For Interval Running – The FDBRO Elevation Mask

One thing most athletes would agree on is that wearing an elevation mask while doing HIIT or interval training is one of the hardest things you can do. Wearing this oxygen-restricting training mask takes a lot of experience and mental toughness. It simulates training at high altitude and may have your body shaking in fear when you feel like you can barely breathe on the highest setting. The rewards of wearing one are substantial: longer endurance and a stronger respiratory system are just two worth mentioning. You should seriously consider ordering one of these training masks if your interval workouts start getting too easy. Leave a comment below and let us know your experience with interval training and how it has improved your health and physique.
Throughout history and across various cultures, hair and hairstyles have had significance far beyond aesthetic beauty and protection from weather. Hairstyles have symbolised one’s age, tribal affiliations, ethnicity, religion, social status, marital status and more, and have allowed ethnic and cultural groups to define and even reclaim their identities. Many of these styles have stood the test of time and remain prominent today within and without the unique cultures in which they originated. In the age of social media, however, many of these traditional hairstyles have been adopted—or appropriated rather—by celebrities, social media influencers and festival-goers as mere fashion statements without any thought given to their deep cultural significance or painful history. Thus, learning about the history and significance of hair is vital to fostering a more mindful and respectful attitude towards Black, Indigenous and People of Colour. Take a walk through the strands of human history with these five distinct hairstyles from around the world.

Cornrows: maps to freedom

Cornrows are a type of braided hairstyle in which “the hair is braided very close to the scalp, using an underhand, upward motion to make a continuous, raised row.” Also known as canerows in the Caribbean, they can be styled in straight lines as well as in intricate geometric or curved patterns. While some view cornrows as a modern trend, their history actually dates back millennia. They have been traced back to around 3,000 BCE, where they were found in various cultures of West Africa and the Horn of Africa. The earliest documentation has been in Stone Age paintings in the Tassili Plateau of the Sahara. However, in more recent history, cornrows were used as a powerful tool of resistance against slavery and bondage. During the Middle Passage, millions of enslaved Africans were chained and transported across the Atlantic to the Americas.
They were made to shave their heads for the purpose of “sanitation” and as a way to strip them of their culture and identity. But not everyone complied. Many of those enslaved grew out their hair and braided it into cornrows as an act of resistance, rebellion and reclamation of cultural identity. Cornrows were also used as a clever method to communicate and create maps for escape from the homes of slave-owners. This was especially prevalent in South America. According to a piece by The Washington Post, “In the time of slavery in Colombia, hair braiding was used to relay messages.” Styles such as departes, with thick, tight braids, were used to signal the desire to escape, and other styles like curved braids were used to represent escape routes and roads. Braiding escape maps into cornrows proved an effective way for enslaved people to avoid being caught with their escape plans. The history of cornrows serves as a testimony to the strength, ingenuity and resilience of Black people; a reminder that hairstyles can often be much more than a fashion statement.

Himba dreadlocks: an elegant display of age, status and courtship

Dreadlocks are another ancient hairstyle, as old as time itself. These rope-like strands are created by locking or braiding the hair. Contrary to popular belief, dreadlocks were not limited to African cultures; they have also been found in the ancient histories of Greece, Egypt and India, with the earliest examples dating back to 1,500 BCE. However, perhaps the most striking and unique iteration of the dreadlock is the one that adorns the heads of the elegant women of the Himba Tribe, a group of people living in the hot and arid Kunene Region of northern Namibia. Their skin and dreadlocks are coated in a red-tinged paste called ‘otjize’, a mixture of the aromatic resin of the omazumba shrub, animal fat and ground ochre pigment stone.
The paste is used to protect their skin and hair from the harsh weather of the desert, as well as for aesthetic purposes. The red colour represents the colour of earth and blood: the essence of life. Hair plays a vital role in Himba culture. Right from birth, hairstyles symbolise age, marital status, wealth and rank, and the thickness of the hair can also indicate a woman’s fertility. The heads of newborns are kept shaved, leaving only a small tuft of hair on the crown. As they grow older, boys braid their hair into a single plait while girls braid theirs into two plaits that hang in front of the face. Once a girl reaches marriageable age, she wears a headdress called the Ekori, made from tanned goatskin. Women who have been married for a year wear another intricate goatskin headdress called the Erembe, along with strands coated in the otjize paste, creating the iconic dreadlocks seen in photographs of Himba women. The hair is also extended artificially using goat hair or hay. Single men wear a single braid plaited at the back of the head, and married men cover their heads with a cloth turban. The Himba people have become the subject of many camera lenses due to their beauty, but the photographs rarely tell the intricate details and meanings behind their stunning hairstyles.

Mohawk: the hairstyle of warriors

In popular culture, the mohawk, also known as the mohican, is a hairstyle that has come to symbolise non-conformity and rebellion, its true history overshadowed by the punk rock subculture. Although the popular version consists of a strip of spiked or non-spiked hair along the middle of the head, shaved on either side, the original style consisted of a square patch of hair on the back of the crown of the head, created by plucking hair rather than shaving it. The style comes primarily from the Native American Pawnee people as well as from the Kanien’kehá:ka or Iroquois people.
It takes its name from the Mohawk Nation, one of the Six Nations of the Iroquois Confederacy. The mohawk hairstyle was worn by young warrior men who were in charge of protecting their tribe. It was considered disrespectful for anyone else to wear the hairstyle, although it was a common hairstyle among Pawnee people. Pawnee warriors also wore a roach headdress, which was attached to a scalp-lock (a single lock of long hair on the head) and made to stand erect by stiffening the hairs of the roach with fat and paint. The headdress was usually made of moose hair, porcupine hair and hair from the tails of white deer, as well as black turkey beard, and was often dyed red and decorated with feathers, arrows and shells. The mohawk is a symbol of strength and bravery, and while the hairstyle may appeal to some for its rebellious look, we must remember its importance to the Indigenous peoples of America.

Queue: hair becomes political

In medieval China, hairstyles reflected a people’s affiliation to a dynasty or tribal confederation. The queue is a hairstyle from China in which the front portion of the head is shaved every 10 days while the rest of the hair is grown out and braided at the back. It was brought to mainland China by the Manchu people, an ethnic group from the region of Manchuria in Northeast China, when they invaded the Ming Dynasty and seized Beijing to establish the Qing Dynasty. Under Qing rule, which lasted from 1644 until 1912, all Han Chinese men were ordered to wear the queue as a sign of submission to the ruling Manchus. This was met with strong resistance, causing protests and rebellions to erupt, since Han Chinese men followed the philosophy of Confucius, who said, “We are given our body, skin and hair from our parents, which we ought not to damage.
This idea is the quintessence of filial duty.” Furthermore, for the Han Chinese, the order to shave a portion of their hair became a question of giving up their cultural identity and accepting a new national one. Refusal to wear the style within 10 days of the emperor issuing his ‘Queue Order’ was treated as treason, for which the punishment was execution. Most Han men did not object to braiding their hair but fiercely objected to shaving their heads. Hence, the ones who rebelled grew hair in the front as a sign of protest. This caused the Qing rulers to enforce the shaving of the front of the head much more strictly than the braiding, and it became the main signifier of loyalty to the government. Hair became a trigger for political uprising. In the words of Chinese writer and poet Lu Xun, “In fact, the Chinese people in those days revolted not because the country was on the verge of ruin, but because they had to wear queues." As years passed, most Han men were forced to submit to the order. Even Chinese immigrants to the United States in the 19th century had to keep their queue to avoid being profiled as revolutionaries by the Chinese government. This led to the stereotyping and caricaturisation of Chinese people by White Americans and Europeans who were indifferent to its bloody history.

Nihongami: the hallmark of the geisha

Nihongami, the literal translation of which is “Japanese hair,” is a term that denotes a range of traditional Japanese hairstyles with distinctive societal roles and styles, most popularly associated with geishas. According to Wikipedia, it consists of “two ‘wings’ at the side of the head, curving upwards towards the back of the head to form a topknot or ponytail, with a long loop of hair below this also drawn into the topknot.” The Edo Period, from 1603 to 1868, was the golden age of nihongami, as it was during this time that many intricate variations featuring buns and wings were developed and became fashionable.
The styles differed according to a woman’s age, social status and occupation. For example, the shimada was worn by girls in their late teens, the sakkō by newly married women and the takashimada by brides. Perhaps the most interesting examples are the hairstyles that geishas and maiko (novice geishas) wear through the various stages of their apprenticeship and career. Geishas are professional performing artists trained in singing, dancing, conversation and tea ceremonies. A maiko, on the other hand, is a young woman below the age of 21 who is undergoing training to become a professional geisha. Throughout a maiko's apprenticeship, she wears five different hairstyles at various stages: wareshinobu, ofuku, katsuyama, yakko-shimada and sakkou. The wareshinobu, adorned with two red silk ribbons with white spots, is worn by a junior maiko during the first three years of the apprenticeship, followed by the ofuku after a coming-of-age ceremony on the maiko’s 18th birthday, through which she becomes a senior maiko. The katsuyama and yakko-shimada hairstyles can then be worn by the senior maiko on special occasions and festivals. Finally, the sakkou is worn at a maiko’s graduation ceremony, during which she officially becomes a geisha. Traditionally, these hairstyles were all created by expert hairstylists known as keppatsu-shi, but after World War II the number of such hairstylists dwindled significantly and geishas began to wear wigs known as katsura. These wigs come in various styles, the most popular being geigi shimada, geiko shimada and chū takashimada. So next time you see a geisha in a movie, remember the years of dedicated training they had to go through to work their way up from junior maiko all the way to professional geisha.
Over the last few years, China has been increasing its economic and military presence in the Indian Ocean Region (IOR). The China-Pakistan Economic Corridor (CPEC), currently estimated to be worth $87 billion, is a multi-purpose project which includes development of the deep-sea port of Gwadar, and is viewed as an attempt by the authorities in Beijing to increase their strategic footprint in the IOR. This increasing influence has often irked countries in the region, including India. India and China, the two Asian giants, have long shared a complex relationship, often marred by territorial disputes. The recent face-off along the Line of Actual Control (LAC) in Ladakh has seen various twists and turns over the last few months. In June, satellite imagery confirmed that China had built new structures near the site of the border clash with India, escalating the possibility of further conflict — even as the two sides were working towards disengagement. The current border tension has brought to the fore the power of Space technology, especially Earth Observation (EO), which plays a key role in monitoring the situation on the ground, thereby enabling decision-makers to plan and mobilize resources. No wonder the two countries have been scaling up their Space/EO programs to increase their surveillance and monitoring capabilities. In 2019, India and China were the top two spenders on Space in Asia. In its budget for 2020-21, India allocated INR 13,479.47 crore ($1.9 billion) to the Department of Space (DOS), a 7.5% increase from the previous year and a 45.2% rise since the 2015-16 budget. On the other hand, according to the Economic Survey of India 2019-20, China has spent seven times more than India on its Space program. Although opaque, China’s 2017 Space budget was estimated at $8.4 billion by the Organization for Economic Cooperation and Development.
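The budget projection that follows is a straightforward compound-growth calculation, which can be sketched as below. This is only an illustrative check: the $8.4 billion base is the OECD estimate above, and the 6% annual growth rate and proportional scaling of the budget are assumptions, not official figures.

```python
def project_budget(base: float, rate: float, years: int) -> float:
    """Compound a starting budget (in $ billions) at a fixed annual growth rate."""
    return base * (1 + rate) ** years

# $8.4B in 2017, grown ~6% per year to 2030 (13 years)
budget_2030 = project_budget(8.4, 0.06, 2030 - 2017)
print(f"Projected 2030 budget: ${budget_2030:.1f} billion")
```

Compounding at 6% for 13 years roughly doubles the base, which is consistent with the $15 to 20 billion range cited here.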
If China maintains about 6% annual economic growth to 2030 and scales its Space budget in proportion to the overall economy, it would have a Space budget of roughly $15 to 20 billion in 2030. India needs to proactively consider utilizing its Space capacities for safeguarding its territorial interests. Although the Indian Space Research Organisation (ISRO), the nodal Space agency, is a civilian agency, it also caters to defense and intelligence requirements. Since its inception in 1969, ISRO has launched 119 spacecraft, including both military-use and civilian-use satellites. The Cartosat satellite series is part of the Indian Remote Sensing Program and is used for Earth resource management, defense services and monitoring. Cartosat-3 is the ninth satellite in this series, a third-generation advanced satellite with high-resolution imaging capability. Notably, the Cartosat-2 series played a crucial role in the 2016 “surgical strike”. It has a resolution of about 0.65 meter and a revisit time of 90 minutes. The satellite is reportedly capable of capturing a minute-long video of a fixed spot through its agile attitude control. ISRO also has a constellation of synthetic aperture radar (SAR) satellites that allow imaging at night and through cloud cover — RISAT (Radar Imaging Satellite). RISAT-2BR1 is the second spacecraft in the new RISAT-2B series and is nearly identical to the first satellite. It is an all-weather radar reconnaissance satellite, able to image the Earth regardless of clouds obscuring the surface. The RISAT-2B series enhances India’s defense capabilities, and the RISAT constellation complements optical imaging satellites such as the Cartosat series. It can be used for monitoring troop movement and military build-up. Rightly termed a “Spy in the Sky”, India’s first electronic surveillance satellite, EMISAT, adds teeth to the situational awareness of the armed forces, as it provides the locations of hostile radars.
This Space-based electronic intelligence, or ELINT, can monitor electronic or human activity on the India-China and India-Pakistan borders. The Hyper Spectral Imaging Satellite (HysIS) is often dubbed the “Sharp Eye”. Central to HysIS is an optical imaging detector chip, which has been indigenously designed by the Space Application Centre (SAC) of ISRO. HysIS is believed to have the capacity to detect and identify hidden targets. This means that even if the resolution of an optical device is low, a target can be detected and classified if its spectral signature is known (for example, man-made objects on the sea surface, tanks in the desert, or camouflaged missile launchers). It also has advanced target detection and missile cueing systems for UAVs and manned combat aircraft. The China National Space Administration (CNSA), the country’s national Space agency, is responsible for its Space program. Even though it is difficult to determine China’s actual Space capabilities, especially since its programs are cloaked in layers of secrecy, in view of past and current capacity, it won’t be incorrect to say that China is ahead of India in terms of overall Space infrastructure. China has recently launched the classified Shiyan-6 (02) satellite, which, according to the State media, is aimed at “Space environment study and related technology experiments”. A similar description is used for the Yaogan satellite series, which is believed to be used for military reconnaissance. Claimed to be “mainly used for civilian purposes”, Gaofen 9-03 is the third satellite to be launched in the Gaofen 9 series. Chinese media reports suggest that all payloads carried optical Earth-imaging cameras with a resolution of better than 3.3 feet, or 1 meter. The Gaofen fleet includes satellites with optical, infrared and radar imaging observatories. The HEAD-5 microsatellite can carry out on-orbit information collection, including that on ships, aircraft and the Internet of Things (IoT).
The Gaofen 9-02 and HEAD-4 satellites will be used for a new “EO technology experiment”. The Gaofen 9-02 is an optical Remote Sensing satellite capable of providing photographs with a resolution of about one meter, and will also support construction of projects under China’s Belt and Road Initiative (BRI). The HEAD-4 will help in collecting information on ships, aircraft and the IoT. China has also launched three Yaogan-30 Group 6 surveillance satellites, which will be used for electromagnetic environment detection and related technological tests. The Yaogan series is believed to be operated by the Chinese military, primarily for intelligence gathering. However, analysts suggest the Yaogan-30 family of satellites could be testing new electronic eavesdropping equipment or helping the Chinese military track US and other foreign naval deployments. Beijing wants to reduce its reliance on foreign technology in topographic mapping, and Gaofen-7, a high-resolution Remote Sensing satellite, is a step in that direction. Capable of providing stereoscopic imagery, Gaofen-7 is a sub-meter resolution optical satellite that can map the entire world’s land stereoscopically, with a margin of error of less than a meter. The Ziyuan III 02 is the second satellite in the Remote Sensing mapping system that China aims to build by 2030. This satellite will join Ziyuan III 01 to create a network and capture high-definition 3D images and multispectral data. The Ziyuan program seems to cover different civil and military EO programs. Though the new ZiYuan-3 series will be used for stereo mapping, its applications do cover national security. The Chinese Space program continues to mature rapidly. For the People’s Liberation Army (PLA), Space is critical to overall warfare. Although India is trying to catch up, it has a great deal of work to do in terms of number of launches and manufacturing capacity.
India needs to make rapid progress in the formalization of definite missions of the Defence Space Agency (DSA) and the Defence Space Research Organisation (DSRO). The Indian government established the DSA and DSRO in April 2019 and June 2019, respectively — in an attempt to modernize the armed forces with the integration of Space technology. The DSA will develop capabilities to protect India’s interests in outer Space and deal with the threat of a Space war. The DSA will also guide the development of various capabilities and platforms, including co-orbital weapons, to protect the country’s assets in Space. On the other hand, the DSRO is tasked to provide technical and research support to the DSA. Expand navigation operations: China recently completed the Beidou Navigation Satellite System (BDS), which is fully functional now. Beidou provides an accuracy of 3.6 m (public), 2.6 m (Asia Pacific, public) and 10 cm (encrypted). China is promoting its use in the countries signed up for the BRI. China’s push for a new navigation network was driven by a desire to reduce its dependence on America’s GPS, particularly in its armed forces. India has its own navigation system, the Indian Regional Navigation Satellite System (IRNSS), also known as NavIC. To date, ISRO has built nine satellites in the IRNSS series, of which eight are currently in orbit, though there are plans to increase the constellation size to eleven. NavIC provides a position accuracy of 5 meters to civilians and 10-20 cm for encrypted/military purposes. BDS has an advantage over its Indian counterpart because it is a global system with 35 satellites, operating in the L1 and L2 bands that are common to other global GNSS systems like GPS, GLONASS and Galileo. NavIC works with eight Geosynchronous satellites and offers a regional navigation facility. However, NavIC has higher location precision and works better in urban and mountainous regions.
Accelerate launch targets: Earlier this year, Beijing released a blue book setting out China’s Space achievements and future missions, and announced that it is going to send more than 60 spacecraft into orbit with over 40 launches in 2020. China will be mainly focusing on the completion of three major missions: Beidou-3 (fully operational), Lunar exploration and the network of Gaofen observation satellites. India needs to launch more EO satellites to deliver better imaging capabilities. In 2019, India launched only seven spacecraft, of which four were EO satellites, whereas China completed 34 Space launches, more than any other country in the world. ISRO is planning to launch 10 EO satellites by March 2021, along with 26 missions, including Gaganyaan and Chandrayaan. Also in the pipeline is a new series of Remote Sensing satellites, Geo-Imaging Satellite-1 (Gisat-1) and Gisat-2, which will enhance India’s land mapping capabilities. Gisat has both military and civilian uses. India lags in replacing satellites before they expire: when an old satellite expires, it leads to a data gap that is filled only when a replacement satellite is launched. Operating an updated fleet means precise and real-time information, which further helps in better decision-making. In terms of number of launches per year, China clearly has an edge over India. The smaller number of missions clearly indicates the lack of production capability within ISRO to support the growing demand in the country for Space-based services. The electronic components and systems of Indian satellites are generally imported. These components have to meet strict quality standards and have to be reliable, working through the entire mission life of a satellite (15 years). The requirement for such components is only going to increase as the Space agency becomes more aggressive in pursuing new missions.
India should, therefore, consider increasing its workforce and in-house capacity to meet the demand, apart from actively involving private players in the Space industry. Technology has made modern warfare almost contactless. Securing borders requires surveillance and Remote Sensing, real-time situational awareness, information processing and communication. Satellites have, therefore, emerged as strategic assets, playing a critical role in this process. For India to catch up with China on this front, it will have to make comprehensive efforts around research, development and innovation.
<urn:uuid:e418edca-e72b-4504-acca-c28adca91548>
CC-MAIN-2021-43
https://www.gwprime.geospatialworld.net/special-feature/catching-up-with-neighbor/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00230.warc.gz
en
0.938713
2,567
3.171875
3
This example shows how to use the fast gradient sign method (FGSM) and the basic iterative method (BIM) to generate adversarial examples for a pretrained neural network. Neural networks can be susceptible to a phenomenon known as adversarial examples, where very small changes to an input can cause the input to be misclassified. These changes are often imperceptible to humans. In this example, you create two types of adversarial examples: Untargeted — Modify an image so that it is misclassified as any incorrect class. Targeted — Modify an image so that it is misclassified as a specific class. Load a network that has been trained on the ImageNet data set and convert it to a dlnetwork object. net = squeezenet; lgraph = layerGraph(net); lgraph = removeLayers(lgraph,lgraph.Layers(end).Name); dlnet = dlnetwork(lgraph); Extract the class labels. classes = categories(net.Layers(end).Classes); Load an image to use to generate an adversarial example. The image is a picture of a golden retriever. img = imread('sherlock.jpg'); T = "golden retriever"; Resize the image to match the input size of the network. inputSize = dlnet.Layers(1).InputSize; img = imresize(img,inputSize(1:2)); figure imshow(img) title("Ground Truth: " + T) Prepare the image by converting it to a dlarray object. X = dlarray(single(img),"SSCB"); Prepare the label by one-hot encoding it. T = onehotencode(T,1,'ClassNames',classes); T = dlarray(single(T),"CB"); Create an adversarial example using the untargeted FGSM. This method calculates the gradient of the loss function L(X,T) with respect to the image X for which you want to find an adversarial example, given the class label T. This gradient describes the direction to "push" the image in to increase the chance it is misclassified. You can then add or subtract a small error from each pixel to increase the likelihood the image is misclassified. The adversarial example is calculated as XAdv = X + epsilon*sign(gradient of the loss with respect to X). The parameter epsilon controls the size of the push.
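The FGSM update itself is just a signed-gradient step, so it can be illustrated outside MATLAB. Below is a minimal Python sketch on a toy logistic model; the weights, input and epsilon value are invented for illustration and are not part of the MathWorks example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(g):
    return (g > 0) - (g < 0)

def fgsm(x, w, y, eps):
    """One untargeted FGSM step for a logistic model p = sigmoid(w . x).

    For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w,
    so each feature is pushed by eps in the sign of that gradient."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy weights and input (hypothetical, for illustration only).
w = [1.0, -2.0, 0.5]
x = [2.0, -1.0, 1.0]

def predict(v):
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)))

x_adv = fgsm(x, w, y=1.0, eps=2.0)
# The clean input is confidently class 1; the perturbed input flips to class 0.
print(predict(x), predict(x_adv))
```

The same idea scales to images: the gradient sign gives a per-pixel direction, and epsilon trades off attack success against visibility of the change.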
A larger value increases the chance of generating a misclassified image, but makes the change in the image more visible. This method is untargeted, as the aim is to get the image misclassified, regardless of which class. Calculate the gradient of the image with respect to the golden retriever class. gradient = dlfeval(@untargetedGradients,dlnet,X,T); Set epsilon to 1 and generate the adversarial example. epsilon = 1; XAdv = X + epsilon*sign(gradient); Predict the class of the original image and the adversarial image. YPred = predict(dlnet,X); YPred = onehotdecode(squeeze(YPred),classes,1) YPred = categorical golden retriever YPredAdv = predict(dlnet,XAdv); YPredAdv = onehotdecode(squeeze(YPredAdv),classes,1) YPredAdv = categorical Labrador retriever Display the original image, the perturbation added to the image, and the adversarial image. If the epsilon value is large enough, the adversarial image has a different class label from the original image. The network correctly classifies the unaltered image as a golden retriever. However, because of the perturbation, the network misclassifies the adversarial image as a Labrador retriever. Once added to the image, the perturbation is imperceptible, demonstrating how adversarial examples can exploit robustness issues within a network. A simple improvement to FGSM is to perform multiple iterations. This approach is known as the basic iterative method (BIM) or projected gradient descent. For the BIM, the size of the perturbation is controlled by the parameter alpha, representing the step size in each iteration, as the BIM usually takes many smaller FGSM steps in the direction of the gradient. After each iteration, clip the perturbation to ensure the magnitude does not exceed epsilon. This method can yield adversarial examples with less distortion than FGSM. When you use untargeted FGSM, the predicted label of the adversarial example can be very similar to the label of the original image.
For example, a dog might be misclassified as a different kind of dog. However, you can easily modify these methods to misclassify an image as a specific class. Instead of maximizing the cross-entropy loss, you can minimize the mean squared error between the output of the network and the desired target output. Generate a targeted adversarial example using the BIM and the great white shark target class. targetClass = "great white shark"; targetClass = onehotencode(targetClass,1,'ClassNames',classes); Set the epsilon value to 5, set the step size alpha to 0.2, and perform 25 iterations. Note that you may have to adjust these settings for other networks. epsilon = 5; alpha = 0.2; numIterations = 25; Keep track of the perturbation and clip any values that exceed epsilon. delta = zeros(size(X),'like',X); for i = 1:numIterations gradient = dlfeval(@targetedGradients,dlnet,X+delta,targetClass); delta = delta - alpha*sign(gradient); delta(delta > epsilon) = epsilon; delta(delta < -epsilon) = -epsilon; end XAdvTarget = X + delta; Predict the class of the targeted adversarial example. YPredAdvTarget = predict(dlnet,XAdvTarget); YPredAdvTarget = onehotdecode(squeeze(YPredAdvTarget),classes,1) YPredAdvTarget = categorical great white shark Display the original image, the perturbation added to the image, and the targeted adversarial image. Because of the imperceptible perturbation, the network classifies the adversarial image as a great white shark. To make the network more robust against adversarial examples, you can use adversarial training. For an example showing how to train a network robust to adversarial examples, see Train Image Classification Network Robust to Adversarial Examples. Calculate the gradient used to create an untargeted adversarial example. This gradient is the gradient of the cross-entropy loss.
function gradient = untargetedGradients(dlnet,X,target) Y = predict(dlnet,X); Y = stripdims(squeeze(Y)); loss = crossentropy(Y,target,'DataFormat','CB'); gradient = dlgradient(loss,X); end Calculate the gradient used to create a targeted adversarial example. This gradient is the gradient of the mean squared error. function gradient = targetedGradients(dlnet,X,target) Y = predict(dlnet,X); Y = stripdims(squeeze(Y)); loss = mse(Y,target,'DataFormat','CB'); gradient = dlgradient(loss,X); end Show an image, the corresponding adversarial image, and the difference between the two (perturbation). function showAdversarialImage(image,label,imageAdv,labelAdv,epsilon) figure subplot(1,3,1) imgTrue = uint8(extractdata(image)); imshow(imgTrue) title("Original Image" + newline + "Class: " + string(label)) subplot(1,3,2) perturbation = uint8(extractdata(imageAdv-image+127.5)); imshow(perturbation) title("Perturbation") subplot(1,3,3) advImg = uint8(extractdata(imageAdv)); imshow(advImg) title("Adversarial Image (Epsilon = " + string(epsilon) + ")" + newline + ... "Class: " + string(labelAdv)) end Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. “Explaining and Harnessing Adversarial Examples.” Preprint, submitted March 20, 2015. https://arxiv.org/abs/1412.6572. ImageNet. http://www.image-net.org. Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. “Intriguing Properties of Neural Networks.” Preprint, submitted February 19, 2014. https://arxiv.org/abs/1312.6199. Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. “Adversarial Examples in the Physical World.” Preprint, submitted February 10, 2017. https://arxiv.org/abs/1607.02533. Madry, Aleksander, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. “Towards Deep Learning Models Resistant to Adversarial Attacks.” Preprint, submitted September 4, 2019. https://arxiv.org/abs/1706.06083.
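As a language-agnostic recap of the targeted BIM loop in this example, here is a toy Python sketch against a logistic model. The model, weights, target value and step sizes are invented for illustration, but the update mirrors the MATLAB loop: descend the signed gradient of the targeted loss by a small step alpha, then clip the accumulated perturbation to [-epsilon, epsilon] after every iteration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(g):
    return (g > 0) - (g < 0)

def targeted_bim(x, w, target, eps, alpha, iters):
    """Targeted BIM for a logistic model p = sigmoid(w . x).

    Each iteration descends the signed gradient of the squared error
    (p - target)^2 by step alpha, then clips the accumulated
    perturbation so that |delta_i| <= eps."""
    delta = [0.0] * len(x)
    for _ in range(iters):
        xa = [xi + di for xi, di in zip(x, delta)]
        p = sigmoid(sum(wi * vi for wi, vi in zip(w, xa)))
        # d/dx of (p - target)^2 = 2 * (p - target) * p * (1 - p) * w
        grad = [2.0 * (p - target) * p * (1.0 - p) * wi for wi in w]
        delta = [di - alpha * sign(gi) for di, gi in zip(delta, grad)]
        delta = [max(-eps, min(eps, di)) for di in delta]
    return [xi + di for xi, di in zip(x, delta)]

# Toy weights and input (hypothetical, for illustration only).
w = [1.0, -2.0, 0.5]
x = [2.0, -1.0, 1.0]

def predict(v):
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)))

# Drive the output toward target 0 (the "wrong" class for this input).
x_adv = targeted_bim(x, w, target=0.0, eps=2.0, alpha=0.2, iters=25)
print(predict(x), predict(x_adv))
```

Because the perturbation is clipped after every step, the final adversarial input is guaranteed to stay within an epsilon-ball of the original, which is the property that keeps BIM attacks visually subtle.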
<urn:uuid:fb840630-f2b6-4f8a-b6ba-50a34969bf69>
CC-MAIN-2021-43
https://ch.mathworks.com/help/deeplearning/ug/generate-adversarial-examples.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00230.warc.gz
en
0.780121
2,064
3.265625
3
The term "ready-to-eat food" includes all commercial types of food that are pre-packaged and sold as complete feeds. For rodents, ready-made food is always dry food. For carnivores, wet food can also be purchased as ready-to-eat food. "Finished food" for chinchillas can be pellets, extrudates or compound mixes. Ready-made food is always feed that has been strongly processed and at least partially changed in its form by industrial manufacturing. "In my opinion, many of those responsible for the feed industry belong in prison. 5 years at least and without parole." Source: Prof. O. Wassermann, Toxicology Kiel, 1998 "No industrial feed is so optimal that it can not trigger diseases in the long run due to one-sidedness [...] Industrial feed makes sick in the long run." Source: Dr. med. vet. Vera Biber During processing, the structure of the plants is lost on the one hand and, on the other, valuable nutrients and ingredients (vitamins, amino acids, trace elements) are destroyed by heat; these therefore have to be added artificially so that the feed is made "valuable" again and can then be sold as complete feed. Without the additives, the feed would be a more or less half-dead feed mass, which would not be enough for the animals to survive. So what exactly about ready-made food is unhealthy? I. Instant foods often contain problematic ingredients such as vegetable by-products, where you don't know exactly what you're feeding your pets in the end. The term "vegetable by-products" refers to low-quality residues from food production. We believe that you should give your animals the best of the best to eat and you should know exactly what you are feeding. Thus, any feed that contains by-products should be rejected. II. Many ingredients in instant food are too rich in content/energy, e.g.
cereals, seeds, honey/sugar/molasses, oils/fats, protein-rich plants (clover, alfalfa), root vegetables, fruits. Cereals are poorly tolerated by chinchillas because they have a high starch content and are rich in easily digestible carbohydrates, which in turn has a bad effect on the intestinal flora. However, most ready-made food consists of a very high proportion of cereals and their by-products. The same applies to seeds, dried fruits, etc. if given too frequently. Clover plants such as alfalfa are high-quality feed plants that provide the chins with protein in particular, but also other important nutrients such as minerals. The problem, however, is that pellets and extrudates consist almost exclusively of alfalfa and cereals (plus other energy-heavy components such as molasses) and are therefore too rich for chins, at least as a staple food. The main feed of these animals naturally consists of many different leafy components and grasses, not of a concentrate feed such as pellets and extrudates. Only green forage and roughage ensure that chinchillas have to eat and chew long enough so that the teeth are sufficiently worn - instant food, on the other hand, satiates the animals quickly! "Food from the supermarket is there to keep animals alive for a certain amount of time. Nothing more." Source: Elina Sistonen, Pet Nutritionist III. Instant food has a defective structure: its vegetable fiber is ground too small and highly processed. Chinchillas are herbivorous-folivorous small animals, adapted to ingesting and utilizing large amounts of coarse fibrous plants and plant parts (mainly herbs, leaves, grasses). Their entire physiology and digestion is adapted to chewing up the plants, absorbing the necessary nutrients from them and synthesizing others themselves.
If one now feeds fibers that are too small, as found in pellets & Co. (which veterinarians even call "pre-digested", since they are already chopped up), one takes away part of the digestive work from the rodents, which results in insufficient tooth abrasion as well as poor colonization of the intestinal flora. Yeasts, bacteria and parasites can then multiply pathologically undisturbed, which is accompanied by various digestive disorders. As already written above, the "pre-digested" plant fibers are brought into shape under the influence of heat, so that pellets or extrudates are subsequently obtained. The chinchilla cracks each pellet and extrudate instead of grinding it thoroughly with its molars as is nutritionally intended. After cracking, the artificial structure of the food disintegrates in combination with the saliva and the animal only has to swallow the already pre-chewed food mush. The unnatural crunching puts pressure on the jaw and can promote inflammation and retrograde tooth growth. The lack of thorough chewing reduces the abrasion of the constantly regrowing molars. Both lead to tooth and jaw disease in the long term. "If vitamins from the factory were better than their siblings that mature in the plant cell, nature would grow tablets on trees and bushes." Source: Prof. Dr. Dr. Linus Pauling IV. Instant food is produced with additives, flavorings, preservatives, pressing aids and binders, which have nothing in common with natural and healthy feeding. The fact that artificial additives are not really healthy is known from human nutrition at the latest. It is not much different with our pets, which are stuffed with ready-made food and often hardly get natural vitamins and amino acids to eat, but only those from the laboratory. However, it has been proven that these additives lead to health consequences: on the one hand, these can occur in the short term, e.g.
when too much has been added (this happened several times with at least two well-known chinchilla pellet brands; many animals died or became seriously ill with symptoms of poisoning), or, on the other hand, when too few additives have been added and the animals suffered from deficiency symptoms. And secondly, the consequences can be long-term, including organ damage: liver damage, kidney failure and gastrointestinal disorders, all often accompanied by emaciation. In this context, it is also interesting to know that ready-to-eat food manufacturers advertise that their food provides the chins with all the essential nutrients. But the fact is that no one knows the needs of an average chin, and such an average value would not make much sense anyway, because every animal is different and has an individual nutrient requirement. In addition, secondary plant substances are of great importance in the diet and are there to keep our animals healthy, to prevent diseases and to alleviate or heal small aches and pains. Unfortunately, this fact is completely concealed by ready-to-eat food manufacturers and the animal is regarded as an arbitrary quantity and number object. And: all primary and secondary substances in a plant act as a whole; they are subject to interactions, cancel each other out, strengthen each other, everything interlocks, etc. (to name one example: certain minerals (phosphate) and vitamins (vitamin D) are needed in certain quantities so that calcium can be utilized by the organism at all), and no ready-made food in the world can reproduce and supply this natural process! Therefore, by the way, vitamin preparations are also not recommended, because most of them cannot be metabolized by the body at all. A deficiency is best remedied by a varied, natural and healthy diet. "Nowadays, complete feeds are offered for many animals.
According to the classical definition, complete feeds are bound by the following specification: "Complete feeds are compound feeds intended to meet the nutritional requirements of animals alone." To date, we do not know all the components that a living creature needs to live. We know all the substances that an animal needs to survive, but not all the components, and certainly not their quantities, needed to ensure metabolic quality of life (health or protection against disease). For this reason, complete feeds have had to be constantly adapted to the latest scientific findings since their invention. This means that the complete feed of 20 years ago does not meet today's requirements, and today's will certainly not meet the requirements in 2027. This fact alone reduces the term "complete feed" to absurdity. However, this does not seem to bother the feed industry [...] Since, according to the German Society for Nutrition, which has the authority to issue guidelines, the requirements for all vitamins and minerals are not known even in the human sector, it is grotesque to pretend to know these requirements in the case of animal feeds with complete feed claims." Source: Lüttwitz M. v.; Schulz H.: „LÜGEN, LÜGEN, LÜGEN. Alleinfutterlüge, Vitaminlüge, Darmlüge“ ("Lies, lies, lies. The complete-feed lie, the vitamin lie, the gut lie"), Geflügel-Börse 5/2007 V. Animals fed with instant food do not absorb enough liquid. The body consists primarily of liquid, and the organism loses water daily, e.g. through defecation and urination. This means that the depot must always be replenished. The dry matter content of pellets and extrudates is about 85%. However, studies show that rodents cannot compensate for the lack of liquid caused by feeding purely or mainly dry food by drinking alone. Indeed, rodents and rabbits are naturally adapted to cover most of their fluid needs through food. If fresh food is given in addition, the problem is reduced. However, chinchillas fed with ready-made food tolerate fresh food worse than animals fed with a varied and natural diet.
First signs of fluid deficiency can be constipation. In the long term, there is a risk of inflammation and stone formation in the urinary organs as well as kidney damage. Too little fluid intake also means that toxins can only be flushed out of the body insufficiently; a possible consequence can be liver problems. An overview of what is unhealthy for chinchillas and should be avoided as/in food: "According to the official opinion of the ready-made food industry and most veterinarians, the needs of our pets are limited to certain percentages of protein, fats, crude fiber and so-and-so many "international units" of artificial vitamins and minerals. Mixed together chemically in the laboratory, this results in an artificial industrial product. And so that our poor four-legged friends eat this dead mush, flavor enhancers are added, and it is sealed with preservatives so that the whole thing does not spoil. For each age, for each breed, for each disposition there are special variants - however, these differ only minimally in their composition; in principle, all are the same." Source: Dr. med. vet. Jutta Ziegler, veterinarian and book author
<urn:uuid:b6680229-6cc7-41a8-8e1b-463e35cecd7c>
CC-MAIN-2021-43
https://www.chinchilla-scientia.com/english-corner/nutrition/instant-food-for-chinchillas-pellets-extrudates-compound-food/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00390.warc.gz
en
0.961207
2,257
2.671875
3
A studded metal dog collar illustrates a dramatic story of a dog saving a man during a flash flood in Melbourne in the 1880s. Nelson the Newfoundland helped rescue Thomas Brown, a cab driver who was swept away by flood waters in Swanston Street on the night of 15 November 1881. When the 130-year-old collar was acquired by the National Museum in late 2011, little was known about Bill Higginbotham and his dog, Nelson, before or after the rescue. Bill’s family contacted the Museum and helped to fill in some of the blanks, in response to a call for more information. Flooding from the River Yarra Melbourne's location close to the River Yarra meant that it was prone to flooding. The Illustrated Australian News reported that when it rained the city streets became 'miniature rivers, down which the water rushes at an extraordinary rate, and pedestrians find it exceedingly difficult to cross'. At 8pm on 15 November 1881 a thunderstorm broke over central Melbourne. Heavy rain fell for nearly an hour, and the city was soon flooded. The south ends of Elizabeth Street, and Swanston Street, which runs parallel to it, were worst affected. The Argus reported that 'in both these streets the water was flowing in one broad stream extending from shop door on one side to shop door on the other'. At the intersection of Swanston and Lonsdale streets, Thomas Brown, the driver of a horse-drawn cab, was pulled into the torrent. Fifty years after the event, in 1931, the Melbourne Herald had this account of Brown being swept away: Close to the gutter, which was a torrent five feet deep, seething to a culvert 50 yards down the hill, a cabman was trying to keep his horse still while waiting for his passengers. At length he clambered down to quieten the beast, and at that moment it tossed its head and knocked him insensible into the gutter. In a moment he was being swept down toward the culvert. Brown's cries were heard by Bill Higginbotham and his dog, Nelson. 
Luckily for Brown, Nelson was a Newfoundland. Bred by fishermen in Canada, Newfoundland dogs have a strong instinct for water rescue and retrieval. With their large, powerful bodies, water-resistant coat and webbed feet, they were often employed to save people and cargo from shipwrecks. Attempts to save drowning man The Argus article reports that Nelson the dog jumped into the stream and caught hold of Brown's clothing, but it gave way. Brown was swept quickly down the block. Near the corner of Little Bourke Street, the dog seized him again, but could not keep his grip. Just as Nelson let go, Mr Higginbotham, who was clinging to a post and leaning out into the water, grabbed hold of Brown. The power of the racing stream was so great that he too lost his grip on the cabman, who disappeared into a covered channel that ran under Little Bourke Street. It seemed almost impossible that Brown could be saved — but undaunted, Nelson managed to catch him as he emerged from the channel on the lower side of the street. Again though, the current was too strong, and Brown was wrenched from Nelson's jaws and pulled at great speed towards the channel that ran under Bourke Street. A last desperate attempt was made to save Thomas. Mr Higginbotham, a Mr Mates and Nelson plunged into the stream and managed to haul Brown out of the water. When they got him safely to the side of the street, they found his clothes had been torn to shreds, probably as a result of Nelson's indefatigable efforts to catch hold of him. The incident was widely reported in the colonial press, and the Illustrated Australian News praised Nelson's 'courage and sagacity', suggesting he would be a 'worthy candidate for the Humane Society's medal'. Higginbotham family history William John Higginbotham, known as Bill, was born in London on 5 August 1832. 
A family history provided by great grandson Russ Higginbotham says that Bill's father was a hatter who worked in the London theatre district as a wigmaker and hairdresser. In 1853 Bill married Mary Ann Jones and they had two children, William John Junior and Hannah. In 1857 Bill's uncle, Tom Higginbotham returned to London from Melbourne. Tom had lived in Melbourne since 1839 and established a successful painting and glazing business. It is thought Tom might have told a great story of the new colony, and over the next 10 years all his siblings, nieces and nephews moved to Australia. Bill and Mary arrived on the Suffolk in May 1858 with their two children. Six sons — Thomas, John, Arthur, James, Charles and Walter — and three daughters — Charlotte, Mary and Sarah — were born in Victoria. Bill operated salons in Bourke St, Swanston St and for a few years in Bendigo. Most of the sons followed Bill into the hairdressing trade, mainly in the city and near suburbs. Charles and Walter Higginbotham operated a hairdressing salon in Chapel Street, South Yarra. The young Higginbotham family lived in Fitzroy and Collingwood for many years and may have lived above the Swanston Street shop (now 244 Swanston St). Bill's prime trade was in wig making and his salon was well placed to service the theatre industry. A number of advertisements for his business appeared in the Melbourne Argus in the 1870s: Hairdresser wanted: Must be steady. Apply Higginbotham, Wig maker, 237 Bourke St East Mayors fancy ball — A grand assortment of Ladies and Gentlemen's Court and Ringlet wigs on hire. Higginbotham Theatrical Wig Maker, 122 Swanston St. It is believed Bill also made a collection of ornate head pieces for the Indigenous display at the Melbourne Museum, and Russ recalls seeing them as a child. Bill died in 1912 at his daughter Charlotte's home. He is buried in the Melbourne General Cemetery with his wife Mary. 
Nelson and his dog collar

The Museum doesn't know what became of Thomas Brown and, until contacted by Russ Higginbotham, knew little about Bill or Nelson beyond the dog's collar and information from newspaper accounts of the rescue. The Higginbotham family history states that Nelson was born in Bendigo on the same day as Walter Higginbotham, 25 May 1874. Nelson was well known to many inner city Melburnians as he used to sit outside Bill Higginbotham's shop. Made from copper with brass studs, the 16.5cm diameter collar is engraved 'Dog Nelson, W Higginbotham, 122 Swanston St'. The Museum's Conservation team closely analysed the copper collar and found that it was once nickel plated and shone like silver. It was originally thought Nelson may have been wearing the collar when he made his daring rescue, but in 1931 Bill's hairdresser son, Charles, told the Melbourne Herald it was presented to the dog later:

Nelson never fully recovered from the effects of the choking struggle in the culvert, although he was able to take part in the annual procession of the Albion Fire Brigade six months later, and half the city turned out to see him presented with a silver collar for his part in the rescue.

Russ suspects the collar may actually have been presented by the Union Fire Brigade in Collingwood, where Bill was a volunteer. The brigade met at the Albion Hotel, which might explain the wording of the newspaper report. For many years the collar remained in the Higginbotham family, with Russ' aunt Nina until about 1960. It was offered for auction in 2008 as part of the collection of antique dealer Richard Berry. The collar was auctioned again in 2011 when the Museum was the successful bidder. Dog collars have a long history, with metal or leather being the most popular materials. Metal collars allowed the owner to engrave their name and address, enabling the dog to be returned if it wandered or was stolen.
They probably also offered some protection for the dog if it was in a fight — although perhaps Nelson's canine neighbours would have thought twice before taking on an 80kg Newfoundland. Why a Melbourne hairdresser and tobacconist had a Newfoundland dog is a mystery. Russ Higginbotham wrote:

The family was certainly not seafaring and it was a large dog to have in a relatively confined suburban space. The photo, from circa 1880, is probably before the flood so he obviously thought highly enough of Nelson to have a studio picture taken. (We don't have any photos of the family!)

The first Newfoundland came to Australia with the First Fleet in 1788. In 1900 another Newfoundland named Nelson was the mascot of the first contingent that left South Australia to fight in the Boer War. Newfoundland dogs were exhibited at the Melbourne show from the 1860s. Perhaps Higginbotham had simply seen a Newfoundland dog and been drawn to the gentle, loyal breed.

City on a floodplain

Thomas Brown was lucky. According to the 1931 Herald report, his 'beard was stained from a gash in his temple but he was not seriously hurt. He was on duty again within a couple of days'. But many Melburnians were killed by floodwaters in the 19th century. Victoria's capital was known as 'marvellous Melbourne' in the 1880s. Flush with wealth from the gold rush, it had grown from a small riverside village to an imposing city in less than 50 years. Still, the geographic advantages that had made the site so attractive for settlers could surprise its inhabitants. Melbourne's founders were drawn to a spot on the banks of the River Yarra where a deep, steady flow of fresh water ran into Port Phillip Bay. In 1837 planner Robert Hoddle laid out a grid of wide streets on a gently sloping valley running down to the river. One of the main thoroughfares, Elizabeth Street, followed the course of a gully. Melburnians soon had cause to regret building their town on the Yarra's floodplain.
From 1839 the river inundated central Melbourne about every 10 years, as water from upstream burst its banks and met surges channelled along its streets and drains — many of which followed the valley's natural watercourse down to the Yarra estuary. The constant threat of inundation in Melbourne's central business district led, from the 1860s onwards, to a sequence of works on the Yarra. The wide slow river of today was produced by straightening, widening and deepening its course, removing billabongs and cutting canals. The large, open street drains of 1881, into which people poured their waste and sewage, earned the city a new nickname, 'Smellbourne'. These street drains were replaced by a huge network of underground drains maintained by the City of Melbourne. Hidden beneath skyscrapers and footpaths, stormwater from Melbourne's streets still travels on its ancient course to the river and the sea.
In the previous blog post (titled "On the Coattails of Quantum Supremacy") we started with Google and ended up with molecules! I also mentioned a recent paper by John Preskill, Jake Covey, and myself (see also this videoed talk) where we assume that, somewhere in the (near?) future, experimentalists will be able to construct quantum superpositions of several orientations of molecules or other rigid bodies. Next, I'd like to cover a few more details on how to construct error-correcting codes for anything from classical bits in your phone to those future quantum computers, molecular or otherwise.

Classical error correction: the basics

Error correction is concerned with the design of an encoding that allows for protection against noise. Let's say we want to protect one classical bit, which is in either "0" or "1". If the bit is, say, in "0", and the environment (say, the strong magnetic field from a magnet you forgot was lying next to your hard drive) flipped it to "1" without our knowledge, an error would result (e.g., making your phone think you swiped right!) Now let's encode our single logical bit into three physical bits, whose possible states are represented by the eight corners of the cube below. Let's encode the logical bit as "0" —> 000 and "1" —> 111, corresponding to the corners of the cube marked by the black and white ball, respectively. For our (local) noise model, we assume that flips of only one of the three physical bits are more likely to occur than flips of two or three at the same time. Error correction is, like many Hollywood movies, an origin story. If, say, the first bit flips in our above code, the 000 state is mapped to 100, and 111 is mapped to 011. Since we have assumed that the most likely error is a flip of one of the bits, we know upon observing that 100 must have come from the clean 000, and 011 from 111.
Thus, in either case of the logical bit being "0" or "1", we can recover the information by simply observing which state the majority of the bits are in. The same thing happens when the second or third bit flips. In all three cases, the logical "0" state is mapped to one of its three neighboring points (above, in blue) while the logical "1" is mapped to its own three points, which, crucially, are distinct from the neighbors of "0". The set of points that are closer to 000 than to 111 is called a Voronoi tile. Now, let's adapt these ideas to molecules. Consider the rotational states of a dumb-bell molecule consisting of two different atoms. (Let's assume that we have frozen this molecule to the point that the vibration of the inter-atomic bond is limited, essentially creating a fixed distance between the two atoms.) This molecule can orient itself in any direction, and each such orientation can be represented as a point on the surface of a sphere. Now let us encode a classical bit using the north and south poles of this sphere (represented in the picture below as a black and a white ball, respectively). The north pole of the sphere corresponds to the molecule being parallel to the z-axis, while the south pole corresponds to the molecule being anti-parallel. This time, the noise consists of small shifts in the molecule's orientation. Clearly, if such shifts are small, the molecule just wiggles a bit around the z-axis. Such wiggles still allow us to infer that the molecule is (mostly) parallel or anti-parallel to the axis, as long as they do not rotate the molecule all the way past the equator. Upon such correctable rotations, the logical "0" state — the north pole — is mapped to a point in the northern hemisphere, while logical "1" — the south pole — is mapped to a point in the southern hemisphere.
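The majority-vote recovery for the three-bit code described above can be sketched in a few lines of Python (a minimal illustration, not from the original post):

```python
def encode(bit):
    """Encode one logical bit as three identical physical bits: 0 -> 000, 1 -> 111."""
    return [bit, bit, bit]

def decode(bits):
    """Recover the logical bit by majority vote; any single flip is corrected."""
    return int(sum(bits) >= 2)

# A single bit flip (the most likely error in our noise model) is corrected:
noisy = encode(0)
noisy[0] ^= 1            # environment flips the first bit: [1, 0, 0]
assert decode(noisy) == 0

noisy = encode(1)
noisy[2] ^= 1            # [1, 1, 0]
assert decode(noisy) == 1
```

Two simultaneous flips, which we assumed to be unlikely, would carry the word into the other Voronoi tile and fool the majority vote.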
The northern hemisphere forms a Voronoi tile of the logical "0" state (blue in the picture), which, along with the corresponding tile of the logical "1" state (the southern hemisphere), tiles the entire sphere.

Quantum error correction

To upgrade these ideas to the quantum realm, recall that this time we have to protect superpositions. This means that, in addition to shifting our quantum logical state to other states as before, noise can also affect the terms in the superposition itself. Namely, if, say, the superposition is equal — with an amplitude of 1/√2 in "0" and 1/√2 in "1" — noise can change the relative sign of the superposition and map one of the amplitudes to −1/√2. We didn't have to worry about such sign errors before, because our classical information would always be the definite state of "0" or "1". Now, there are two effects of noise to worry about, so our task has become twice as hard! Not to worry though. In order to protect against both sources of noise, all we need to do is effectively stagger the above constructions. Now we will need to design a logical "0" state which is itself a superposition of different points, with each point separated from all of the points that are superimposed to make the logical "1" state.

Diatomic molecules: For the diatomic molecule example, consider superpositions of all four corners of two antipodal tetrahedra for the two respective logical states. Each orientation (black or white point) present in our logical states rotates under fluctuations in the position of the molecule. However, the entire set of orientations for say logical "0" — the tetrahedron — rotates rigidly under such rotations. Therefore, the region from which we can successfully recover after rotations is fully determined by the Voronoi tile of any one of the corners of the tetrahedron. (Above, we plot the tile for the point at the north pole.) This cell is clearly smaller than the one for classical north-south-pole encoding we used before.
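A nearest-code-point decoder for the tetrahedral encoding above can be sketched as follows. This is a hypothetical illustration: the vertex coordinates are one standard choice of a regular tetrahedron inscribed in the unit sphere, and "nearest" is decided by the largest dot product with the noisy orientation.

```python
import math

# Vertices of a regular tetrahedron on the unit sphere (logical "0");
# the antipodal tetrahedron encodes logical "1".
S = 1 / math.sqrt(3)
TETRA_0 = [(S, S, S), (S, -S, -S), (-S, S, -S), (-S, -S, S)]
TETRA_1 = [tuple(-c for c in v) for v in TETRA_0]

def decode(direction):
    """Return the logical bit whose nearest code point (max dot product) wins."""
    def best(points):
        return max(sum(d * p for d, p in zip(direction, v)) for v in points)
    return 0 if best(TETRA_0) >= best(TETRA_1) else 1

# A small wiggle of a logical-"0" orientation stays in that codeword's
# Voronoi tile and is corrected:
assert decode((0.9 * S, 1.1 * S, S)) == 0
assert decode((-0.9 * S, -1.1 * S, -S)) == 1
```

Because the whole tetrahedron rotates rigidly under noise, checking one vertex's Voronoi tile, as the post notes, is enough to characterise the correctable rotations.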
However, the tetrahedral code now provides some protection against phase errors — the other type of noise that we need to worry about if we are to protect quantum information. This is an example of the trade-off we must make in order to protect against both types of noise; a licensed quantum mechanic has to live with such trade-offs every day.

Oscillators: Another example of a quantum encoding is the GKP encoding in the phase space of the harmonic oscillator. Here, we have at our disposal the entire two-dimensional plane indexing different values of position and momentum. In this case, we can use a checkerboard approach, superimposing all points at the centers of the black squares for the logical "0" state, and similarly all points at the centers of the white squares for the logical "1". The region depicting correctable momentum and position shifts is then the Voronoi cell of the point at the origin: if a shift takes our central black point to somewhere inside the blue square, we know (most likely) where that point came from! In solid state circles, the blue square is none other than the primitive or unit cell of the lattice consisting of points making up both of the logical states.

Asymmetric molecules (a.k.a. rigid rotors): Now let's briefly return to molecules. Above, we considered diatomic molecules that had a symmetry axis, i.e., that were left unchanged under rotations about the axis that connects the two atoms. There are of course more general molecules out there, including ones that are completely asymmetric under any possible (proper) 3D rotation (see figure below for an example). All of the orientations of the asymmetric molecule, and more generally a rigid body, can no longer be parameterized by the sphere. They can instead be parameterized by the 3D rotation group SO(3): each orientation of an asymmetric molecule is labeled by the 3D rotation necessary to obtain said orientation from a reference state.
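Returning briefly to the oscillator example: the checkerboard decoding of position-momentum shifts can be sketched as below. This is a conceptual toy version only (actual GKP decoding measures modular quadratures of a quantum state; here the lattice spacing A is an arbitrary illustrative unit).

```python
A = 1.0  # lattice spacing in phase space (illustrative units)

def decode(q, p):
    """Round a noisy phase-space point to the nearest code-lattice point.

    Code points sit at (m*A, n*A); "black" squares ((m+n) even) encode
    logical "0" and "white" squares ((m+n) odd) encode logical "1".
    """
    m, n = round(q / A), round(p / A)
    return (m + n) % 2

# Shifts smaller than half the spacing stay inside the Voronoi (unit) cell
# of the original lattice point and are decoded correctly:
assert decode(0.3, -0.2) == 0   # came from (0, 0): logical "0"
assert decode(1.1, 0.2) == 1    # came from (A, 0): logical "1"
```

Rounding to the nearest lattice point is exactly the "which Voronoi cell did we land in?" question from the text, phrased in coordinates.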
Such rotations, and in turn the orientations themselves, are parameterized by an axis (around which to rotate) and an angle (by which one rotates). The rotation group luckily can still be viewed by humans on a sheet of paper. Namely, SO(3) can be thought of as a ball of radius π with opposite points identified. The direction of each vector lying inside the ball corresponds to the axis of rotation, while the length corresponds to the angle. This may take some time to digest, but it's not crucial to the story. So far we've looked at codes defined on cubes of bits, spheres, and phase-space lattices. Turns out that even SO(3) can house similar encodings! In other words, SO(3) can also be cut up into different Voronoi tiles, which in turn can be staggered to create logical "0" and "1" states consisting of different molecular orientations. There are many ways to pick such states, corresponding to various subgroups of SO(3). Below, we sketch two sets of black/white points, along with the Voronoi tile corresponding to the rotations that are corrected by each encoding.

Achieving supremacy was a big first step towards making quantum computing a practical and universal tool. However, the largest obstacles still await, namely handling superposition-poisoning noise coming from the ever-curious environment. As quantum technologies advance, other possible routes for error correction are by encoding qubits in harmonic oscillators and molecules, alongside the "traditional" approach of using arrays of physical qubits. Oscillator and molecular qubits possess their own mechanisms for error correction, and could prove useful (granted that the large high-energy space required for the procedures to work can be accessed and controlled). Even though molecular qubits are not yet mature enough to be used in quantum computers, we have at least outlined a blueprint for how some of the required pieces can be built.
We are by no means done, however: besides an engineering barrier, we need to further develop how to run robust computations on these exotic spaces.

Author's note: I'd like to acknowledge Jose Gonzalez for helping me immensely with the writing of this post, as well as for drawing the comic panels in the previous post. The figures above were made possible by Mathematica 12.
Age Discrimination in the Workplace

Ageism, age diversity and age discrimination legislation are now significant aspects of employment, retirement, and life beyond work.

- Age diversity offers positive advantages for healthy organisations, just like any other sort of diversity in work and life.
- Treating people fairly, regardless of age, is central to the principles of ethical business and ethical organisations.

Ageism and related issues are especially relevant in the UK given the 2010 Equality Act, which extended and superseded the Employment Equality (Age) Regulations of 2006. This aspect of age equality at work is consistent with legislation across Europe. Understanding these issues will also be helpful for you as an individual, to understand your rights (for example relating to the behaviour of an employer, or pensions or retirement) and your personal responsibilities.

- Responding to discrimination legislation is not difficult for good organisations; the UK regulations do not challenge any employer-organisation already treating its people fairly and ethically.
- As such these principles provide a helpful model for adopting age equality provisions for any organisation anywhere in the world.

As a worker or employee or manager, etc., you are also affected individually by age discrimination regulations. In addition to giving people protection, the UK discrimination legislation also places certain responsibilities on individuals:

- The regulations allow for individuals to be held responsible for certain types of discriminatory behaviour against others (and to be pursued for compensation), aside from the responsibility of the employer or organisation.

UK Age Discrimination Regulations

Here's a brief practical summary of the UK age discrimination regulations and their implications, initially effective in 2006, later updated and superseded by the Equality Act of 2010.
- The regulations protect employees and other workers (partners, agency staff, etc.) from discrimination, harassment and any other unfair treatment (for example relating to recruitment, training, pay, promotion, retirement and pensions) on the basis of age.
- Age means any age - not just older people - this includes young people.

People protected by these regulations include:

- Current employees and workers
- Job applicants
- Vocational trainees
- Vocational training applicants
- Under certain circumstances, people for whom the working relationship has ended (e.g., in giving references).

The regulations apply to:

- Employers of all types
- Private and public sector vocational training providers
- Trade unions
- Employer organisations
- Trustees and managers of occupational pension schemes
- Employees and workers themselves (for example extending to liability to pay compensation in cases of harassment against someone).

The regulations make it unlawful on the grounds of age (unless it can be 'objectively justified' - see point 7 below), specifically to:

- Discriminate directly against anyone (workers and employees as defined above)
- Discriminate indirectly against anyone ('indirectly' covers a very wide range of possibilities, including unintentional ones, such as processes or policies which disadvantage a person because of their age)
- Harass or bully anyone, or expose them to harassment or bullying by others (harassment as perceived and experienced by the victim; the perpetrator's views and intentions are not the issue)
- Victimise anyone complaining or giving evidence, or intending action in relation to an age discrimination complaint.
The implications of the legislation particularly affect and extend to:

- Recruitment, interviewing and selection
- Pay and benefits
- Performance appraisals
- Work-related social activities
- The general conduct of everyone in work, and their awareness of their responsibilities within the regulations
- Therefore all documentation, systems and processes used in the above.

- The regulations are not designed to force unreasonable or unsafe changes on people and organisations, and so the rules provide for 'objective justification' to be used where any age discrimination can be proved to be proportionate (appropriate) and legitimate (truly necessary) for the purpose or aim of the organisation.
- In such cases, the onus is on the organisation to provide evidence of the 'objective justification', which is capable of withstanding scrutiny at a tribunal. Simply 'saving money' is not generally a legitimate reason for exceptions to the rules.
- For discrimination to be lawful the organisation must be able to demonstrate that its actions have been based on proportionate and legitimate reasoning - rather than an arbitrary, unthinking or unfair treatment of a person because of the person's age. Beyond this guideline, absolute interpretation of 'objective justification' is difficult to express in just a few sentences.
- Where issues entail such judgements you should seek expert qualified advice. ACAS (the Advisory, Conciliation and Arbitration Service) is generally a good place to start.

Examples of Specific Implications of Age Discrimination and Equality Regulations

- Unfair dismissal and redundancy rights are not subject to an upper age limit. All workers regardless of age must be given the same rights and benefits in these matters.
- Employers must give employees at least six months' notice of their retirement date.
- Employees can request working beyond compulsory retirement, which employers will have a 'duty to consider'.
There is no compulsion on employers to extend employment beyond the statutory retirement date (based on 65 years of age), provided a lawful retirement procedure has been properly followed. Employees have the right to appeal a refusal by the employer to extend employment beyond an intended retirement date.

- Age discrimination law relating to retirement involves some complexity but basically provides for people to be treated fairly and according to proper legitimate policies, rather than the situation which existed before (UK 2006 legislation) whereby a person older than retirement age effectively had no usual rights to a fair dismissal. Now people do, regardless of how old they are.
- Recruiting or rejecting anyone for a job or vocational training on the basis of age is unlawful. This obviously has implications for advertising, job application forms, short-listing, interviewing, selection, training of interviewers, documentation and record-keeping. Implications also extend to the way you brief and manage agencies which provide any of these recruitment services to you.
- Failure or refusal to provide training, advancement, or an opportunity to anyone on the basis of age - any age - is unlawful.
- Asking for details of age on application forms and appraisal forms is not in itself unlawful, but doing so obviously increases the potential for age discrimination to take place.
- Official advice (ACAS, etc.) is to remove age and date-of-birth questions and sections from documentation used in recruitment, interviewing and assessing people, and instead use a separate 'diversity monitoring' form to gather and record age-related information, which is retained by your HR department as part of overall equality and diversity monitoring processes.
- Job applicants who believe they have been rejected because of their age can make an age discrimination claim to an employment tribunal - it is not necessary to have been employed by the organisation to make a claim against an employer.
The same principle applies in the case of applicant rejection by a vocational training provider.

- When a working relationship has finished, employers and staff of the employer are still liable under the age discrimination legislation for any behaviour that could be deemed discrimination, harassment or victimisation against the departed worker.
- For example, giving a reference which includes any comment that mentions the person's age (directly or indirectly) in an unhelpful way - any age - is unlawful. Such action would be unlawful even though the person is no longer an employee.
- Employers have a duty to train all staff in the age discrimination legislation and its implications, both from the perspective of people's rights, and especially their responsibilities.
- Partners in businesses (for example legal firms and consultancies formed as partnerships rather than as limited companies) are covered by the regulations.
- Employers should use 'age monitoring' (which is also the alternative to visible age and date-of-birth details on recruitment and assessment documents).
- Age monitoring is the statistical analysis of workers' ages (by the HR department or equivalent), in terms of recruitment, promotion, training, discipline, leavers, etc. This is most easily done in age bands, for example: 16-21 / 22-30 / 31-40 / 41-50 / 51-60 / 61-65 / 65+, although you can use any banding system that suits you.
- Age monitoring helps to identify potential problems and to highlight and provide evidence for 'objective justification' (exceptions) where discrimination can be justified. It is sensible to incorporate age monitoring within your existing equality and diversity monitoring system, assuming you have one. If not, now might be a good time to introduce one.
- Employers should have clear transparent policies stating how unlawful discrimination is avoided in the main areas of people management and development, notably recruitment, training, promotions, discipline and retirement.
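A minimal sketch of the age-banding tally used in age monitoring, in Python. The band labels are illustrative only, since, as noted above, any banding system that suits the organisation can be used.

```python
# Illustrative monitoring bands (any banding system can be substituted).
BANDS = ["16-21", "22-30", "31-40", "41-50", "51-60", "61-65", "65+"]
EDGES = [(16, 21), (22, 30), (31, 40), (41, 50), (51, 60), (61, 65)]

def band(age):
    """Place an age into its monitoring band; ages over 65 fall in '65+'."""
    for label, (low, high) in zip(BANDS, EDGES):
        if low <= age <= high:
            return label
    return "65+"

def monitor(ages):
    """Count workers per band, e.g. among recruits, promotions or leavers."""
    counts = {b: 0 for b in BANDS}
    for a in ages:
        counts[band(a)] += 1
    return counts

tally = monitor([19, 25, 25, 44, 58, 63, 67])
assert tally["22-30"] == 2 and tally["65+"] == 1
```

Running the same tally separately over recruits, promotions and leavers, and comparing the distributions, is what surfaces the potential problems the text mentions.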
Is Age Discrimination Legislation enough?

Note: age discrimination legislation does not make it impossible to treat people differently because of age-related issues at work; instead, the law makes it unlawful to treat people unfairly, purely and arbitrarily, because of their age.

- This is perhaps best illustrated by the example of an employer which (pre-legislation) could lawfully refuse to hire someone purely on the basis of age, or compulsorily retire someone purely on the basis of age; these actions are no longer likely to be lawful.
- Refusal to hire, and compulsory retirement, must now (post-legislation) be based on reasons of (legitimate, proportionate and so legal) organisational policy and/or the candidate's or employee's competence/capability.

These principles of equality are consistent with running an ethical organisation. An ethical organisation (amongst other things) does not discriminate directly or indirectly against anyone on grounds of age, gender, race, religion, disability, etc. As with any matters of employment law, it's important to understand the details and to seek appropriately qualified advice to help you interpret the issues for your own situation. ACAS is especially well-positioned and able to provide this (UK) support.

Benefits of Diversity

Age diversity represents the range and mixture of ages in workforces and organisations, and the challenges and opportunities that employers face in managing it.

- Having an equality philosophy and policy in place - and fully understood by all staff - is consistent with ethical business, and good modern ethical organisations.
- Equality means treating all people fairly: valuing everyone for their strengths, capabilities, experience and potential.
- When an organisation values its people in this way, people respond positively, with loyalty, commitment and enthusiasm.

Some organisations regard age diversity, and other aspects of equality, as mostly a difficulty.
Where diversity is not embraced by an organisation's leadership, this tends to mean that people are not treated fairly and equally, and then quite understandably they lose faith in the employer.

- Poor employers then blame their people for a lack of motivation, but actually the fault lies with the organisation and its leadership, not the staff.
- Good organisations regard diversity and equality as huge opportunities to improve and develop organisational quality and performance.
- Treating people fairly, and valuing everyone, promotes cohesion, unity and loyalty in a workforce.

For an additional and useful perspective on age, see Erik Erikson's Psychosocial Theory.
A Review on Readiness and Implementation of e-Learning among Academic Institutions of Higher Education

Lina Lafta Jassim
College of Art at the University of ThiQar

As a result of the tremendous development in the use of the Internet and information technology, the world has become a global village, and accessing information nowadays is possible for almost everyone, regardless of where he/she is. Moreover, information technology has a dramatic impact on societies (Shoniregun & Gray, 2003). With the ubiquitous services offered by the World Wide Web (WWW) and the fast development of information tools and telecommunications technologies, there is a strong tendency to use information technology (IT) in education sectors (Woodfine & Nunes, 2006). After the emergence of internet services, many educational centers around the world have attempted to make use of these tools for educational purposes. Because of the rapid increase in the use of modern technology, the internet has become a key element in many universities because of its importance for administrative staff, academic staff and students (Lorens & Salanova, 2002). The internet has indeed become one of the most important instructional tools and the most effective means of communication in colleges and universities (Noor & Agboola, 2005). After the upsurge of the internet in the mid 1990s, Watkins and Leigh (2003) pointed out that millions of college students at universities throughout the world took at least one online course. Additionally, more than half a million of those students were completing their degrees entirely online. So, it is evident that e-Learning can be considered a very effective learning system (Sun & Cheng, 2007), and it can be exploited and enhanced by the development of technology. It can be applied everywhere, and at any time. By applying e-Learning, there is a possibility of producing new, competent generations (Forcier, 1999).
On the other hand, Wang and Chen's study (2006) revealed that there is still a shortage of effective use of educational technology in the educational process. In the same line, they argue that teachers of certain subjects, such as History and Geography, hardly use information technology in their teaching. According to Resta (2006), e-Learning plays an increasingly important role in developing the economic and educational growth of industrialized nations, and it can play a significant role in preparing a new generation of teachers in higher educational establishments. By accepting and adapting to the new changes in the learning environment, many educational institutions started using the internet to provide access for their students to register, buy books, attend lectures and participate in discussions. This is what can be called the activation of technology in education (Lorens & Salanova, 2002). In this respect, Shoniregun and Gray (2003) argue that: "In today's rapidly changing electronic world (e-world) the key to maintaining the appropriate impetus and momentum in organizations and academic environments is knowledge. Therefore, continuous, convenient and economical access to training and qualification assumes the highest priority for the ambitious individual or organization. This requirement is met by electronic learning (e-Learning). E-Learning is one of the fastest growing areas of the high technology sector" (p. 43). Furthermore, Baptista-Nunes and Mcpherson (2002, as cited in Rosenberg, 2001) state: "The biggest growth in the internet, and the area that will prove to be one of the biggest agents of change, will be in e-Learning. e-Learning has provided more opportunities for sharing information and interaction among individuals and groups" (p. 9). E-Learning is a self-learning process that depends on students more than teachers in using modern technology (Goel & Kumar, 2004; Jochems & Merrieboer, 2004).
So, with the activation of technology, it is predictable that the role of lecturers will change in education as well. Baptista-Nunes and Mcpherson (2002) pointed out that, not long ago, students would sit in lecture halls and use pen and paper to note down what their professors were saying and writing on the board. With e-Learning the matter is very different, because this system is dependent on the internet, and this indicates that teachers' role in e-Learning is expected to be more flexible in the sense that they can now tutor from their offices or from their homes, on campus or off campus, so their teaching is expected to be less constrained (Keegan, 2002). In the same vein, Rasaratnam (2006) pointed out that educationists should think of new methods to face the evolving challenges of the new situation in a more efficient and expedient way. This system of e-Learning is conducted through educational software, called instructional software or courseware, designed and developed by a competent team to provide the student with the teaching required on a computer screen (Sadik, 2007; Haverila & Barkhi, 2009). So, it is expected that with the application of e-Learning, teaching methods are going to change. This implies not only changes in course models, but also in attitudes, in order to meet the new challenges posed by e-Learning in general and higher education (HE) in particular (Baptista-Nunes & Mcpherson, 2002). However, one of the challenges that face e-Learning designers is that there is no universally designed product. Akbaba-Altun (2006) explains this point and argues that universal design is a process which yields products (devices, environments, systems, and processes) that are usable by and useful to the widest possible range of people. Consequently, it is not possible to create a product which can be used by everyone or in all circumstances. Joris and Berg (2003) pointed out that there is still a gap between material design and model design.
They argued that the material still needs more preparation, time and effort to be designed properly in all forms. This is why the format must be provided as a model for introducing material that is easily accessible to all. Additionally, Jochems and Merrieboer (2004) pointed out that there is a significant gap resulting from the recent development in information technology, and that much preparation and training are needed to compensate for it. Preparation is the most important stage in the evaluation and design of appropriate models, but the appropriate time and a good design must be chosen, after a preliminary examination of the availability of the necessary infrastructure. A concerted effort is required from all participants in the teaching process, both members of the university staff and students; the responsibility is borne by the university, because a significant change in the methods of education will occur there (Vooi & Dahalin, 2004). In spite of the great expectations from applying and implementing e-Learning services, it is evident that there are many factors that can affect, either positively or negatively, the success of this new application (Ataizi, 2006). One of the factors the researcher wants to explore is staff readiness, which can affect positively or negatively the application of e-Learning. According to So and Keung (2005), staff readiness in using the technology will determine the success of e-Learning implementation. This study investigates e-Learning readiness and implementation in Jordanian universities, explores e-Learning implementation and provides detailed information on the use of e-Learning by university departments.

References

Alkhalifa, H. S. (2010). E-Learning and ICT integration in colleges and universities in Saudi Arabia. eLearn Magazine. Retrieved March 16, 2011, from:

Ataizi, M. (2006). Readiness for e-Learning: Academician's perspective.
In C. Bonk and M. Steven (Eds.), paper presented at Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2007, 2316-2321. Chesapeake, VA: AACE. Retrieved 1 January, 2007, from: www.Aof.Edu.Tr/Iodl2006/Proceedings/Book/Papers/Paper_61.Pdf

Baptista-Nunes, M., & Mcpherson, M. (2002). No lectures on-campus: Can e-Learning provide a better learning experience? In C. Dwyer (Ed.), paper presented at conferences in the field of educational technology and e-learning – ICALT, ICCE, E-Learn and AUA advanced learning technologies, IEEE Computer Society, 14-16 October, pp. 442-447. Kazan, Tatarstan.

Chen, H. (1998). Theory-driven evaluations. Advances in Educational Productivity, 3(7), 15-34.

Chen, Y. N., & Chen, W. (2006). E-government strategies in developed and developing countries: An implementation framework and case study. Journal of Global Information Management, 14(1), 23-46.

Goel, S. L., & Kumar, R. (2004). Administration and management of NCOS text and case studies. New Delhi: Deep Publications.

Haverila, M., & Barkhi, R. (2009). The influence of experience, ability and interest on e-Learning effectiveness. European Journal of Open, Distance and e-Learning, 2(3), 45-66.

Keegan, D. (2002). Definition of distance education, distance education: Teaching and learning in higher education. Issues in Accounting Education, 20(3), 255-272.

Lorens, S., & Salanova, M. (2002). Training to technological change. Journal of Research on Technology in Education, 35(2), 206-213.

Mobaideen, H. (2006). Assessing information and communication technology in Jordanian universities. In A. Smith (Ed.), paper presented at the European and Mediterranean Conference on Information Systems (EMCIS), July 6-7, Costa Blanca, Alicante, Spain.

Noor, N. A., & Agboola, A. K. (2005). Effective integration of e-Learning tools among lecturers in a tertiary institution: A perceptual survey. The Public Sector Innovation Journal, 11(3).

Rasaratnam, P. (2006).
Development and evaluation of a web-based course for computing and information technology. INTI Journal, 2(1), 571-581.

Resta, P. (2006). E-Learning for teacher development: Building capacity toward the information society. Learning Technology Centre, University of Texas, USA.

Shoniregun, C., & Gray, S. (2003). Is e-Learning really the future or a risk? Ubiquity Archive, 12(4), 43-55.

Sun, P. C., & Cheng, H. K. (2007). The design of instructional multimedia in e-Learning: A media richness theory-based approach. Computers & Education, 49(3), 662-676.

Vooi, W. M., & Dahalin, Z. B. (2004). Is our public university ready for e-learning? The case of University Utara Malaysia (UUM). In M. S. Hj. Din and B. A. Rahman (Eds.), paper presented at the International Conference on Management Education, Kuala Lumpur.

Watkins, T. (2005). Exploring e-Learning reforms for Michigan: The new education (r)evolution. A report on relevant, rigorous education for our revolutionized Michigan. Wayne State University. Retrieved 19 December, 2007, from: www.coe.wayne.edu/e-learningreport.pdf

Woodfine, B. P., & Nunes, M. B. (2006). Text-based synchronous e-Learning and dyslexia: Not necessarily the perfect match. University of Sheffield.
A visible minority (French: minorité visible) is defined by the Government of Canada as "persons, other than aboriginal peoples, who are non-Caucasian in race or non-white in colour". The term is used primarily as a demographic category by Statistics Canada, in connection with that country's Employment Equity policies. The qualifier "visible" was chosen by the Canadian authorities as a way to single out newer immigrant minorities from both Aboriginal Canadians and other "older" minorities distinguishable by language (French vs. English) and religion (Catholics vs. Protestants), which are "invisible" traits.

The term visible minority is sometimes used as a euphemism for "non-white". This is incorrect, in that the government definitions differ: Aboriginal people are not considered to be visible minorities, but are not necessarily white either. Also, some groups that are defined as "white" in other countries (such as Middle Eastern Americans) are defined as "visible minorities" in the official Canadian definition. In some cases, members of "visible minorities" may be visually indistinguishable from the majority population and/or may form a majority-minority population locally (as is the case in some parts of Vancouver, Toronto, and Montreal).

Since the reform of Canada's immigration laws in the 1960s, immigration has been primarily of peoples from areas other than Europe, many of whom are visible minorities within Canada. Legally, members of visible minorities are defined by the Canadian Employment Equity Act as "persons, other than Aboriginal people, who are non-Caucasian in race or non-white in colour". Over seven million Canadians identified as a member of a visible minority group in the 2016 Census, accounting for 22.3% of the total population.
This was an increase from the 2011 Census, when visible minorities accounted for 19.1% of the total population; from the 2006 Census, when visible minorities accounted for 16.2% of the total population; from 2001, when visible minorities accounted for 13.4% of the total population; from 1996, when the proportion was 11.2%; and from 1991 (9.4%) and 1981 (4.7%). In 1961, the visible minority population was less than 1%. The increase represents a significant shift in Canada's demographics related to increased immigration since the advent of its multiculturalism policies.

Based upon the annual immigration intake into Canada since the last census in 2006, accompanied by the steady increase in the visible minority population within Canada due to the higher fertility levels of minority women when compared to Canadian women of European origin, researchers estimated that by 2012, approximately 19.56% of the population in Canada would be individuals of non-European (visible minority) origin. The Aboriginal population within Canada, based upon projections for the same year (i.e. 2012), was estimated to be 4.24%. Hence, at least 23.8% of Canada's population in 2012 were individuals of visible minority and Aboriginal heritage. Projections also indicate that by 2031, the visible minority population in Canada will make up about 33% of the nation's population, given the steady increase in the non-European component of the Canadian population.

Of the provinces, British Columbia had the highest proportion of visible minorities, representing 30.3% of its population, followed by Ontario at 29.3%, Alberta at 23.5% and Manitoba at 17.5%. In the 2006 census, South Asian Canadians superseded ethnic Chinese as Canada's largest visible minority group. In 2006, Statistics Canada estimated that there were 1.3 million South Asian people in Canada, compared with 1.2 million Chinese.
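The combined-share figure in the 2012 projections above is simple addition of the two (non-overlapping) group shares. A minimal sketch of that check follows; the percentages are taken from the text, and the helper name `combined_share` is illustrative, not from any cited source:

```python
# Quick check of the combined population shares quoted in the text.
# 19.56% (visible minority projection) + 4.24% (Aboriginal projection);
# the groups are defined as non-overlapping, so shares simply add.
def combined_share(visible_minority_pct: float, aboriginal_pct: float) -> float:
    """Return the combined share of two non-overlapping population groups."""
    return round(visible_minority_pct + aboriginal_pct, 2)

print(combined_share(19.56, 4.24))  # 23.8, matching the "at least 23.8%" figure
```

The "at least" wording in the text reflects that these are lower-bound projections, not exact census counts.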
In 2016, there were approximately 1.9 million South Asian Canadians, representing 5.6% of the country's population, followed by Chinese Canadians (4.6%) and Black Canadians (3.5%).

List of Canadian census subdivisions with visible minority populations higher than the national average:
- Richmond (76.3%)
- Greater Vancouver A (67.3%)
- Burnaby (63.6%)
- Surrey (58.5%)
- Vancouver (51.6%)
- Coquitlam (50.2%)
- New Westminster (38.9%)
- West Vancouver (36.4%)
- Delta (36%)
- Abbotsford (33.7%)
- Port Coquitlam (32.4%)
- North Vancouver (city) (31.3%)
- Port Moody (30.4%)
- North Vancouver (district municipality) (25.6%)
- Winnipeg (28%)

Legislative versus operational definitions

According to the Employment Equity Act of 1995, the definition of visible minority is: "persons, other than aboriginal peoples, who are non-Caucasian in race or non-white in colour". This definition can be traced back to the 1984 Report of the Abella Commission on Equality in Employment. The Commission described the term visible minority as an "ambiguous categorization", but for practical purposes interpreted it to mean "visibly non-white". The Canadian government uses an operational definition by which it identifies the following groups as visible minorities: "Chinese, South Asian, Black, Filipino, Latin American, Southeast Asian, Arab, West Asian, Korean, Japanese, Visible minority, n.i.e. (n.i.e. means "not included elsewhere"), and Multiple visible minority". However, a few exceptions are applied to some groups. According to the Visible Minority Population and Population Group Reference Guide of the 2006 Census, the exception is: In contrast, in accordance with employment equity definitions, persons who reported 'Latin American' and 'White,' 'Arab' and 'White,' or 'West Asian' and 'White' have been excluded from the visible minority population.
Likewise, persons who reported 'Latin American,' 'Arab' or 'West Asian' and who provided a European write-in response such as 'French' have been excluded from the visible minority population as well. These persons are included in the 'Not a visible minority' category. However, persons who reported 'Latin American,' 'Arab' or 'West Asian' and a non-European write-in response are included in the visible minority population.

The term "non-white" is used in the wording of the Employment Equity Act and in employment equity questionnaires distributed to applicants and employees. This is intended as a shorthand phrase for those who are in the Aboriginal and/or visible minority groups.

The classification "visible minorities" has attracted controversy, both nationally and from abroad. The UN Committee on the Elimination of Racial Discrimination has stated that it has doubts regarding the use of this term, since the term may be considered objectionable by certain minorities, and it recommended an evaluation of the term. In response, the Canadian government made efforts to evaluate how the term is used in Canadian society through the commissioning of scholars and open workshops.

Another criticism stems from the semantic applicability of the classification. In some cases, members of "visible minorities" may be neither "visually" discernible from the majority population nor form a "minority", at least locally. For instance, many Latin Americans living in Canada self-identify as White Latin Americans and are visually indistinguishable from White Canadians. Moreover, some members of "visible minorities" may form a majority-minority population locally (as is the case in most parts of Vancouver and Toronto). Since 2008, census data and media reports have suggested that the "visible minorities" label no longer makes sense in some large Canadian cities, due to immigration trends in recent decades.
For example, "visible minorities" comprise the majority of the population in Toronto, Vancouver, Markham, Coquitlam, Richmond, Ajax, Burnaby, Greater Vancouver A, Mississauga, Surrey, Richmond Hill and Brampton. In the United States, such cities or districts are described as majority-minority. But the term "visible minority" is used for the administration of the Employment Equity Act, and refers to its statistical basis in Canada as a whole and not any particular region.

Yet another criticism of the label concerns the composition of "visible minorities". Critics have noted that the groups comprising "visible minorities" have little in common with each other, as they include both disadvantaged groups and groups who are not economically disadvantaged. The concept of visible minority has been cited in demography research as an example of a statistext, meaning a census category that has been contrived for a particular public policy purpose. Furthermore, it is not clear why the definition of a minority should center on the "visual": the concept of "audible minority" (e.g. those who speak with what appears to the majority to be "accented" English or French) has also been proposed, as speech often forms the basis for prejudice, along with appearance.

See also
- Affirmative action
- Classification of ethnicity in the United Kingdom
- Employment equity (Canada)
- Ethnic penalty
- List of visible minority politicians in Canada
- Majority minority
- Minority language
- Multiculturalism in Canada
- Race and ethnicity in censuses
- Race and ethnicity in the United States Census
- Racialism (Racial categorization)

References
- Canada, Government of Canada, Statistics. "Classification of visible minority". Archived from the original on September 26, 2015.
- Visible Minority Population and Population Group Reference Guide, 2006 Census, from Statistics Canada
- "Minorities to rise significantly by 2031", cbc.ca
- "Visible minorities to make up 1/3 of population by 2031", CTV, March 2010
- "One in 6 Canadians is a visible minority", CBC, 2 Apr 2008
- Canada, Government of Canada, Statistics. "Visible Minority (15), Generation Status (4), Age (12) and Sex (3) for the Population in Private Households of Canada, Provinces and Territories, Census Metropolitan Areas and Census Agglomerations, 2016 Census - 25% Sample Data". www12.statcan.gc.ca. Retrieved 2018-04-12.
- "Census Profile, 2016 Census". 12.statcan.gc.ca. 2011.
- Employment Equity Act (1995, c. 44), Act current to Oct 20th, 2010
- Woolley, Frances. "Visible Minorities: Distinctly Canadian". Worthwhile Canadian Initiative. Retrieved May 26, 2013.
- "Visible Minority Population and Population Group Reference Guide", 2006 Census, Statcan
- Visible Minority Population and Population Group Reference Guide, 2006 Census - Catalogue no. 97-562-GWE2006003, Statcan
- Mentzer, M. S. (January 2002). "The Canadian experience with employment equity legislation". International Journal of Value-Based Management. 15 (1): 35–50. doi:10.1023/A:1013021402597. ISSN 0895-8815. S2CID 141942497.
- "Report of the Committee on the Elimination of Racial Discrimination" (PDF). United Nations: Committee on the Elimination of Racial Discrimination. Retrieved 4 March 2017.
- Hamilton, Graeme (2008-04-03). "Visible minorities the new majority". National Post. Retrieved 2012-05-21.
- Mentzer, Marc S.; John L. Fizel (1992). "Affirmative action and ethnic inequality in Canada: The impact of the Employment Equity Act of 1986". Ethnic Groups. 9 (4): 203–217. ISSN 0308-6860.
- Hum, Derek; Wayne Simpson (2000). "Not all visible minorities face labour market discrimination". Policy Options/Options Politiques. 21 (10): 45–48. ISSN 0226-5893.
- Kobayashi, Audrey (1993). "Representing Ethnicity: Political Statistexts". Challenges of Measuring an Ethnic World: Science, Politics, and Reality. Washington, DC: Statistics Canada and U.S. Bureau of the Census, U.S. Government Printing Office. pp. 513–525. ISBN 0-16-042049-0.
- Bauder, Harald (2001). "Visible minorities and urban analysis". Canadian Journal of Urban Research. 10 (1): 69–90. ISSN 1188-3774.

External links
- Visible minority population and population group reference guide (2006 Census) from Statistics Canada
- Visible minority population, by census metropolitan areas (2006 Census) from Statistics Canada
Hand ligament injury surgery by Dr Mark Gittos Plastic Surgeon in Auckland NZ

The hand is made from multiple bones, muscles, tendons, and ligaments. Ligaments are thick bands of connective tissue that connect the small bones in your hands to each other. Hand and wrist sprains are traumatic injuries to ligaments that can lead to their partial or complete laceration. Leaving ligament injuries untreated may lead to problems in the joints of your hands. Several treatments are available for ligament injuries, including rest, immobilization with a splint, and surgery. Dr. Mark Gittos is a plastic and reconstructive surgeon who specializes in hand surgery and offers surgical and non-surgical treatment for different hand disorders, including ligament and tendon injuries.

What are ligaments?

Ligaments are thick bands of fibrous connective tissue that connect your bones to each other. They extend from one bone to another to form a joint. Their main function is to support and strengthen the joint, maintaining a specific range of motion. The fingers, hand, and wrist are composed of multiple bones that are connected to each other by ligaments. These ligaments are vulnerable to injury due to the extensive use of our hands in the activities of daily life.

What causes hand ligament injuries?

A "sprain" is the term used to describe common mild ligament injuries. Trauma, like falling from a height, is one cause of sprains. Traumatic lifting or twisting of a finger or your wrist can also lead to ligament injury. More severe trauma can lead to a more severe injury and possibly a complete tear of one or more ligaments, as is the case with some motor vehicle accidents. Ligament tears can subsequently cause dislocation of the affected joint, along with the corresponding symptoms.

What are the signs and symptoms of hand ligament injuries?

Pain and swelling are the most obvious symptoms that accompany hand ligament injuries.
This can be managed early on with painkillers, anti-inflammatory drugs, rest, and splinting. If an injury is severe enough, it might cause joint instability, where certain hand or finger movements will be painful and abnormal or incomplete. Stiffness and weakness can also occur. If the affected joint gets dislocated, it will lead to an obvious deformity and pain, and it might require urgent relocation.

How are ligament injuries diagnosed?

Dr. Mark will start by asking you a few questions about your symptoms, the mechanism of injury, your medical history, and your home medications. After that, he will examine your hands and assess the range of motion of your fingers and wrists to identify the affected ligament and evaluate the severity of the injury. An X-ray might be ordered to make sure there are no associated fractures causing your symptoms. If the injury is mild, Dr. Mark will usually not ask for any further testing and will recommend rest, ice packing, and painkillers to manage your symptoms. In most mild cases, this conservative treatment is enough, and your injury will heal in a few days. If the injury is more severe, Dr. Mark might order more advanced imaging tests to identify the injury and plan treatment. MRI and CT scanning of the hand are excellent at identifying ligament injuries and visualizing the anatomical details of the hand. One of these tests might be ordered, based on your specific case, to confirm the diagnosis.

How is hand ligament injury treated?

Mild wrist sprains can be treated conservatively; however, more severe injuries might require surgery.

1. Conservative treatment for ligament injury

- Anti-inflammatory drugs: Drugs such as ibuprofen and aspirin can be used to ease the pain caused by wrist sprains.
- Rest: You will be asked to avoid overusing your injured hand for a couple of days to allow it to heal properly.
- Ice packs: Ice packs will help reduce inflammation and decrease pain.
- Splints: Your doctor might recommend that you wear a splint for a few days to immobilize your hand joints and allow the injured ligament to heal.

2. Surgical treatment for ligament injury

If your injury is severe, Dr. Mark might recommend surgical treatment. There are several surgical techniques to treat partial or complete ligament tears, and many times a combination of them is used:

- Ligament repair/pinning: If the injury is discovered early on (within a few weeks), Dr. Mark might attempt to repair the ligament by inserting metallic pins. These pins stabilize the joint so that the ligaments can heal. The pins will be removed after complete healing has taken place. This technique is not usually effective if too much time has elapsed since the injury.
- Reconstruction: If more than 6 months have passed since the injury, Dr. Mark might attempt reconstruction using a tendon graft. In this technique, Dr. Mark will take a tissue graft from a nearby location and implant it at the site of injury to replace the torn ligament. Pins will be inserted to stabilize the joint and allow healing.
- Fusion: Fusion of the joint might be attempted if you already have arthritis (joint inflammation), to relieve joint pain and stabilize movement.
- Arthroscopy: In certain cases, arthroscopic surgery might be feasible. This means the use of a scope to see and treat the injury through small skin incisions.

What to expect after hand surgery?

Dr. Mark will likely recommend that you wear a splint for some time after surgery to immobilize your joint and allow the ligaments to heal. Some pain, stiffness, and limited range of motion can be expected after surgery. You might be prescribed physical therapy sessions to restore full hand function. For those living near his clinics in New Zealand, Dr.
Mark Gittos can offer a full evaluation for hand trauma and hand disorders as well as surgical and non-surgical treatments. Dr. Mark is a well-known plastic and reconstructive surgeon who has a lot of experience in hand surgery. Contact us to set up an appointment and get a full evaluation, or to learn more about your condition.

Complications and Risks of Hand Surgery

Hand surgery incurs risks and complications like all invasive surgery. Dr Gittos will make you aware of potential complications during your consultation. These include general anaesthesia risks, bleeding (hematoma), infection, wound healing problems, deep vein thrombosis, scarring and numbness. Always stay informed and healthy, do NOT smoke before or after your procedure, and read and understand your risks of surgery.

Further Reading – Medical Sources about Hand Ligament Injury & Sprains:
- NYU Langone Health on Diagnosing Hand Sprains & Strains
- Physio.co.uk article on Ligament Injuries in the Fingers
- British Society for Surgery of the Hand on Hand Injuries
- American Society for Surgery of the Hand on Sprained Wrist

How to find a hand surgeon in Auckland, NZ

Always choose a top specialist plastic surgeon or hand surgery expert for your hand surgery to ensure an excellent outcome. As a general rule, it is better to avoid the cheap option when seeking surgery. Look at your surgeon's online reviews to find out how they look after their patients and what their patients say about them.

Why Choose Dr Mark Gittos?

Dr. Mark Gittos in Auckland, New Zealand is a specialist plastic surgeon who is experienced in treating hand injuries and disorders, such as nerve compression disorders and ligament or tendon injuries, among others. If you'd like to learn more about your condition, call us to set an appointment with Dr. Mark to get a full assessment and discuss treatment options.
Make an Appointment for a Hand Consultation with Dr Gittos

If you have any symptoms that might be related to a hand tendon injury, please call to make an appointment with Dr. Mark Gittos in Auckland, New Zealand. Dr. Mark is a plastic surgeon who is experienced in treating a wide range of hand disorders, such as trigger finger, De Quervain's syndrome, cubital and radial tunnel syndromes, carpal tunnel syndrome, ligament disorders, and tumours or ganglions. Come visit us to get a full assessment of your condition and learn about your treatment options.

About Dr Mark Gittos FRACS (Plast) – New Zealand Plastic Surgeon

Practice locations in Herne Bay Auckland, Northland and Bay of Plenty – Kerikeri, Whangarei, New Plymouth & Tauranga

Dr Mark Gittos is a leading Specialist Plastic Surgeon and operates a practice in Herne Bay, Auckland and in the UK. The practice focuses on both surgical and non-surgical procedures, each designed to help restore, improve or change a physical characteristic or problem. The first step in every case is to talk through your personal requirements and explore all the options, before deciding on the most effective solution. Dr Mark Gittos offers high quality, natural-looking cosmetic surgery results and is highly experienced in Breast, Body and Face Surgery, having performed over 4000 surgeries in the last 26 years. With worldwide expertise, Dr Gittos is an expert in breast, face and body surgery for men and women. Naturally, before any treatment is begun, we will explain clearly the advantages and risk factors, so that you have the information you need to make an informed decision that is best for you. Visit the practice to find out more.
Do your Research
- Read the Website and Blogs relevant to your procedure
- Browse our Frequently Asked Questions including how to choose a Surgeon for your procedure
- Download and read the FREE Guides to Surgery

What to Bring to your Plastic Surgeon Consultation
- Bring a friend or relative to help discuss the information and your choices
- Take lots of notes and read the documents provided thoroughly
- Dress in simple clothes as you may need to undress for examination
- Bring your medical referral and any relevant medical documents or test results

Book your Initial Surgery Consultation
- A Referral from your GP or specialist is helpful but NOT essential – you can have a consultation without a GP Referral
- Email us or Call on 09 529 5352 to arrange your surgeon consultation appointment.
- Book a consultation with Dr Gittos by paying the Consultation Fee – $295+GST

Please contact us to arrange to book a consultation with our Specialist Plastic Surgeon or to speak with our Patient Care Advisor.
The Battle of Abiqua

In February 1848, three months after the Whitman Massacre, the settlers in the Willamette Valley were very tense, concerned that the tribes would gather together and attack. Many of the men had volunteered for a militia and were in eastern Oregon for the Cayuse War, so additional citizen militias were established in the valley. Ralph C. Geer was the captain of one company, while Don Waldo was the captain of another. The situation became much more tense when 80 Klamath Indians, friends of the Molallas, came into the Molalla area, to Dickie Prairie, and began harassing people and raising a ruckus. The settlers did not like the Klamaths, who were not from the Willamette Valley, but who traveled around on the trade trails, sometimes with Molalla Indians, and caused mischief. When the Klamaths were asked to leave, Chief Coosta (Coastno) defended the rights of the Klamaths to be in the valley, stating that they were his kin. At this point in Oregon history, the territory had not yet been purchased from the tribes, and any suggestion by American settlers that the tribes had no right to be there was incorrect. Tribes possess their own sovereignty, which predates the United States, and there had been no purchase or conquest of the lands of the Oregon Territory until the first treaties were ratified in 1853. The United States and Great Britain claimed the territory under a joint occupation agreement, both nations having set up outposts and towns, like Astoria and Fort Vancouver on the Columbia. However, considering all land transfer laws of both nations at the time, exploratory claims, outposts, forts, and even settlement did not mean full ownership or transfer of title in any way. The situation in Dickie Prairie continued for several days, with many of the settlers gathering at Richard "Dicky" Miller's homestead for protection, in preparation for driving the Klamaths out of the valley.
Word was sent by messenger to the militias organized at Don Waldo's place outside of Salem (Waldo Hills), and within a day militias came from Marion and Clackamas counties to help stem the "attack" by the Klamaths and drive them away. Sources suggest that there was another reason for the actions of the Klamaths. One suggestion was that Cayuse Indians had come among the Molalla to get them to join them in driving the Whitemen from Oregon. Other historians suggest that a similar effort by the Cayuse, and perhaps Klickitat Indians, was happening throughout the region; that, as far south as the Rogue River, these tribes were meeting with other tribal chiefs to convince them to join with the Columbia River Indians in a war to drive the Whites from their lands. In fact, Klickitat Indians were under suspicion for trading arms and ammunition among the tribes in the south, to help fortify them against the incursion of White ranchers, farmers and gold miners. Such trade meetings would include news from the north of conflicts, battles and war between the Columbia River tribes and the Americans, and perhaps helped stir up feelings of the need to protect the sovereignty of all of the tribes from the invaders. At this time, the "Cayuse War" was being fought in skirmishes along the Columbia, and many chiefs of several tribes saw what was happening: the Whitemen were coming in ever-increasing numbers and would soon take all the land and drive the Indians out. Molalla Chief Crooked Finger noted this, and participated in actions of resistance and retribution towards the Whitemen, not unlike many other tribal leaders.
Numerous reports of small thefts in the valley, as well as numerous reports of Indian men, like Crooked Finger, entering White homesteads and ordering White women to cook for them, suggest that the tribal chiefs were exacting a form of retribution upon the settlers for taking land without permission, for not paying the tribes, and for not paying deference to previous long-term tribal occupation and authority. Another suggestion is that the fears of the settlers towards the tribes were mainly stirred by rumors, and later by stories published in the Statesman Journal newspaper out of Salem. This was the "conservative" newspaper for Oregon at the time, and it published numerous editorials and letters about Indian depredations upon White settlements on the Columbia and in southern Oregon. Numerous letters published in the paper called for the "extermination of all Indians" before they could gather their forces and attack the Willamette Valley settlements. Extermination of the tribes would have eliminated an uprising, but would more likely solve another problem the tribes posed: they were living on some of the best farmlands, and along the best stretches of gold-mining rivers, and eliminating them would immediately open these resource-rich areas to White exploitation. Extermination fever was already raging in northern California after 1849, as settlers and miners sought to claim the best resource-rich lands, and in the process committed innumerable acts of genocide on the tribes by joining together as gangs of Ranger militia (reimbursed by the State of California for their expenses). After 1851, the Oregon Gold Rush caused a similar response, as genocidal gangs of White militia, also called Rangers and paid by the Oregon Legislature, committed genocide on numerous Indian villages over minor depredation claims, like theft of cattle or horses.
It is more likely that these actions of the Rangers would cause the feared response, and in 1855 the tribes gathered at Table Rock Reservation to live peacefully became fed up with the continued attacks on their people on the reservation and chose to act. Many of the Rogue River tribes gathered under Chief John and left the reservation to fight a total war against all settlers for the next year and a half, in an attempt to drive the Whites from their lands and save themselves from genocide. These fears of an uprising of the tribes among prominent pioneers in 1843 prompted them to begin forming the Oregon Provisional Government, of mainly Americans. In 1841-43, during the Wolf Meetings, these prominent settlers gathered, wrote, and voted upon laws for the new government, including a law to form an Oregon militia, created in order to protect the American settlements from Indian attacks. Some scholars suggest that the whole reason for forming the government was to protect the settlements from the Indians. This may have been one of the reasons, the other being to secure Oregon for the United States, and away from Great Britain. Many of the settlers’ fears were stirred by real concerns that the war on the Columbia would spill into the Willamette Valley and involve the Kalapuyans and Molallans, while the reality was that these same tribes had been so devastated in the 1830s by diseases that they had little will to make war against the Whitemen, even as they saw themselves losing all their lands and rights. In Dickie Prairie, the settler militia set out to drive the Klamaths out of the valley, and upon meeting them in a small prairie they began shooting; some accounts suggest that the first volley was an accident. Several Indian men were killed in the first volley, and a pause occurred when Red Blanket, a Molalla chief, was allowed to leave peacefully.
However, while walking away, he turned back and began shooting arrows very swiftly at the company of volunteers, and he was shot down. The Klamaths were then driven from Dickie Prairie. The next day the volunteers advanced further into the forest and drove the Klamaths further back, with some back-and-forth firing of arrows and shot. At the next prairie, near Scott’s Mills, several Indians were killed as they were backed up against a cliff. Men and women among the tribe were killed by the volunteers. The Klamaths were driven further south, and that evening they crossed the Santiam on their way south. In all, over ten Indian men and women were killed by the militia, and several women were taken hostage. When the militia was questioned about why they were so harsh on the Klamaths, who had only been a bit rude and mischievous at first, the militia leaders suggested that the first attack was a mistake, but that the Klamath Indians did not belong in the Willamette Valley, while the Molalla Indians did. Regardless of the settlers’ feelings about the Klamaths belonging in the valley, the Klamaths, or at least a few bands of them, had a regular habit of traveling into the Willamette Valley in the summers. This was likely part of their seasonal round, in which they traveled annually into the resource-rich valley to hunt elk and camp among their friends and kin, the Molalla Indians. The tribes were likely interrelated and had a firm trading relationship. The Molallas were certainly related to the Santiam Kalapuyans, as the daughter of Chief Coastno is recorded as marrying Chief Alquema of the Santiam. The Cascade Mountains from the Klamath Basin to the Willamette Valley are full of trails and trade routes. One such Klamath Trail lets out near Oakridge; another extends down the Santiam basin, follows the North Santiam into Salem along what is now State Street, and ends at the village of Chemeketa, at the Mill Creek outlet into the Willamette River.
One such visit by the Klamaths in 1846 resulted in the Battle Creek incident in the Salem Hills. The local Molalla Indians did not join the battle at Dickie Prairie. They remained peaceful, and never thereafter caused any problems for the settlers. However, several letters to the Superintendent of Indian Affairs in 1851 suggest that the Molalla Indians who remained at Dickie Prairie were harassed by settlers, and some chose to leave. Joel Palmer, in about 1855, was traveling up the McKenzie River basin into the Cascades and encountered a small village of Molallas. Palmer said they had moved south from Dickie Prairie some years earlier. The above account is as close to the truth of the matter as we can get. There are likely a few other details to be added. There are a dozen or more stories of the Battle of Abiqua in various books and newspapers. Many of the accounts are colored with the racist and romanticist ideas of the past 150 years. It’s important to read these accounts closely and meld them as carefully as possible, as well as to understand the culture of the tribes involved. Most of the history is not told from the Native perspective, and it is our role to find that perspective where we can. For Crooked Finger, there are transcriptions of his statements during the 1851 treaty with the Molalla. He must have been a powerful character to be around, perhaps one of the most brilliant of his time. He hated losing his land and insisted that the US pay all of the money they were offering in one lump payment. I think he knew he would not live long; he was killed a couple of years later for entering a Whiteman’s cabin without permission.
What does OPEC stand for? OPEC stands for the Organization of Petroleum Exporting Countries. The OPEC cartel dominates oil and gas supply around the world and influences crude oil prices in both oil-producing and oil-purchasing countries. It’s important for global investors to understand how OPEC’s policies affect economies and currencies around the world.

4 WAYS OIL PRICES IMPACT OPEC COUNTRIES’ ECONOMY AND CURRENCY

- OPEC Regulates Oil Prices
- Fracking Has Driven Down Oil Prices
- OPEC Keeps Oil Production Levels High
- OPEC Oil Embargo Spurred U.S. Domestic Oil Production

Since the supply of and demand for oil depend upon world economic and political events, OPEC conferences continue to discuss how oil-producing nations can seek a steady income for their oil exports in a volatile world.

1. OPEC Regulates Oil Prices

OPEC regulates oil prices to stabilize the economies of oil-exporting countries. Its goal is to coordinate the petroleum policies of all its members: to supply petroleum to buyers while earning a steady income and providing petroleum investors with a fair return on their money.

2. Fracking Has Driven Down Oil Prices

Fracking in the US and other new energy production technologies used by countries outside OPEC have forced the price of oil down. This has reduced the economic stability of OPEC countries, lowering their influence in the oil markets. As oil production around the world has increased and prices have dropped, OPEC’s influence has declined; it is not as significant a player in the oil industry as it used to be. If it were not for the fracking revolution in the United States, almost every other country in the world would have paid far more for oil over the past decade. Without the oil boom in the United States, OPEC and Russia would have dominated world energy markets.

3. OPEC Keeps Oil Production Levels High

Since June 2016, OPEC has kept oil production levels high.
Although lower prices led to a loss in revenue, the long-term plan was to push oil producers with higher production costs, like Canada and the United States, out of the market. OPEC thought the plan would help its members regain global energy market share. The high-pressure plan failed because US shale oil remained resilient: in response, American companies cut oil exploration expenses and production costs.

4. OPEC Oil Embargo Spurred U.S. Domestic Oil Production

Prior to the OPEC oil embargo in 1973, the United States did not embark on extensive domestic oil exploration because it was not cost-effective to keep large crude oil inventories. It was far more economical to buy cheap oil from the Middle East than to invest in local oil production, so there was no incentive to increase the production of domestic oil fields. OPEC received widespread criticism during the OPEC oil embargo, which is also sometimes referred to as the Arab oil embargo. Egypt and Syria attacked Israel on October 6th, 1973, on the Jewish holiday of Yom Kippur. The surprise invasion led to territorial gains around the Suez Canal and the Golan Heights. However, Israeli troops not only regained the lost territories but also seized Egyptian and Syrian land. To pressure Western countries into coercing Israel to withdraw, the Arab members of OPEC took swift and drastic action. They cut oil production, imposed radical price increases, and banned oil supply to countries that had supported Israel. This drastic action affected many countries, leading to a global energy crisis. Many developing countries could no longer afford to buy as much oil as before, and OPEC countries stopped oil shipments to the US and the Netherlands. By 1974, the price of oil had quadrupled, and both the US and its European allies had to reassess their reliance on oil from the Middle East.
Since its foundation in 1960, OPEC had focused on establishing good relationships with all its worldwide customers. On the surface, its sudden geopolitical manipulation of worldwide oil distribution appeared to be retaliation for Western support of Israel. But, in fact, the Middle East’s simmering resentment toward the West had started earlier, when US President Nixon released the dollar from the gold standard and declared it a fiat currency. The abrupt devaluation of the dollar revived the sluggish US economy at the expense of the Arab world. Since the revenues from oil sales were in US dollars, Arab economies suffered a colossal loss. The devaluation of the dollar came as a shock because the gold standard had been in place since the end of the Second World War. Since the demand for Middle Eastern oil had doubled over twenty-five years, oil-producing countries in the region had accumulated enormous wealth based on the strength of the US dollar. Because of the OPEC oil crisis, the US government imposed domestic rationing of fuels for trucks and cars and lowered driving speed limits on the freeways. Besides emphasizing the need for energy efficiency, the US also invested in building its own domestic oil industry. The OPEC oil embargo kept tensions so high that President Nixon considered military action to save the US economy. He was prepared to invade the Middle East and commandeer major oil fields in Saudi Arabia, Abu Dhabi, and Kuwait, believing such action necessary to keep the United States from experiencing an economic slowdown. However, in March 1974, frenetic negotiations in Washington lifted the oil embargo, making military action unnecessary.

OPEC: Organization, Membership, Influence

Here is a brief overview of OPEC’s organizational structure, membership, and global economic influence.

What Is OPEC?
The Organization of Petroleum Exporting Countries (OPEC) is an intergovernmental cartel of 14 major oil-exporting nations. Founded in September 1960 in Baghdad by Iraq, Iran, Saudi Arabia, Kuwait, and Venezuela, OPEC aimed to coordinate petroleum policies and support member countries with economic and technological help. OPEC’s headquarters are in Vienna, Austria, where it carries out day-to-day operations. According to OPEC news, Mohammed Sanusi Barkindo of Nigeria has served as the organization’s Secretary General, its chief executive, since 2016.

What Countries Are in OPEC?

According to the statutes of OPEC, the cartel is open to any significant oil-exporting country (including developing countries) that shares its ideals. Besides its five founding members (Iraq, Iran, Saudi Arabia, Kuwait, and Venezuela), there are now nine more countries in OPEC: Algeria, Angola, Congo, Ecuador, Equatorial Guinea, Gabon, Libya, Nigeria, and the United Arab Emirates. Membership is subject to change based on geopolitical factors, shifts in currency exchange rates, trade disputes, and so on. Qatar and Indonesia, for example, are no longer OPEC members because of disagreements with the organization. A map of OPEC countries does not include all the large oil-producing countries in the world. Substantial oil producers like the United States, China, and Russia are not OPEC members, and they have their own independent oil production policies.

How Does OPEC Influence the World Market?

OPEC has considerable influence on the global energy supply because OPEC nations hold about 80% of the world’s crude oil reserves and about half of the world’s natural gas reserves. Usually, OPEC has exerted its global influence to keep oil prices high to benefit its members. But occasionally it has tried to influence geopolitical issues, most notably during the Arab oil embargo.

Does the IEA Oppose OPEC?
On the surface, it would seem that the International Energy Agency (IEA) and OPEC have contrasting agendas. The IEA is committed to ensuring a sustainable environment, while OPEC is engaged in exploiting the earth’s resources for profit. OPEC was founded in 1960 to support the interests of oil-producing nations, while the IEA was founded in 1974 to support the interests of oil-consuming developed economies. Over the years, however, talks between the two organizations have settled on three common policies regarding the oil market: the first is to enhance the market’s predictability; the second, to increase its reliability; and the third, to enhance its stability. Abiding by these three oil policies enables the IEA to work toward less pollution while still allowing OPEC to produce oil at a profit. The reason for this alignment is simple: energy production and distribution affect economic growth. The IEA appears resigned to the fact that fossil fuels are here to stay despite the many sources of clean energy now available. It is more concerned with stabilizing the global oil market, natural gas production, and shale oil production than with trying to disrupt these industries. In this, it resembles the US Energy Information Administration, which is more focused on discussing how US monthly electricity increasingly comes from renewable sources than on condemning coal-fired generation. The primary concern of these organizations, designed to reduce environmental pollution, is not to push alternative technologies too rapidly and risk an economic slowdown.

How OPEC Impacts the World’s Economy and Currency

In the 1960s, when OPEC first emerged, world governments saw it as an organization interested in cooperation with other nations. By the early 1970s, this attitude changed abruptly. OPEC became widely criticized for precipitating a global economic crisis by raising prices and reducing production and supply.
OPEC asserted its economic clout for two reasons: to retaliate against Western democracies that had adopted a pro-Israeli and anti-Arab stance during the Yom Kippur War, and out of resentment for the huge revenue losses Arab states experienced after the US dollar dropped the gold standard and became a fiat currency. However, OPEC eventually had to retreat from its tough policies and resume supplying the US and Western Europe. At the time, US President Richard Nixon escalated pressure on OPEC delegates to come to an agreement in Washington, DC, by flatly stating that he planned to seize Saudi oil fields through military force as a last resort. Until 2014, oil production and prices remained fairly steady; then oversupply by non-OPEC producers caused natural gas and global oil prices to fall. Today, oil prices, as exhibited by WTI oil price charts on the New York Mercantile Exchange, are on a wild ride. Recently, for instance, crude futures moved upward because of an outage at a US East Coast refinery. In the long run, however, analysts expect the price to decline as US shale oil output increases. If you compare West Texas Intermediate (WTI) crude oil prices in 2018 to the prices in 2019, you can see a drastic drop. While this is good news for oil-importing nations, it is an alarming situation for oil-exporting OPEC nations, because oil is often their main source of revenue. Venezuela’s currency, for example, is in critical condition because of declining oil revenues. OPEC’s plan to undermine oil competitors like the US and Canada has backfired, even though these non-OPEC oil producers have higher production costs. Production and pricing fluctuate based on trading patterns between nations, news of currency recaps, demand from rapidly industrializing Asian countries, and the oil production levels of non-OPEC producers. Even news of the Iraqi dinar revaluation can affect crude oil prices.
The US invasion of Iraq not only changed the regime of Saddam Hussein but also affected oil supply quotas around the world, thus affecting the economies of countries that had nothing to do with the war. Geopolitical factors like US-China trade tensions have a ripple effect that indirectly shapes crude oil prices in OPEC member countries. The recent conflict between the Trump administration and Iran over a drone shot down by the Iranian air force could cause a change in Iranian oil supply quotas to US allies. Even thoughtless comments made by the energy minister of the United Arab Emirates at an OPEC meeting, if reported by the press, could trigger a trade dispute, move currency rates, and even affect oil output and OPEC production quotas. The world’s economy is now so interconnected that even trade disputes in non-OPEC countries affect OPEC production decisions.