Big Data is Shackling Mankind's Sense of Creative Wonder
by Ashutosh Jogalekar

Primitive science began when mankind looked upward at the sky and downward at the earth and asked why. Modern science began when Galileo, Kepler and Newton answered these questions in the language of mathematics and began codifying the answers into general scientific laws. Since then, scientific discovery has been constantly driven by curiosity, and many of the most important answers have come from questions of the kind asked by a child: Why is the sky blue? Why is grass green? Why do monkeys look similar to us? How does a hummingbird flap its wings?

With the powerful tool of curiosity came the even more powerful fulcrum of creativity, around which all of science hinged. Einstein imagining himself on a light beam was a thoroughly creative act; so were Ada Lovelace's thoughts about a calculating machine doing something beyond mere calculation, James Watson and Francis Crick's DNA model-building exercise, and Enrico Fermi's sudden decision to put a block of paraffin wax in the path of neutrons. What is common to all these flights of fancy is that they were spontaneous, often spur-of-the-moment, informed at best by meager data and mostly by intuition. If Einstein, Lovelace and Fermi had paused to reconsider their thoughts because of the absence of hard evidence or statistical data, they might at the very least have been discouraged from exploring these creative ideas further.

And yet that is what I think the future Einsteins and Lovelaces of our day are in danger of doing. They are in danger of doing this because they increasingly live in a world where statistics and data-driven decisions are becoming the beginning and end of everything, where young minds are constantly cautioned not to speculate before they have enough data. We live in an age where Big Data, More Data and Still More Data seem to be all-consuming, looming over decisions both big and mundane, from driving to ordering pet food to getting a mammogram. We are told that we should not make any decision until it has been substantiated through statistics and large-scale data analysis.

Now, I will be the first to advocate making decisions based on data and statistics, especially in an era when sloppy thinking and speculation based on incomplete or non-existent data seem to have become the very air that the media and large segments of the population breathe. Statistical reasoning in particular has been found to be both paramount and sorely lacking in decision-making, and books like Daniel Kahneman's "Thinking, Fast and Slow" and Nate Silver's "The Signal and the Noise" have stressed how humans are intrinsically bad at probabilistic and statistical thinking, and how this weakness leads them to consistently make wrong decisions. It seems that a restructuring of our collective thinking process grounded in data would be a good thing for everyone.

But there are inherent problems with implementing this principle, quite apart from the severe limitations that an excess of data-based thinking imposes on creative speculation. Firstly, except in rare cases, we simply don't have all the data necessary for making a good decision. Data itself is not insight; it is simply raw material for insight. This problem is seen in the nature of the scientific process itself: in the words of the scientist and humanist Jacob Bronowski, in every scientific investigation we decide where to make a "cut" in nature, a cut that isolates the system of interest from the rest of the universe.
Even late into the process, we can never truly know whether the part of the universe we have left out is relevant. Our knowledge of what we have left out is thus not just a "known unknown" but often an "unknown unknown". Secondly, and equally importantly, the quality of the data often takes a back seat to its quantity; too many companies and research organizations seem to think that more data is always good, even when more data can mean more bad data. Thirdly, even with a vast amount of data, human beings are incapable of digesting this surfeit and making sure that their decisions account for all of it. And fourthly, and most importantly, making decisions based on data is often a self-fulfilling prophecy; the hypotheses we form and the conclusions we reach are inherently constrained by the data. We get obsessed with the data we have and develop tunnel vision, and we ignore the importance of the data we don't have. This means that all our results are only going to be as good as the existing data.

Consider a seminal basic scientific discovery like the detection of the Higgs boson, forty years after the prediction was made. There is little doubt that this was a supreme achievement, a technical tour de force that came about only because of the collective intelligence and collaboration of hundreds of scientists, engineers, technicians, bureaucrats and governments. The finding was of course a textbook example of how everyday science works: a theory makes a prediction and a well-designed experiment confirms or refutes the prediction. But how much more novelty would the LHC have found had the parameters been significantly tweaked, had the imaginations of the collider's designers and operators been set loose? Maybe it would not have found the Higgs then, but it might have discovered something wholly different and unexpected. There would certainly have been more noise, but there would also have been more signal, signal that could have led to discoveries which nobody predicted and which might have charted new vistas in physics.

One of the major complaints about modern fundamental physics, especially in areas like string theory, is that it is experiment-poor and theory-rich. But experiments can only find something new when they don't stay too close to the theoretical framework. You cannot always let prevailing theory dictate what experiments should do. The success of the LHC in finding the Higgs and nothing but the Higgs points to the self-fulfilling prophecy of data that I mentioned: the experiment was set up to find or disprove the Higgs, and the data contained within it the existence or absence of the Higgs. True creative science comes from generating hypotheses beyond the domain of the initial hypotheses and the resulting data. These hypotheses have to be confined within the boundaries of the known laws of nature, but there still has to be enough wiggle room to at least push against those boundaries, if not try to break free of them. My contention is that we are gradually becoming so enamored of data that it is clipping and tying down our wings, not allowing us to roam free in the air and explore daring new intellectual landscapes. It's very much a case of the drunk looking for his keys under the lamppost because that's where the light is.

A related problem with the religion of "dataism" is the tendency to dismiss anything that constitutes anecdotal evidence, even when it can lead to creative exploration.
"Yes, but that's an n of 1" is a refrain you must have heard from many a data-entranced statistics geek. It's important not to regard anecdotal evidence as sacrosanct, but it's equally wrong, in my opinion, to simply dismiss it and move on. Isaac Asimov reminded us that great discoveries in science are made when an odd observation or fact makes someone go, "Hmm, that's interesting". But if the reaction is instead "Interesting, but that's just an n of 1, so I am going to move on", you are potentially giving up on hidden gems of discovery. With anecdotal data also comes storytelling, which has always been an integral part not just of science but of the human experience. Both arouse our sense of wonder and curiosity; we are left fascinated and free to imagine and explore precisely because of the paucity of data and the lone voice from the deep.

Few scientists and thinkers drove home the importance of taking anecdotal storytelling seriously as well as the late Oliver Sacks. Every one of Sacks's books is populated with fascinating stories of individual men and women with neurological deficits or abilities that shed valuable light on the workings of the brain. If Sacks had dismissed these anecdotes as insufficiently data-rich, he would have missed discovering the essence of important neurological disorders. Sacks also extolled the value of looking at historical data, another source of wisdom that is too easily dismissed by hard scientists who consider all historical data suspect because it lacks large-scale statistical validation. Sacks regarded historical reports as especially neglected and refreshingly valuable sources of novel insights; in his early days, his insistence that his hospital's weekly journal club discuss the papers of their nineteenth-century forebears was met largely with indifference. But this exploration off the beaten track paid dividends. For instance, he once realized that he had rediscovered a key hallucinatory aspect of severe migraines when he came across a paper on similar self-reported symptoms by the English astronomer John Herschel, written more than a hundred years earlier. A data scientist would surely dismiss Herschel's report as nothing more than a fluke.

The dismissal of historical data is especially visible in our modern system of medicine, which ignores many medical reports of the kind that people like Sacks found valuable. It does an even better job of ignoring the vast amount of information contained in the medical repositories of ancient systems of medicine, such as the Chinese and Indian pharmacopeias. Admittedly, there are many inconsistencies in these reports, so they cannot all be taken literally, but neither is ignoring them fruitful. Like all uncertain but potentially useful data, they need to be dug up, investigated and validated so that we can keep the gold and throw out the dross. The great potential value of ancient systems of medicine became apparent two years ago, when the Nobel Prize in medicine was awarded to the Chinese medicinal chemist Tu Youyou for her lifesaving discovery of the antimalarial drug artemisinin. Tu was inspired to make the discovery when she found a process for low-temperature chemical extraction of the drug in a 1,600-year-old Chinese text titled "Emergency Prescriptions Kept Up One's Sleeve".
This obscure, low-visibility data point would certainly have been dismissed by statistics-enamored medicinal chemists in the West, even if they had known where to find it. Part of recognizing the importance of Eastern systems of medicine consists in recognizing their very different philosophy: while Western medicine is highly reductionist and seeks to attack the disease, Eastern medicine takes a much more holistic approach that seeks to modify the physiology of the individual. This kind of philosophy is harder to study in the traditional double-blinded, placebo-controlled clinical trial that has been the mainstay of successful Western medicine, but the difficulty of implementing a particular scientific paradigm should not be an argument against its serious study or adoption. As Sacks's and Tu's examples demonstrate, gems of discovery still lie hidden in anecdotal and historical reports, especially in medicine, where even today we understand so little about entities like the human brain.

Whether it's the LHC or medical research, the practice of gathering data and relying only on that data keeps us close to the ground when we could be soaring high in the air without these constraints. Data is critical for substantiating a scientific idea, but I would argue that it actually makes it harder to explore wild, creative scientific ideas in the first place, ideas that often come from anecdotal evidence, storytelling and speculation. A bigger place for data leaves increasingly smaller room for authentic and spontaneous creativity. Sadly, today's publishing culture also leaves little room for pure speculation-driven hypothesizing. As just one example of how different things have become, in 1960 the physicist Freeman Dyson wrote a paper in Science speculating on possible ways to detect alien civilizations based on their capture of heat energy from their parent star. Dyson's paper contained enough calculations to make it at least a mildly serious piece of work, but I feel confident that in 2017 his paper would probably be rejected from major journals like Science and Nature, which have lost their taste for interesting speculation and have become obsessed with data-driven research.

Speculation and curiosity have been mainstays of human thinking since our origins. When our ancestors sat around fires and told stories of gods, demons and spirit animals to their grandchildren, it made the wide-eyed children wonder and want to know more about these mysterious entities that their elders were describing. This feeling of wonder led the children to ask questions. Many of these questions led down wrong alleys, but the ones that survived later scrutiny launched important ideas. Today we would dismiss these undisciplined mental meanderings as superstition, but there is little doubt that they involve the same kind of basic curiosity that drives a scientist.

There is perhaps no better example of a civilization that went down this path than ancient Greece. Greece was a civilization full of animated spirits and gods that controlled men's destinies and the forces of nature. The Greeks certainly found memorable ways to enshrine these beliefs in their plays and literature, but the same cauldron that imagined Zeus and Athena also created Aristotle and Plato. Aristotle and Plato's universe was a universe of causes and humors, of earth and water, of abstract geometrical entities divorced from real-world substantiation. Both men speculated with fierce abandon.
And yet both made seminal contributions to Western science and philosophy even as their ideas were accepted, circulated, refined and refuted over the next two thousand years. Now imagine if Aristotle and Plato had refused to speculate on causes and on human anatomy and physiology because they had insufficient data, if they had turned away from imagining because the evidence wasn't there. We need to remember that much of science arose as poetic speculation on the cosmos.

Data kills the poetic urge in science, an urge that the humanities have recognized for a long time and which science, too, has had in plenty. Richard Feynman once wrote, "Poets say that science takes away the beauty of the stars and turns them into mere globs of gas atoms. But nothing is 'mere'. I too can see the stars on a desert night, but do I see less or more? The vastness of the heavens stretches my imagination; stuck on this carousel my little eye can catch one-million-year-old light…What men are poets who can speak of Jupiter as if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent?" Feynman was speaking to the sense of wonder that science should evoke in all of us. Carl Sagan realized this too when he said that not only is science compatible with spirituality, but it is a profound source of spirituality. To realize that the world is a multilayered, many-splendored thing, to realize that everything around us is connected through particles and forces, to realize that every time we take a breath or fly on a plane we are being held alive and aloft by the wonderful and weird principles of mechanics and electromagnetism and atomic physics, and to realize that these phenomena are actually real, as opposed to the fictional revelations of religion, should be as much a spiritual experience as anything else in one's life. In this sense, knowing about quantum mechanics or molecular biology is no different from listening to the Goldberg Variations or gazing up at the Sistine Chapel.

But this spiritual experience can come only when we let our imaginations run free, constraining them in the straitjacket of skepticism only after they have furiously streaked across the sky of wonder. The first woman, when she asked what the stars were made of, did not ask for a p value.
Source: https://3quarksdaily.com/3quarksdaily/2017/11/big-data-is-shackling-mankinds-sense-of-creative-wonder.html
We continue with the discussion of the legendary tales from Tenmangū. Since we gained an understanding of these shrines through the history of Sugawara no Michizane in part 1, we will now proceed with those tales and get an idea of how deeply they are tied to the yearly ox Zodiac sign theme. Note that many of these stories were created long ago in Japan's past, during a time when superstition was prevalent and natural phenomena were believed to have been caused by one of many gods. Whether they are believable or not, they play a big role in the development of both culture and society.

BIRTH & DEATH

Sugawara no Michizane was elevated to the level of a divine being after his death due to his contributions while he was alive. This isn't so unusual, as there are plenty of examples of this happening not only in Japan but in other countries as well. Interestingly, one could say that this was already predetermined on the day of his birth. A tale told at the Tenmangū shrines is that his birth was an auspicious one and truly denotes his connection with the ox Zodiac sign, a connection considered beyond normal. In this particular tale, Michizane's birth is recorded as having occurred not only in the year of the ox, but also on the day of the ox, and at the time of the ox¹.

What does this mean? The Zodiac signs have a multitude of purposes, some utilitarian, others mystical. In the past, they were used to denote years, days and times, which was key for fortune telling. Depending on the period and the tasks at hand, a person might believe they would see benefits, or would heed caution and refrain from doing anything important. In Michizane's case, the repeated occurrence of the ox sign in his birth is quite auspicious, and viewed as beyond normal. On top of this, Michizane is said to have died on the day of the ox. Such a repetition of a Zodiac sign may point to him being divine, like a deity who took the form of a human. As for the ox reference, one could interpret it as the ox bringing him into the world as well as returning him to his true realm, since the ox is naturally a vehicle of the gods. More on this point later.

VENGEFUL SPIRIT, WRATHFUL GOD

This tale can almost be seen as a continuation of part 1, based on how it's told in the visual records of the Kitanō Tenmangū shrine called "Kitanō Tenjin Engi Emaki" (北野天神縁起絵巻). In 908, just 3 years after Michizane's death, a member of the Fujiwara clan died suddenly from disease. One year later, Fujiwara no Tokihira, the main antagonist in Michizane's misfortune, also died from disease. In 913, the new Minister of the Right, Minamoto no Hikaru, died tragically by drowning while out on a hunting expedition. As the Fujiwara clan gained a stronger hold on both the Imperial palace and the Imperial family, more tragedy befell them. Such can be seen in the incident of 930, when a lightning storm struck a building on the Imperial grounds where many members of the Fujiwara family were gathered, resulting in a few of them dying on the spot or later passing away from lightning burns. The final tragedy befell the 60th emperor, Daigō, who is believed to have been the main target of the lightning storm. After the incident, Emperor Daigō's health deteriorated until he finally died 3 months later. The cause of all this is viewed as linked to his acceptance of the accusations made by Tokihira and others, and to Michizane's exile from Heian Kyō.
This entire story is seen as an act of revenge by Michizane's spirit that played out over almost 30 years. Initially, as these events were unfolding, the consensus within the Imperial palace was that Michizane's vengeful spirit was cursing the Fujiwara clan. There were different attempts to appease him, such as bestowing upon him different titles, including Minister of the Right, which had been taken from him through slander while he was still living. The lightning storm was the most severe of these events; it happened after the Fujiwara clan had become part of the Imperial family through one of their women conceiving a child with then-Emperor Daigō, making the child a prince. As a result, a Fujiwara member was sent to Anrakuji, where Michizane was buried, to build a shrine for his enshrinement. This shrine was then named Tenmangū. A few centuries later the Kitanō Tenjin Engi Emaki was created, which retells this story.

While some described him as a vengeful spirit, Tenmangū instead envisions him as a wrathful god punishing wrongdoers in an act of justice. As a result, Michizane is called by several other names, including "Raijin" (雷神), which means "Thunder God". According to old beliefs, a thunder god is generally depicted in the guise of an oni (鬼, demon) with horns². In the Zodiac, the combination of the Ox and Tiger signs refers to demons, both metaphorically (they point towards the unlucky north-east direction on the typical Zodiac chart) and visually (demons are usually illustrated with ox-like horns and wearing tiger-fur loincloths). This ties back to Michizane being born in the year of the ox, which contributes to this image.

PERSONAL RELATIONSHIP WITH OXEN

There is a legend that Michizane had encounters with an ox that may have been his guardian spirit in disguise. During his youth, Michizane found a baby ox wandering alone in a wooded area. As it appeared to be lost or abandoned, he took it into his residence, where he nurtured it until it grew into an adult. At some point, just as suddenly as it had appeared in his life, this ox disappeared without a trace. While he wanted to set out to search for it, in the end he let the matter go. Fast forward to his exile in Dazaifu in the south: Michizane one day traveled to Dōmyōji (道明寺, Dōmyō Temple) in Osaka to visit a relative³. After parting ways, he set out to head back home when he was unexpectedly attacked by an assailant. Before harm could befall him, a large ox suddenly appeared and drove the assailant away, saving Michizane's life. Just as quickly as it appeared, this ox disappeared from sight in the same way.

One way to interpret this story is that the baby ox was a spirit. Since Michizane showed kindness and helped raise it, this ox spirit in return acted as a guardian spirit. In a way, it is not so different from many other Japanese fabled tales of a similar nature. Although it is just a legend, it contributes to Michizane's ever-persistent connection with the ox Zodiac sign. On another note, while in this version of the story the color of the ox is not mentioned, I've heard another, very brief version in which Michizane was rescued by a white ox. While I'm not sure if this is a variation of the story mentioned above, the white ox has significance in connection with the god Shiva, on whom the Tenjin of Tenmangū is loosely based.
AN OX'S STUBBORNNESS AS FATE

Another story relates directly to what took place after Michizane's death and the decision about what to do with his remains. In his final days, Michizane wrote a poem as part of his will stating that "people should allow themselves to be pulled along in a wagon by an ox, letting it take us wherever it may desire, and to eventually be buried in the spot where it stops"⁴. Following this last wish, those sent to bury his remains placed them in an ox-drawn wagon and intended to carry them in a procession all the way to Heian Kyō (present-day Kyōto). During the journey, the ox suddenly stopped in the middle of the road, lay down, and wouldn't move. They hadn't made it far, as they were still in the southern part of Japan. Despite efforts to get it to stand up and proceed again, the ox wouldn't budge. With no other choice, they took Michizane's remains to a nearby temple called Anrakuji and had them buried there.

At Tenmangū shrines, the underlying point of this story is that everything happened according to fate. Michizane was destined to be laid to rest in the south, and the ox was like a divine messenger showing where the burial spot should be. Interestingly, this is where Michizane was enshrined in the first Tenmangū shrine, thus being deified. Again we see the significance of the ox, whether we choose to view this as chance or as fate.

OX AS A SERVANT OF THE GODS

If we look at the stories mentioned above, we see that the ox had a close role in the life of Sugawara no Michizane, as well as after his death. At the Tenmangū, the ox is often described as a "shinshi" (神使), which can mean a servant or messenger of the gods. According to Shinto beliefs, there are spiritual creatures who, acting on the will of the god(s) they serve, come down to earth to handle the tasks they were assigned. At times, humans may also view these spiritual creatures as gods themselves. They take the guise of earthly creatures such as foxes, monkeys, birds, snakes and centipedes. In the Tenjin faith of Tenmangū, the ox is the main servant. From another perspective, the ox can also be viewed as a vehicle for the gods. In Eastern religions and beliefs, gods are depicted as coming down to Earth on the backs of divine creatures, including boars, horses and oxen. There are artworks that feature Michizane sitting on the back of an ox, although in these he is in his human form, as if to say he did this while he was alive. Since Michizane is deified and now recognized as the Tenjin, this is fitting.

These are the majority of the legendary tales from the Tenmangū. Bearing so many references to the ox, they give an idea of how important their underlying messages are, especially when the ox Zodiac years come around. This brings the two-part series to a close. I hope readers enjoy this piece of history and get an understanding of how intricately interwoven the Zodiac signs are with Japanese culture.

1) This is commonly written as "丑の年の丑の日の丑の刻", which reads "ushi no toshi no ushi no hi no ushi no koku".

2) This is more in the vein of a divine demon, who is a guardian of Buddhism. Another way to describe this would be "onigami" (鬼神), or "demon god".

3) This relative is stated to be an oba (叔母), which could mean aunt.
4) Although written in modernized Japanese, this is an interpretation of the poem: "Kuruma wo ushi ni hikasete, ushi no yuku mama ni makase, ushi no tomatta tokoro ni hōmuttekure". Note that during the Heian period, ox-drawn wagons were popular among the populace, which may have influenced him to write this.
Source: https://lightinthecloudsblog.com/tag/raijin/
How COVID-19 can impact pregnancy

September 9, 2021

Maternal-fetal medicine specialists Joana Lopes Perdigao, MD, and Sarosh Rana, MD, MPH, answer common questions about coronavirus (COVID-19) and pregnancy.

Are pregnant people at higher risk for COVID-19?

Pregnancy causes many changes in the body and has a strong effect on the immune system. While we are still learning about SARS-CoV-2, the novel coronavirus that causes COVID-19, we have started to learn some important things about how it affects pregnant people. While we don't yet know if pregnant people are more likely to contract COVID-19 after they are exposed to the virus, we do know that pregnant people are more likely to experience severe illness if they do get sick. That means they have an increased risk of hospitalization, ICU admission, the need for a ventilator, and death. Physicians have also found that contracting COVID-19 leads to an increased risk of pregnancy complications, such as preterm birth. There is also data indicating that experiencing a severe case of COVID-19 increases the risk of complications such as cesarean delivery, hypertensive disorders of pregnancy such as preeclampsia, and postpartum hemorrhage. These risks are of even more concern for individuals who have additional health risk factors, such as being overweight or obese, having high blood pressure or diabetes, or being part of a minority group that may experience more severe outcomes.

The delta variant of SARS-CoV-2 is even more contagious than previous strains of the virus, which makes it more likely for an individual to contract the virus and get sick if they are exposed. Because it is so contagious, the delta variant is circulating at high rates in the community, and the risk of being exposed continues to go up. It is not yet clear whether the delta variant causes more serious illness than previous virus strains; more research is needed. It is also possible that as the virus continues to spread and mutate, strains such as the mu variant or other new strains may evolve that are more or less contagious, or that cause more or less severe illness.

What can I do to reduce my risk of catching COVID-19 and/or prevent severe illness?

The number one thing you can do to protect yourself from COVID-19 is get vaccinated. The COVID-19 vaccines currently available have been found to be safe and highly effective for individuals over the age of 12, including those who are pregnant. On August 23, 2021, the Comirnaty vaccine (previously called the Pfizer-BioNTech mRNA vaccine) was given full FDA approval for individuals age 16 and up. CDC data indicate that the vaccine presents no safety concerns for pregnant people or their babies. Data looking at pregnancy outcomes in nearly 2,500 pregnant people who received an mRNA vaccine (Comirnaty or Moderna) before 20 weeks of pregnancy found no increase in the risk of miscarriage, and there is no evidence that the vaccine impacts future fertility or breastfeeding. However, no vaccine is perfect. Even though the vaccines are highly effective at preventing COVID-19, and are especially effective at preventing severe illness and death, it is still possible to contract the virus after being vaccinated.
In addition to getting vaccinated, we recommend that our patients follow all currently recommended public health guidelines, which include avoiding large crowds, wearing masks in public, frequent handwashing and social distancing.

If I'm pregnant and have been vaccinated, are there any additional precautions I should be taking?

We would advise that pregnant people behave more cautiously than if they were not pregnant. While the COVID-19 vaccines are highly effective, no vaccine is perfect; and with the spread of the delta variant, we are seeing an increase in the number of vaccinated people testing positive for the virus. Pregnant people should avoid high-risk situations such as traveling in an airplane or attending indoor gatherings with people outside of their own household, especially large groups of people. You should wear a mask when indoors with people outside of your household and when you are in crowded outdoor areas. In general, settings that are outdoors and uncrowded (where social distancing is possible) are relatively safe, as are settings where everyone is masked. If you received one of the mRNA vaccines, you should speak with your physician about whether you are eligible for a third or booster dose of the vaccine, and if so, when you should schedule your shot.

As a pregnant person, what should I do if I have symptoms?

Call your doctor, but do not come in for treatment right away unless you are experiencing severe, life-threatening symptoms, such as struggling to breathe or a very high fever. After a series of questions, your doctor will help you determine whether you need to come to the hospital or stay at home. If possible, you should get tested for COVID-19. Your doctor may ask you to quarantine and rest at home. If you stay at home, stay away from others as much as possible. You should remain in a specific "sick room", away from other people and any pets.

If you are a University of Chicago Medicine patient, you can sign up to get a COVID-19 test at our clinic. First, call our COVID-19 triage hotline at 773-702-2800 or log in to your MyChart account to complete a screening questionnaire. The healthcare provider who reviews your questionnaire will determine if you should schedule an appointment to be tested. You can then schedule the testing appointment over the phone or through your MyChart account. We currently provide inside and curbside testing at our Hyde Park and Orland Park locations.

If you are unable to completely avoid others while you are sick, you should wear a face mask when you are around other people. If you feel worse, call your doctor right away, and your doctor will decide if you need to come into the hospital.

What if I test positive for COVID-19 while I am pregnant?

If you do test positive for COVID-19, you should quarantine at home for at least 10 days after your symptoms first appear, and until it has been at least 24 hours with no fever without the use of fever-reducing medications. While you are at home, stay away from others as much as possible, including other family members and pets. You should remain in a specific "sick room", away from other people in your home. Use a separate bathroom, if available. Make sure to carefully dispose of all potentially infectious waste, such as used tissues, and wash your hands thoroughly and frequently. Even if you are experiencing no symptoms or only mild symptoms, self-quarantine for 10 days after you test positive.
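For readers who like to see the rule spelled out mechanically, here is a minimal sketch of how the two quarantine conditions above combine; the dates are hypothetical, and this is only an illustration of the stated guidance, not medical software.

```python
# A minimal sketch (not medical software) encoding the quarantine rule stated
# above: stay home at least 10 days after symptoms first appear AND until at
# least 24 hours have passed with no fever.
from datetime import datetime, timedelta

def earliest_quarantine_end(symptom_onset: datetime, last_fever: datetime) -> datetime:
    """Return the earliest time at which both conditions in the guidance are met."""
    ten_days_after_onset = symptom_onset + timedelta(days=10)
    day_after_last_fever = last_fever + timedelta(hours=24)
    return max(ten_days_after_onset, day_after_last_fever)

# Example with hypothetical dates:
onset = datetime(2021, 9, 1)
fever_ended = datetime(2021, 9, 8)
print(earliest_quarantine_end(onset, fever_ended))  # 2021-09-11: the 10-day rule dominates here
```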
If there are other family members in the home, try to isolate yourself to lessen the chance of spreading the virus. Discuss beforehand, with the baby's co-parent, a family member and your healthcare team, the choices you would want made for you and your baby in case you become critically ill and unable to make those choices yourself. The mode of delivery (vaginal versus C-section) and the choice of pain control should not be different for COVID-19-positive patients. However, the number of companions who can be with you during labor will be limited.

Should I keep my prenatal appointments if I'm sick with COVID-19?

It is strongly recommended that you stay home and avoid being around other people while you are sick. If you believe you may have COVID-19, we ask that you call your physician to discuss your symptoms and determine next steps. At UChicago Medicine, we are currently conducting many appointments via telemedicine. After a virtual visit or telephone call, we can decide if you need to come in for an appointment based on your pregnancy history and needs. There is evidence that COVID-19 leads to increased risk of preterm birth, cesarean delivery, hypertensive disorders of pregnancy such as preeclampsia, and postpartum hemorrhage. It is therefore important that you let your doctor know if you suspect you have, or have been diagnosed with, COVID-19.

If you have a pre-scheduled appointment, call your doctor's office and see if it is possible to do a telemedicine visit. If you are asked to come into the doctor's office, practice social distancing (do not give your doctor or nurse a hug or shake their hand), and wash your hands frequently. Your doctor's office should follow the cleaning protocols commonly used in clinical spaces between patient visits. Similar principles apply to your ultrasound: confirm with your doctor whether it is necessary that you get an ultrasound or whether it can be delayed. Contact your doctor's office immediately if you have any signs of labor or bleeding, or if your baby is not moving.

What extra precautions is UChicago Medicine taking to safeguard pregnant and delivering patients when current patients have the virus?

Patients who have tested positive for COVID-19 are kept in special isolation rooms. Patients with COVID-19 who are delivering will be on the Labor and Delivery unit but in a separate area from other patients. We have developed protocols in coordination with the infectious diseases team and other specialists, including specialists from other prestigious healthcare institutions in Boston, New York and Texas, to provide the safest and most evidence-driven care for all our patients during the pandemic.

Will my partner be allowed in the delivery room during the pandemic? Will visitors be allowed after birth?

We are allowing two support people to accompany you during a vaginal delivery and one support person during a cesarean section. For more information, read our current Family Birth Center visitor guidelines.

Can I transmit the virus to my baby?

Though it is uncommon, it does appear to be possible for parents to transmit the virus to their unborn child, either just before, during or after birth, as some newborns test positive for COVID-19. Most newborns who test positive for COVID-19 experience only mild symptoms or are asymptomatic (experience no symptoms at all). Rarely, a newborn may develop a severe illness.
If you test positive for COVID-19, please wear a mask when you are within six feet of your infant and when breastfeeding, and stay more than six feet away as much as possible. You should wash your hands with soap and water for at least 20 seconds before holding or touching your newborn. A team of pediatricians will be available to care for your baby if any medical conditions arise. Even after you are no longer positive for COVID-19, and/or if there are other caregivers providing care for the newborn, all caregivers should wash their hands for at least 20 seconds with soap and water before touching the baby, or should use a hand sanitizer with at least 60% alcohol if soap and water are not available.

You should monitor your newborn for COVID-19 symptoms. Contact your physician if your baby has a fever, appears to be lethargic (overly tired), has a runny nose or cough, appears to be having difficulty breathing, or is vomiting or has diarrhea.

Should I think about home birth?

If you are considering home birth, please discuss this with your doctor. Despite the COVID-19 pandemic, the American College of Obstetricians and Gynecologists (ACOG) guidelines suggest that it is still safer to deliver in a hospital. At UChicago Medicine, we are taking steps to ensure the safety of all of our patients, with and without COVID-19, and the risk of contracting the virus in the hospital is very low. Individuals with COVID-19 are at increased risk of complications during pregnancy and delivery. If you tested positive for COVID-19 and plan to deliver at home, please consider your ability to get an emergency cesarean section (C-section). As the Labor and Delivery unit continues to follow rules and recommendations from the infection control team, your care may be delayed. This could have adverse effects on your pregnancy outcome and the outcomes of your baby if you attempt to deliver at home and reach the hospital in critical condition.

Joana Lopes Perdigao, MD
Maternal-fetal medicine specialist Joana Lopes Perdigao, MD, provides high-risk obstetrical care for patients with congenital heart disease and other conditions.

Sarosh Rana, MD, MPH
Sarosh Rana, MD, MPH, is Section Chief of Maternal-Fetal Medicine. She is an expert in preeclampsia and performs high-level ultrasounds, which provide a greater assessment of the fetus than traditional ultrasounds.

Labor and Delivery Safety Precautions During COVID-19

We know this is a very special time in your life, and these are also difficult times with the current pandemic. You may be wondering what your care will look like and if things will be different. We want you to know that our labor and delivery team is here to care for you and your family. Some things have changed to keep you and your family safe during this time. We have developed protocols in coordination with the infectious diseases team and other specialists, including specialists from other prestigious healthcare institutions in Boston, New York and Texas, to provide the safest and most evidence-driven care for all our patients during the pandemic.

When you arrive, you and your support person must both wear a mask in the hospital. This is recommended by the CDC, and the hospital has put this in place to keep everyone safe while they are here.
We are testing every patient in Labor and Delivery for COVID-19. This is very important for you and your family, to make sure you get the care you need during and after your delivery. If you test positive, we will talk with you about the next steps and what we recommend for you. Patients who have tested positive for COVID-19 are treated with special isolation precautions, involving the use of protective equipment to prevent the spread of the virus to other patients and staff.

Your support person is an important member of your care team. This person must also wear a mask in the hospital. We also want to make sure your support person has meals while at the hospital; we will have three meals a day delivered for them. Your support person must stay in your hospital room and may not spend time in common areas. Support people are allowed to leave the hospital and return as needed. Read more important information about guidelines and requirements for your support person during COVID-19.

When your baby arrives, they will stay with you in your room until you go home. Your baby will be cared for by a dedicated pediatrician and care team in the hospital. It is important that you and your support person wash your hands really well before holding your baby. This will help keep your baby safe from germs.

Your care team will keep providing you with the best care. You will have a doctor and nursing team caring for you during your stay. If you need any other support, we have resources here to help you. This includes a lactation consultant if you are breastfeeding, emotional or spiritual advisors, and support to help with bonding and developmental resources. We also have our in-room entertainment (GetWell Network) with a variety of educational videos to support you in caring for your new baby. Ask your nurse to connect you with any of these resources. Everyone is here to help support you.

Breastfeeding and COVID-19

Some people have questions about whether COVID-19 can be passed to a baby through breastfeeding. A lot is still unknown about how COVID-19 is spread. Person-to-person spread is believed to happen mainly through respiratory droplets passed on when an infected person coughs or sneezes. This is like how influenza (the flu) and other respiratory pathogens spread. In some studies on COVID-19 and another coronavirus infection, Severe Acute Respiratory Syndrome (SARS-CoV), the virus has not been found in breast milk. However, we do not know for certain whether the COVID-19 virus can be passed in breast milk.

Breast milk gives protection against many illnesses, and there are only rare situations in which breastfeeding or feeding expressed breast milk is not recommended. The CDC does not have specific guidelines for breastfeeding when a person is infected with similar viruses such as SARS-CoV or Middle East Respiratory Syndrome (MERS-CoV). The CDC recommends that people with the flu keep breastfeeding or feeding expressed breast milk to their baby while taking precautions not to spread the virus to the infant, and it publishes breastfeeding guidelines for people with COVID-19 or who are being tested for COVID-19.

Breast milk is the best source of nutrition for most babies. A lot is still unknown about COVID-19. The decision to start or keep breastfeeding must be made by the person who will be breastfeeding, along with their family and doctor. A person who is positive for COVID-19, or who has symptoms of COVID-19 and is being tested for the virus, must take all precautions to keep from spreading the virus to their baby.
Precautions include washing your hands before touching your baby and wearing a face mask, if possible, when feeding at the breast. When expressing breast milk with a manual or electric breast pump:

- Wash hands before touching any pump or bottle parts.
- Follow recommendations for proper pump cleaning after each use.

If you can, have someone who is not sick feed the expressed breast milk to the baby.
Source: https://www.uchicagomedicine.org/forefront/coronavirus-disease-covid-19/pregnancy-and-coronavirus-covid-19
Edible / biodegradable packaging for food

Posted: 1 May 2018 | Dr Lizhe Wang, Biomaterial Scientist, and Dr Joe P. Kerry, Head of the Food Packaging Group, Department of Food and Nutritional Sciences, University College Cork

As traditional food packaging materials show shortcomings in their environmental impact and in their manufacturing reliance on non-renewable resources, alternative packaging materials and formats are needed now more than ever. A major group of alternative and novel materials with future commercial potential are those derived from utilised and under-utilised food ingredients, or food-grade ingredients. Consequently, they provide packaging materials which are not just biodegradable in nature, but also edible, thereby presenting greater opportunities for commercial application in a more sustainable manner. The potential advantages that such packaging materials have over the conventional packaging forms currently used by the food industry are therefore obvious.

Consumers demand eco-friendly packaging solutions

To date, the majority of food packaging materials and formats consist primarily of laminates comprised of plastics, metals, paper and/or glass. These materials are, and have traditionally been, manufactured and engineered for specific food packaging applications. However, consumer demands are changing with respect to food product purchasing, and consumers are becoming increasingly aware of the presence, role and implications of the packaging that surrounds their retail food purchases. Issues pertaining to sustainability, the environment, ethics, food safety, food quality and product cost are all becoming increasingly important factors for modern-day consumers when purchasing food products, and a number of these issues are also enforced by food packaging legislation.

Changes in consumer packaging demands are being informed by a continual drip-feed of negative information about conventionally used packaging. For example, reports1 claim that plastics formed 38 per cent of all food packaging materials used in the US last year, that most of this would end up in landfill or pose an environmental risk when processed in incinerators, and that only small amounts of this plastic waste would go on to be recycled. Additionally, most of the packaging used today is synthetic in nature and has, for over half a century, been derived from fossil fuels (from naphtha or natural gas). Commercial usage of such packaging for a whole host of goods, from food and pharmaceutical products through to electronic and fragile items, has relied on a plentiful supply of cheap packaging materials. However, there are now global concerns about the depletion of non-renewable raw materials for manufacturing plastics, and the global economy is rapidly approaching a scenario in which growing oil demand meets falling supply.

The need for sustainability in food packaging

It is estimated that at some point between today and 2030, oil extraction will have peaked and oil fuel for transportation will have become increasingly uneconomic2. The world's annual consumption of plastic materials has increased from approximately five million tonnes in the 1950s to nearly 230 million tonnes today.
The total production of plastics in Europe was approximately 57.5 million tonnes in 2005, representing 25 per cent of the total worldwide production of 230 million tonnes, a level similar to that of North America, at 24 per cent. The total demand for plastics in Europe was 47.5 million tonnes in 2005, of which 17.6 million tonnes (37 per cent) were used for manufacturing packaging materials. Since 37 per cent of plastics are used for packaging, it is not surprising that this category has attracted the most attention from policy-makers and environmentalists3. Food packaging is a significant part of that total, and thus even a small reduction in the amount of material used for each package would result in a significant cost reduction and may improve solid waste disposal4.

The current trend in new food packaging development is that, wherever possible, it should not only be natural and 'environmentally friendly', but also functional and cost-effective. Thus, the development of edible/biodegradable films and coatings for effective food packaging has generated considerable interest in recent years due to their potential to reduce and/or replace conventional, non-biodegradable plastics. As food manufacturers require packaging materials to be food grade, to maintain or enhance product shelf-life stability and safety, and to use nominal amounts of packaging, reduction or replacement with alternative biodegradable forms would clearly improve overall operating costs while reducing waste streams. EU regulatory pressures, coupled with indirect demands via consumer groups on EU food processors and packaging manufacturers to develop and utilise 'environmentally friendly' packaging systems, are increasing. Research in the area of edible/biodegradable films and coatings is a key and unique field of exploration within food packaging, with enormous commercial and environmental potential.

Edible/biodegradable packaging research

Scientific research on the production, quality and potential applications of edible/biodegradable films in food manufacturing has been carried out by several research groups worldwide and reported in research publications5-9. The enormous commercial and environmental potential of edible/biodegradable films and coatings has often been stressed5,10,11, and numerous publications have primarily addressed issues relating to mechanical properties, gas migration, and the effects of other factors on these properties, such as the type and content of plasticisers, pH, relative humidity and temperature6,8,10-15. However, research into edible/biodegradable films is still in its infancy; research on their industrial application has received more attention in recent years, but coverage is still quite limited. Researchers in the Food Packaging Group, Department of Food and Nutritional Sciences, University College Cork, Ireland, have developed several functional, biopolymer-based, edible/biodegradable films over the last few years.

Testing food-grade polymers

In our most recent study, more than 20 food ingredients were investigated for their film-forming abilities, and a number of optimised food-grade polymers were produced over a range of concentrations and processing parameters in order to evaluate their mechanical and permeability properties (tensile strength, puncture resistance, tear strength and oxygen/water vapour permeability)16-19.
The properties of the biopolymer films were found to be significantly affected by pH adjustment and corn oil addition, and laminated films showed the most desirable properties of all the film formats evaluated. Laminate films consisting of sodium alginate emulsion, gelatine emulsion and whey protein isolate emulsion had the highest tensile strength, puncture strength and tear strength values (55.77 MPa, 41.36 N and 27.32 N, respectively); films formed from gelatine emulsion (pH = 10.54, CO = 27.25 per cent) had the highest elongation values (351.12 per cent); composite films formed from WPI, G and SA (WPI:G:SA = 10.0:16.0:14.0) had the lowest oxygen permeability values (8.00 cm³·mm/m²·d·kPa); and bilayer films formed from gelatine emulsion and sodium alginate emulsion had the lowest water vapour permeability values (22.63 g·mm/kPa·d·m²).

Edible films formed from protein-polysaccharide powders (whey protein concentrate (WPC)-45, alginate, pectin, carrageenan, or konjac flour) were also investigated by our group at UCC. Results showed that films formed from co-dried powders had lower water vapour permeability and higher tensile strength, elastic modulus and elongation than equivalent films formed from dry-blended powders, and that there was potential to alter the physical properties of hydrophilic films by combining whey protein and polysaccharide components20. Previous work by our research group also involved the extrusion of pectin, pea starch and gelatine/sodium alginate blends for the preparation of sausage casings21,22.

The limitations of edible packaging

Generally, edible films have limited application, primarily because of their inferior physical characteristics. For example, single, lipid-based films have good moisture barrier properties but possess little mechanical strength23. Consequently, laminated films were formed by adhering two or more biopolymer films together. Laminated films are advantageous compared with single, emulsion-based biopolymer films owing to their enhanced barrier properties, and the creation of laminated structures has the potential to overcome these shortcomings by engineering edible/biodegradable films with multiple functional layers. Edible films and coatings based on water-soluble proteins are often water-soluble themselves but possess excellent oxygen, lipid and flavour barrier properties. Proteins act as a cohesive, structural matrix in multicomponent systems, yielding films and coatings with good mechanical properties. Lipids, on the other hand, act as good moisture barriers, but are poor gas, lipid and flavour barriers. By combining proteins and lipids in an emulsion or a bilayer (a membrane consisting of two molecular layers), the positive attributes of both can be combined and the negatives minimised.
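To make the reported trade-offs easier to scan, the sketch below simply tabulates the best-in-class values quoted above and notes which direction is desirable for each property. It restates figures already given in the text; the short film labels are informal shorthand, not the study's official sample codes.

```python
# A minimal sketch tabulating the best-in-class film properties quoted above.
# Values are transcribed from the text; labels are informal shorthand.

best_in_class = {
    # property: (film format, value, unit, desirable direction)
    "tensile strength":          ("SA/G/WPI laminate",          55.77, "MPa",             "higher"),
    "puncture strength":         ("SA/G/WPI laminate",          41.36, "N",               "higher"),
    "tear strength":             ("SA/G/WPI laminate",          27.32, "N",               "higher"),
    "elongation":                ("gelatine emulsion film",    351.12, "%",               "higher"),
    "oxygen permeability":       ("WPI:G:SA composite",          8.00, "cm3·mm/m2·d·kPa", "lower"),
    "water vapour permeability": ("gelatine/alginate bilayer",  22.63, "g·mm/kPa·d·m2",   "lower"),
}

for prop, (film, value, unit, direction) in best_in_class.items():
    print(f"{prop}: {film} = {value} {unit} ({direction} is better)")
```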
From the research conducted by the Food Packaging Group at UCC, the general characteristics of the developed edible/biodegradable films are as follows:

- Thickness of manufactured edible/biodegradable films ranges from 25μm to 140μm
- Films can be clear, transparent, translucent or opaque, depending on the ingredients used and the processing technique employed (Figure 1: a, b)
- Ageing specific film types under controlled environmental conditions (Figure 2) improved mechanical properties and gas barrier properties
- Storing films at ambient conditions (18-23°C, 40-65 per cent RH) for five years did not significantly alter structural characteristics
- Films formed from various ingredients can be relatively easily laminated together (Figure 3)
- Manufactured films can be labelled, printed on or heat-sealed
- Small variations in film microstructure (e.g. biopolymer phase separation) affect film properties (Figure 4)

Edible/biodegradable films versus traditional synthetic films (polyethylene terephthalate (PET)/polystyrene (PS))

In comparison with synthetic films (Figure 5), the edible/biodegradable film 4G had the best elongation (351.12 per cent), followed by PET (136.94 per cent). PS showed the highest tensile strength (77.17 MPa), followed by the edible/biodegradable film SAOGOWPIO (55.77 MPa). However, the puncture strength of the edible/biodegradable film SAOGOWPIO was nearly seven times greater than that of either synthetic film used (6.34 N and 6.36 N for PET and PS, respectively). Although the edible/biodegradable films showed overall lower mechanical strength than the synthetic films, they were still within an acceptable range and capable of holding most food products. One great advantage of edible/biodegradable films is that they generally have significant resistance to oxygen permeation (e.g. GWPISA-4: 8 cm³·mm/m²·d·kPa). Conversely, the water vapour permeability of edible/biodegradable films is in general higher than that of synthetic films, which is a major disadvantage, but one which is being addressed.

Application example of developed edible/biodegradable packaging on meat

Manufactured films were tested in food packaging/processing applications to assess their performance during food preparation and storage. Figure 6 shows one example of meat products prepared, wrapped and stored using edible film wrappings. From this one example, observed advantages of using such materials included:

- Increased cooking yield by 12 per cent
- Decreased moisture loss by seven per cent during storage (-18°C, 60 days)
- Improved product flavour and juiciness

The production of edible/biodegradable films is mostly at the research level, and the commercial use of such films is currently limited. Research is ongoing, however, and the use of edible/biodegradable packaging materials for foods is expected to have a bright future, as such packaging can offer natural protection to foods tailored to specific packaging requirements. Despite the advantages of edible/biodegradable materials that scientists have presented, a number of obstacles to their development have to be overcome, such as cost-effectiveness, improved water vapour barrier performance and technological application methods. Full commercial production and application is still some way off, but the potential that edible/biodegradable films possess is now being realised.

The authors would like to acknowledge Dr Mark A.E. Auty, Senior Research Scientist and Manager of the National Food Imaging Centre (NFIC), Teagasc Moorepark, Co. Cork, for assistance in preparing and imaging edible/biodegradable film samples (SEM & CLSM) for this article.
The authors would like to acknowledge Dr. Mark A. E. Auty, Senior Research Scientist, Manager of the National Food Imaging Centre (NFIC), Teagasc Moorepark, Co. Cork, for assistance in preparing and imaging edible/biodegradable film samples (SEM & CLSM) for this article.
- Hopkins (2009). Sustainable (“Green”) packaging market for food and beverage worldwide, 2nd edition. Shelley Carr, Rockville, Maryland, US.
- Hammond, R. (2007). The World in 2030: Summary and Initial Industry Response. Plastics Europe, Brussels, Belgium.
- Azapagic, A., Emsley, A. & Hamerton, I. (2003). Polymers: The Environment and Sustainable Development. John Wiley and Sons, Chichester, UK.
- Han, J.H. (2005). New technologies in food packaging: an overview. In: Han, J.H. (Ed.), Innovations in Food Packaging. Elsevier Academic Press, London, pp. 3-11.
- Cruz-Romero, M. & Kerry, J.P. (2008). Crop-based biodegradable packaging and its environmental applications. CAB Reviews: Perspectives in Agriculture, Veterinary Science, Nutrition and Natural Resources, 3(074), 1-25.
- Krochta, J.M. (1992). Control of mass transfer in foods with edible coatings and films. In: Singh, R.P. & Wirakartakasumah, M.A. (Eds), Advances in Food Engineering. Boca Raton, FL: CRC Press, pp. 517-538.
- Gontard, N. & Guilbert, S. (1994). Bio-packaging: technology and properties of edible and/or biodegradable material of agricultural origin. In: Mathlouthi, M. (Ed.), Food Packaging and Preservation. London: Blackie Academic and Professional, pp. 159-181.
- Krochta, J.M. & de Mulder-Johnston, C. (1997). Edible and biodegradable polymer films: challenges and opportunities. Food Technology, 51(2), 61-74.
- Rivero, S., Garcia, M.A. & Pinotti, A. (2009). Composite and bi-layer films based on gelatine and chitosan. Journal of Food Engineering, 90(4), 531-539.
- Kester, J.J. & Fennema, O.R. (1986). Edible films and coatings: a review. Food Technology, 40(12), 47-59.
- Debeaufort, F., Quezada, G.A. & Voilley, A. (1998). Edible films and coatings: tomorrow’s packagings: a review. Critical Reviews in Food Science and Nutrition, 38, 299-313.
- Krochta, J.M. (1997). Edible composite moisture-barrier films. In: Blakistone, B. (Ed.), Packaging Yearbook. National Food Processors Association, Washington, DC, pp. 38-51.
- Cuq, B., Gontard, N. & Guilbert, S. (1994). Edible films and coatings as active layers. In: Rooney, M.L. (Ed.), Active Food Packaging. London: Blackie Academic and Professional, pp. 111-142.
- McHugh, T.H., Huxsoll, C.C. & Krochta, J.M. (1996). Permeability properties of fruit puree edible films. Journal of Food Science, 61, 88-91.
- Gennadios, A. (2002). Protein-based films and coatings. Boca Raton, FL: CRC Press, pp. 1-12.
- Wang, L.Z., Liu, L., Holmes, J., Kerry, F.J. & Kerry, J.P. (2007). Assessment of film-forming potential and properties of protein and polysaccharide-based biopolymer films. International Journal of Food Science and Technology, 42(9), 1128-1138.
- Wang, L.Z., Liu, L., Holmes, J., Huang, J., Kerry, F.J. & Kerry, J.P. (2008). Effect of pH and addition of corn oil on the properties of whey protein isolate-based films using response surface methodology. International Journal of Food Science and Technology, 43(5), 787-796.
- Wang, L.Z., Auty, M.A.E. & Kerry, J.P. (2009a). Physical assessment of composite biodegradable films manufactured using whey protein isolate, gelatine and sodium alginate. Journal of Food Engineering, doi:10.1016/j.jfoodeng.2009.07.025. Available online 19 August 2009.
- Wang, L.Z., Auty, M.A.E., Rau, A., Kerry, J.F. & Kerry, J.P. (2009b). Effect of pH and addition of corn oil on the properties of gelatine-based biopolymer films. Journal of Food Engineering, 90, 11-19.
- Coughlan, K., Shaw, N.B., Kerry, J.F. & Kerry, J.P. (2004). Combined effects of proteins and polysaccharides on physical properties of whey protein concentrate-based edible films. Journal of Food Science, 69(6), E271-E275.
- Liu, L., Kerry, J.F. & Kerry, J.P. (2005). Selection of optimum extrusion technology parameters in the manufacture of edible/biodegradable packaging films derived from food-based polymers. Journal of Food, Agriculture & Environment, 3(3&4), 51-58.
- Liu, L., Kerry, J.F. & Kerry, J.P. (2006). Effect of food ingredients and selected lipids on the physical properties of extruded edible films/casings. International Journal of Food Science and Technology, 41, 295-302.
- Debeaufort, F., Quezada-Gallo, J.A., Delporte, B. & Voilley, A. (2000). Lipid hydrophobicity and physical state effects on the properties of bilayer edible films. Journal of Membrane Science, 180, 47-55.
Game programming is a multi-billion dollar industry that is among the fastest growing in the world. The video game industry needs trained programmers who can produce optimized and efficient code for computers, game consoles, web pages, cell phones and other devices. If you are creative and like to control the action, Game Programming is for you.

Program content introduces you to fundamental game concepts, including introductory computer programming in C++, C#, scripting languages, web development, and database storage techniques. You will learn the mathematical calculations behind advanced programming techniques required for sophisticated graphics, A.I. (artificial intelligence) and networked multiplayer games. You will broaden your knowledge by examining and implementing 3-dimensional games using industry-standard libraries and game engines. Portability, modularity and efficiency of code are emphasized at all stages of instruction, in addition to proper documentation and team communications. Graduates of the Game Programming program develop a sound background in software design methodology and programming.

Study in Ireland

Graduates of this program can turn their diploma into a degree. Learn how you can continue your education on our Study in Ireland page.

Students focus on the C++ and C# programming languages throughout the full two-year program, acquiring skills the industry demands. Students study Network Programming, Artificial Intelligence, Graphics Programming and a leading Game Engine. In the final semester, students form teams to complete a culminating project in which they develop and release games to a professional market using industry project management software and development environments.

This course is designed to help students develop and practice the communication skills needed to succeed in college and workforce environments. Emphasis is placed on improving foundational communication strategies (reading, writing, listening, and speaking) and on developing research and critical thinking skills.

In this course, students learn the fundamentals of a current programming language used in the gaming industry. Topics addressed include standard software design methodologies, custom design of simple 2D games, and various programming techniques. Through hands-on exercises in C++ programming, students create and debug games which implement variables, functions, conditions, loops and classes.

Throughout this course, students are presented with an overview of the video gaming industry. Through lecture and lab activities, students discover many of the concepts involved in gaming, such as types of video games, the roles of members of a gaming team, the game development life cycle and the technical components required to produce high-quality video games. Other topics examined are the impact of playing video games on one's life, legal and ethical considerations, and professional opportunities available in the gaming industry.

This course introduces students to the creation of 2D digital images, 3D game assets and the design of levels using those assets. Topics include sprite sheets, character creation, polygon modeling, texturing, lighting, animations, building expansive landscape environments and exporting assets to game engines. Course learning activities center on the game development pipeline, workflow and best practices. The game art skills learned in this course are used to enhance game projects throughout the program.
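As an engine-agnostic illustration of how the sprite sheets mentioned above are typically consumed in code, the short sketch below slices a sheet into per-frame rectangles. It is written in Python for brevity; the dimensions and function name are hypothetical examples, not material taken from the course.

```python
# Compute the (x, y, width, height) rectangle of every frame in a sprite
# sheet, reading frames left-to-right, top-to-bottom.
def sprite_frames(sheet_w, sheet_h, frame_w, frame_h):
    frames = []
    for y in range(0, sheet_h - frame_h + 1, frame_h):
        for x in range(0, sheet_w - frame_w + 1, frame_w):
            frames.append((x, y, frame_w, frame_h))
    return frames

# A 256x128 sheet of 64x64 frames yields 8 frames (4 columns x 2 rows);
# an animation system then draws one rectangle per tick.
print(len(sprite_frames(256, 128, 64, 64)))  # 8
```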
Students learn C++ object-oriented programming and the creation of programs for 2D games using C++ libraries. In this course, students are taught intermediate programming concepts available through the use of C++. Students learn about bitwise operations, file streaming, exception handling, and string manipulation. Students are also introduced to recursive functions and learn how to solve programming problems recursively. Through lab and class activities, students create projects which interact with a game controller and save/load data from text files.

This course familiarizes students with the fundamental components of popular game engines in order to facilitate the making of robust games. Students learn how to create and modify properties of game objects such as models, environments, lights, cameras and sound. Students learn how to apply materials, textures and shaders to enhance the look of their game environment. Using leading-edge game engines, students discover how adding gaming scripts to their game objects produces dynamic behaviour. Through a series of labs, students build 2D and 3D mini-games that respond to various input devices and can run on different platforms: PC, web and mobile.

This course provides a review of fundamental laws and operations in algebra and trigonometry: linear, quadratic, exponential, logarithmic, and trigonometric functions, related graphs and equations, vectors, and their applications. Through in-class presentations, learning activities and group work, students are introduced to various math concepts, giving them the knowledge necessary for future technical courses.

In COMM 234, the emphasis is on reinforcing writing, reading, and research skills for a variety of professions. Short reports, summaries, resumes, and cover letters are used to enhance writing and analytical skills. American Psychological Association (APA) format and documentation is reinforced. Oral communication is developed through a variety of formal and informal speaking activities.

This course introduces students to concepts of render programming through the use of shaders and graphics libraries. Throughout the course, students utilize lab time and in-class activities to explore the stages of the graphics pipeline, the communication of data between the CPU and the GPU, and normal and UV data for the purpose of creating 3D objects within game worlds. The concept of converting 2D screen pixels to 3D scenes is explained, preparing students for a greater understanding of graphics programming and the render programmer role.

In this course, students are introduced to the concepts of artificial intelligence used to make games more engaging to players. Through lab exercises and use of a leading game engine, students bring life to enemies within games by applying strategies learned in the lectures, such as finite state machines, pathfinding, behaviour trees and flocking. Prerequisite(s): GAME202 + GAME212

Throughout this course, students explore the concepts of networking and its uses within games. Through lecture and lab exercises, students analyze how network traffic travels from one device to another using sockets. Students discuss different network topologies and their effectiveness for games of different genres. Server architecture design for security and data integrity is emphasized when implementing client-server communication through Remote Procedure Calls (RPC) using HTTP requests. Students use a current game engine to communicate from clients to a server and create a multiplayer game experience. Prerequisite(s): GAME202 + MATH10
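To give a flavour of the socket-level exchange the networking course describes, here is a minimal echo server and client. It is written in Python for brevity rather than the C++/C# used in the program, and the host and port values are arbitrary placeholders, not course material.

```python
import socket

# Serve a single connection: read one message and echo it back unchanged.
def run_echo_server(host="127.0.0.1", port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()   # blocks until a client connects
        with conn:
            data = conn.recv(1024)  # read up to 1 KB from the client
            conn.sendall(data)      # send the same bytes back

# Connect, send a message, and return the server's reply.
def send_message(msg, host="127.0.0.1", port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(msg.encode())
        return cli.recv(1024).decode()
```

Run the server in one process and call send_message("ping") from another; a real multiplayer game would layer message framing, serialization and matchmaking on top of this basic exchange.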
In this course, students are introduced to advanced C# programming concepts used in scripting with the Unity game engine. By carrying out a series of labs, students implement advanced programming constructs such as generics, object-oriented design, interfaces, extension methods, co-routines, delegates, and more. Students learn about the singleton architectural design pattern to create game managers which control the state of their game objects. Students are familiarized with industry-standard source control software and learn to work collaboratively in groups to build small games.

Mathematics for Games

Students learn the basic concepts of matrices and coordinate systems for use in 3D games. This course provides applications of mathematical concepts to calculate physics quantities such as distance, height, time, velocity and acceleration in 1D, 2D, and circular motions. Average and instantaneous rates are calculated by different methods. Vectors and trigonometry are applied to horizontal, vertical and projectile motions, and to forces using Newton's laws. Work, conservation of mechanical energy, and collisions are explored to solve motion problems. Physics simulations are used to demonstrate object motion, physics laws and principles, and to compare physical quantities.

Students are introduced to scripting languages and learn how to use them to interface with their computer and gaming code. Students learn the basic syntax of the Python programming language by completing a series of labs. Students create Python scripts to automate tasks on their computer, such as controlling the keyboard and mouse, sending emails, crawling websites and parsing documents. Students use Python scripts to interface with large software components and existing games. Students then learn how to write Windows command scripts to help automate daily tasks involving file operations, the task scheduler, Windows processes and services, and the build pipeline.

In this course, students are formed into teams and must work together to ensure a solid prototype is produced, showcasing many of the skills acquired from previous courses in the program. Students prepare game design documents (GDDs) and create playable prototypes to prove out their game's design and functionality. Students set up automated build systems for their projects and work collaboratively using online documentation, project management and source control software. Students perform peer code reviews using web interface tools to collectively provide feedback to each other as well as maintain the coding and project standards supplied. All work performed in this course is designed to emulate a real-world game development studio. The hands-on lab time provides students with an opportunity to experience, and solve, the day-to-day team collaboration issues that arise in development.

This course allows students to explore the advanced effects made possible by different shaders within the graphics pipeline. Through lab exercises, students leverage the internal capabilities of shaders to produce visual effects such as fog, grass and terrain generation, as learned in the lectures. By leveraging camera systems and viewports, students analyze the concepts of raycasting and shadow mapping. Analysis of current professional game engines assists students in understanding the impact graphics programming has on creating virtual worlds. Prerequisite(s): GAME300 + MATH 21
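The raycasting mentioned in the shader course reduces to simple algebra: a ray o + t·d hits a sphere of centre c and radius r where |o + t·d - c|² = r², a quadratic in t. The sketch below solves that quadratic in plain Python; it is an illustrative stand-in for the engine-provided raycasts the course actually uses, and all names and values are hypothetical.

```python
import math

# Return the nearest positive hit distance t along the ray, or None.
def ray_hits_sphere(origin, direction, center, radius):
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)                    # d . d
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))  # 2 (o-c) . d
    c = sum(o * o for o in oc) - radius * radius         # |o-c|^2 - r^2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                        # the ray misses entirely
    near = (-b - math.sqrt(disc)) / (2.0 * a)
    far = (-b + math.sqrt(disc)) / (2.0 * a)
    if near > 0:
        return near
    return far if far > 0 else None        # origin inside the sphere

# A ray from the origin along +z meets a unit sphere at (0, 0, 5) at t = 4.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```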
This course provides students with the essential knowledge of data structures and algorithms required to build video games which run more efficiently. Students learn how to design and implement custom data structures such as arrays, queues, stacks, linked lists, trees, and graphs. Students gain practical experience using the standard C++ data structures and algorithms provided by the Standard Template Library (STL). In a series of labs, students recognize which data structure and algorithm are best suited to solve a particular problem. Students evaluate the theoretical time and space complexity of algorithms and determine their actual performance using benchmarking methods. Prerequisite(s): GAME202 + MATH10

Ontario Secondary School Diploma (OSSD) with the majority of Grade 11 and 12 courses at the C, U or M level, including the following prerequisites:
- Grade 12 English at the C or U level
- Grade 12 Math at the C, U or M level
For OSSD equivalency options, see Admission Requirements. If you are missing prerequisite courses, enroll in the Career/College Prep program - free for Ontario residents who are 19 years or older.

Students of the Game Programming program will require access to a Windows 10 capable PC with the following minimum specs:
- Core i3 (5th gen) CPU or higher
- 8 GB RAM or higher
- 250 GB hard drive or larger (ideally SSD)
- Graphics card capable of DirectX 10 or higher
- A stable internet connection

Our Cornwall campus has a brand new library, new health simulation labs, renovated student common areas and more to make your transition to college life an easy one.

Ubisoft Montreal on our graduates: “We are delighted with the unique training they receive at the College. It provides the training required for critical technical needs in production systems and software development support that can be tricky to staff.”
Recruiting Team Lead, Ubisoft Divertissements Inc., 5505 St-Laurent, Montréal. Tel: 514 490 2079

Graduates find employment with game studios in Canada and throughout the world. We are proud to have had graduates in recent years continue on to AAA studios such as Ubisoft and Eidos (Square-Enix). This industry is experiencing job growth that is outpacing the training of potential workers. Canada has six of the top 50 game studios in the world, including one of the top five studios based on product sales. Potential positions at these studios include:
- Generalist Programmer
- Audio Programmer
- Build / Pipeline Engineer
- Database Programmer
- Online / Network Programmer
- Front End Developer
- Graphics Programmer
- Tools Programmer

613.933.6080 ext. 2120
International Students Contact: +1 613.544.5400 ext. 5514
by Vijay Chalasani

This article appears in the September 2020 issue of EMAg, the Magazine of Early Music America.

Black Lives Matter. This simple yet powerful phrase, which first captivated the world several years ago, has seen a resurgence in recent months. We have all been forced to reckon with what these words mean to us as individuals, no matter the color of our skin, and as members of a society that has not fulfilled its promise to create systems that benefit all people equally. Emboldened by this historic moment, those who had spoken up before spoke louder, and many—like myself—who had previously observed quietly from the sidelines discovered they could no longer be silent as a new awareness of injustice came to the fore. We cannot continue on the same paths as before; our world must change.

As the world has grappled with the question of how to change for the better, the practice of anti-racism is one answer that has been suggested. What is anti-racism? It is the conscious effort to understand and work against the negative impacts that actions and systems have on BIPOC—Black, Indigenous, and People of Color. Anti-racism works to correct historic injustices by recognizing that it is not enough to simply not be racist—you must instead be actively anti-racist. It is important to distinguish between being not racist and being anti-racist because, as EMA founding member, board member, and gambist Patricia Ann Neely says, “We know that there are subtle forms of racism and that those who are the perpetrators may not even know they are committing the offense.” While we can claim to not be racist, implicit biases sometimes influence our decisions without us ever knowing, and historically unequal systems can be perpetuated by those who believe them to be creating opportunities for everyone. Anti-racism comes into play by advocating for active work against these biases and systems. Anti-racism is “morally right, basic human decency,” says Reginald Mobley, countertenor and recently elected EMA board member. As Henry Lebedinsky, keyboardist and co-artistic director of Pacific MusicWorks in Seattle, says, “It is a matter of justice.”

Classical music is no stranger to accusations of a lack of diversity, and early music, as a smaller and somewhat insulated sub-group within classical music, has an even greater problem with this issue. A 2016 report from the League of American Orchestras showed that minority musicians represented less than 15% of modern orchestra membership, of which only 2.5% total were Hispanic and 1.8% were Black. Unfortunately, no formal survey of this sort exists (yet) of early music ensembles. However, in my own experience and those of the musicians I interviewed for this piece, there are often no more than one or two musicians of color, if any, onstage at an early music concert. If we truly believe the arts and early music are for everyone, then what active anti-racist steps can we take to fix this? How can we make early music ensembles better reflect the diverse communities that they claim to serve? “A lack of diversity stems from a lack of access,” says Maria Romero, baroque violinist and assistant professor of violin at the Blair School of Music at Vanderbilt University in Nashville. Issues of access can begin as early as childhood and last throughout professional careers.
Bass-baritone and composer Jonathan Woody points out that ultimately the “responsibility falls to the gatekeepers.” Gatekeepers can come in many forms: They are the leaders of ensembles and directors of arts organizations, but they are also the principal donors and season ticket holders, educational outreach organizers and festival faculty, publishers and colleagues. Anyone who can “put their thumb on the scale” and tip it toward change, as Mobley puts it, has an obligation to use their position to create a more diverse, equitable, and inclusive world. A conscious effort should be made to understand how the language we use can potentially have a negative impact when directed at an individual of color or injected into a conversation. “Recognition of these issues is the first step toward learning how to be anti-racist,” says Neely, who chairs EMA’s Inclusion, Diversity, Equity, and Access (IDEA) task force.

When we know we need to change, what can we do? As gatekeepers responsible for improving equity through access, Lebedinsky advocates for “proving that our actions—not just our policies—illustrate our commitment to inclusion.” This is anti-racism in practice: action that works to correct racial injustice. What can each of us, in our numerous roles as gatekeepers engaging with early music, do to actively correct racial injustices in early music? The following collected suggestions will help begin the conversation, but they are in no way perfect solutions. It is ultimately up to each individual to decide what can be done in their own lives to be anti-racist.

Hire musicians of color to perform and teach

“We need to start with reviving music programs in schools that have abandoned them in order to make early music accessible in every community that wishes to take part,” says Neely. “We should reach out to advocacy organizations, such as Sphinx, and institutions of higher learning, including HBCUs (Historically Black Colleges and Universities), to identify promising musicians of color and encourage them to take a chance in this field.” Early musicians of color are active in the field, despite not being seen on stages as often as they could be. Making the extra effort to find musicians of color to perform in concerts is an important part of improving equity and inclusion in early music. As Lebedinsky says, “This ultimately means making the effort to go outside our circle of favorite and trusted friends and developing honest connections with performers of color.” This is an especially important idea to keep in mind as we learn to make music within the restrictions of the COVID-19 crisis. If we are creating an online collaboration with musicians from many places, why not include musicians of color? When we return to live concerts and we’re not able to fly in our favorite players from far away, why not hire local musicians of color instead?

A word of caution: While hiring musicians of color is extremely important, I advise against falling into a couple of easy pitfalls when doing so. The first is tokenism: If you hire one or two musicians of color and then proclaim to the world how diverse and progressive you are, chances are those musicians will feel alienated, and the world will see you as disingenuous. The second danger can occur if musicians of color are told they were hired primarily because of their backgrounds as people of color. But wait. If I’m trying to hire musicians of color, why can’t I tell them that’s why I hired them?
Put yourself in their shoes: No one wants to be told they are a diversity hire instead of being hired on their professional merit. A musician of color who is told this will doubt their musical qualifications, regardless of how much diversity played into their hiring, and the other musicians might resent the hiring of someone who was not brought in for merit alone. Despite these hazards, it is still worthwhile to bring in musicians of color whenever possible to help balance the historic racial inequalities of hiring and to bring new voices to our performances.

Hire staff and recruit board members of color

We all know that what happens onstage is only a small percentage of the work that goes into presenting any musical activity. The administrative staff and board shape the direction of every arts organization, no matter how small or large. When we include diverse voices in the decision-making process, we can dramatically change the ideas that make it into discussions and the resulting paths that are taken by organizations on and off the stage. The same cautions against tokenism and diversity-hire treatment also apply here.

Include equity, diversity, and inclusion explicitly in organizational mission statements

If we truly believe early music is for everyone, then our mission statements need to reflect these values. Updating mission statements to include equity, diversity, and inclusion makes absolutely clear that an organization will actively work toward these goals in everything it does—and makes it easier to change course if it is ever apparent that an organization’s activities are working against these core values.

Perform music by composers of color

“It is essential for institutions, ensembles, and organizations to work toward expanding the early music canon to be much more inclusive of historically underrepresented composers,” says Romero. Programming music by composers of color is a crucial step toward equity in early music concerts. This isn’t to say that we should stop performing the pieces we love written by our favorite composers but rather that we can work toward inclusiveness by featuring music from composers of non-European (and non-male) backgrounds alongside the standard canon. On the other hand, willfully ignoring diverse repertoire reinforces inequality. Performing works by composers of color will not only support equity but also teach us new ways to appreciate our canonical works and composers. There are rich repertories of music from colonial Latin America, and while repressive racism of the past meant that there were fewer composers of color living in Europe, people like Joseph Boulogne, Chevalier de Saint-Georges show us that these composers were there—and highly successful in their day—if you look closely for them. Another aspect of the conversation around diverse repertoire has to do with what we classify as early music in the first place. If we can expand our definition of early music past the outdated limits of “music before 1800,” we suddenly open up a world of composers of color from the 19th century whose music can be played in a historically informed style. Likewise, new compositions for period instruments can take our familiar sound world into today’s contemporary world.
In March, Mobley was hired by the Handel and Haydn Society as their first programming consultant to help diversify their repertoire, and he argues that “[we] should accept that there are not many composers of color, but opening ourselves up to performing new music [by composers of color] for old instruments can help balance the scales and even out disparities.” Mobley encourages organizations to follow the model of the Handel and Haydn Society so that others can take the necessary steps to diversify their performances, and he urges us to “be free with sharing ideas” about diverse repertoire and program concepts.

Foster educational opportunities for students of color at all levels

This is one of the most important areas of access that we must address to include BIPOC in early music. Support for students of color must begin in early childhood and go all the way through collegiate education and emerging professional development. Organizations and ensembles can support this by prioritizing educational engagement at schools with a majority of students of color, especially if they don’t have regular access to music education, and in particular at the elementary level. Later on, we must work to help students of color feel included at universities and conservatories, which are still predominantly white institutions. Financial support, while critical for helping get students of color in the door to these opportunities, may not help the students stay and complete their education or feel comfortable entering the professional world. “Students of color are discouraged from becoming part of white institutions,” Woody argues. We can help our students of color feel like they can be a part of these worlds by making our curriculums more diverse and by training educators in equity, diversity, and inclusion. Changing recruitment strategies by looking toward new sources of potential early musicians, such as HBCUs or schools in majority-BIPOC neighborhoods, can over time change the student body make-up to better reflect the faces we want to see onstage.

Invest in communities of color

“Where do we focus on presenting our concerts?” Mobley asks. “Can we present more in Black and Brown communities? Why are we not facing our inherent biases and those of our patrons?” There is a ripple effect that goes beyond the opportunities to recruit new audience members when we move to new neighborhoods; it also means that we are supporting other businesses in the area when audiences go out before or after the concerts. Establishing a presence in the community through performances can also help support the establishment of education and engagement programs, which will in turn feed into more audience development and access for young musicians of color who eventually want to see themselves on stage. As we look forward to a post-pandemic period of returning to live concerts, we must rethink who we want our audiences to be and how we can be as inclusive as possible. When we invest in our communities, our communities will invest in us, and our audiences will grow.

Encourage organizations you engage with to be anti-racist

No matter what our role is in our interactions with arts organizations, we all are gatekeepers in our own ways. We each have a responsibility to keep organizations accountable and on the path of anti-racism. Audiences and donors have a key responsibility and ability to effect change, despite not necessarily being part of the day-to-day activities of an organization.
If your favorite arts organization isn’t hiring musicians or administrators of color, you can tell them you won’t donate again until they change. If your local ensemble or festival isn’t programming music by composers of color, you can tell them you won’t attend their performances or classes until they change. The voice of one person might be enough to push these organizations to do things differently, but collectively we can help create a tide of change that will make our beloved early music more equitable and inclusive.

As we consider these actions, it is important to remember that we are not trying to invite people of color into our sphere of early music but rather trying to create a new world of early music, where everyone has a chance to experience this incredible repertoire we love. This is the difference between being not racist and being anti-racist: Rather than standing idly by and wondering why those different from us are not present, despite our invitations and openness, we should instead strive to actively build a space for everyone. Creating this new space for people of color in early music will also help support other vulnerable members of the community, like gender-diverse, low-income, and disabled people, creating a truly diverse and equitable world of early music. “We need to make early music accessible to every community,” Neely says firmly.

In the end, we must be anti-racist in early music because early music is worth it. “Early music has unparalleled power to inspire us, move us, and teach us about how to be in relationship with one another,” says Lebedinsky. Woody furthers this sentiment, saying, “I do early music because I love it. Inclusion is important because every voice could bring something beautiful to the experience of early music. This music is for everybody, and we have to work to bring it to everyone.”

South Asian-American violist Vijay Chalasani is Assistant Professor of Viola at the University of Northern Colorado, where he also directs UNC’s contemporary music ensemble, the UNCommon Ensemble. Equally at home on modern and historical violas, Chalasani has been featured as a soloist in performances ranging from Telemann and Graun to Walton and Feldman. He is a founding member of the Northern California-based baroque chamber orchestra Sinfonia Spirituosa and has appeared on period instruments with American Bach Soloists and Boulder Bach Festival. Chalasani’s research on original viola pedagogy and performance practices of the 19th century has led to performances and conference presentations at the Universities of Oxford and Huddersfield (UK). A native of Northern California, he lives in Colorado with his wife, baroque oboist Ruth Denton.
At the end of 2002, the first cases of severe acute respiratory syndrome (SARS) were reported, and in the following year, SARS resulted in considerable mortality and morbidity worldwide. SARS is caused by a novel species of coronavirus (SARS-CoV) and is the most severe coronavirus-mediated human disease that has been described so far. On the basis of similarities with other coronavirus infections, SARS might, in part, be immune mediated. As discussed in this Review, studies of animals that are infected with other coronaviruses indicate that excessive and sometimes dysregulated responses by macrophages and other pro-inflammatory cells might be particularly important in the pathogenesis of disease that is caused by infection with these viruses. It is hoped that lessons from such studies will help us to understand more about the pathogenesis of SARS in humans and to prevent or control outbreaks of SARS in the future.
A new city created by the defence industry

The new city of Karlskoga was born on a cold New Year’s night in 1940 – when it was the largest town in Sweden in terms of its area. The growth of Bofors as a defence industry was the engine behind its rapid development. It can be said without exaggeration that it was the company that created the city. This gave rise to the saying: “What’s good for Bofors, is good for Karlskoga”.

Originally Bofors was a fairly small ironworks – one of the many works built in the forestry and mining districts of Värmland and Bergslagen. It was not until well into the 1900s that Bofors started to grow into an industry of some considerable size. But it long retained the old industrial spirit and the company had its own sewers, refuse collection service, electric power, public baths – and even its own fire service and its own policeman, who was called Jonsson and lived above the Bofors tailor’s shop!

But the rapid development of Bofors as a defence industry, particularly after the First World War, resulted in a rapid increase in the number of employees and there was a risk of the area around the works being transformed into a ‘Wild West’. The small municipal community of Karlskoga, where conditions were quite orderly, lay west of the works. But the capacity of the fresh and waste water infrastructure was becoming inadequate. And a disorganised settlement without proper streets and neighbourhoods grew up around the works and the municipal community. In addition, there was a crying need for more housing. A close examination of the population statistics paints a good picture of the dash to Karlskoga in the 1930s and 1940s. In 1936, there were 19,027 people living in the municipality – by 1945 that number had increased to 29,464!

Choosing a new city

The problem required a solution and discussions went back and forth. It was obvious that a new city needed to be created, but where should they draw the boundary? Should the city only include the municipal community and the ironworks, or should it cover a larger area? The discussion resulted in the entire municipality of Karlskoga becoming a new city – with a surface area larger than London and the largest in Sweden (until Kiruna exceeded it some years later). The issue of its name also generated a great deal of interest and the main national newspapers ran headlines such as ‘Sweden’s largest city’, ‘The forest that wants to become a city’, ‘Is Bofors becoming a metropolis?’ and ‘Karlskoga – a mega-city’. Many local residents wrote letters to the editor and the subject attracted rhymes and topical articles in the evening papers. One proposal that was aired was the name Karlsfors, combining Karlskoga and Bofors. There was a lively debate in the municipal council, which was held in the assembly hall of the old secondary school. In the end the well-established name of Karlskoga was chosen after all. In the midst of the world war and in a time of crisis the birth of the city was celebrated on 1 January 1940. ‘Home-made’ Bofors guns fired the salute on that cold New Year’s night. The new city gradually took over many of the company’s old tasks, but there was also close cooperation on practical issues, including the acute shortage of housing.

Company commitment to housebuilding

Early on Bofors had been keen to help find various solutions to the housing problems of its employees.
It was a natural commitment for the traditional ironworks, and may have been necessary to attract a labour force to settle down in the sparsely populated areas where the works were often located. In the late 19th/early 20th century the company became interested in the Swedish movement for home ownership, and the Bofors Arbetares Byggnads Aktiebolag was formed in 1903, which went on to help workers to acquire a home of their own. The basis of the Swedish movement for home ownership was that small farmers, workers and junior civil servants should be given advice on obtaining their own, comfortable homes surrounded by a small garden or patch of land – marking the beginnings of what subsequently became residential districts. In order to facilitate construction the company had a beautiful area in Sandviken planned, which contained 61 plots on the slopes down towards Lake Möckeln. Areas such as Stackfallsskogen, Karls Åby and Stackfallsängen were added later.

Cooperation with the city

After Karlskoga became a city, the ties between the company and city became closer and closer on housing issues and they jointly formed Karlskoga Bostadsaktiebolag (Karlskoga Housing Company), with the city and the company putting up half the share capital each. Bofors also initiated the creation of various housing foundations and supported HSB-type houses (HSB = Hyresgästernas Sparkasse- och Byggnadsförening – ‘Tenants’ Savings and Construction Association’). Under the auspices of the Bofors housing foundation more than 600 apartments and 300 ‘bachelor pads’ were built after the Second World War with names such as Hultebo, Gasellen and Malmhagen. Bofors also helped to provide grants and interest-free loans to get homes constructed by other players, and more than 1,000 apartments were created in this way.

Bofors made its mark on the lives of the residents of Karlskoga, from the cradle to the grave. The company made significant investment in promoting various activities for the general public. In 1908, the Bofors meeting house was built with the following motto over its doors: “Greater knowledge lights the way, stronger forces ease the journey”. The meeting house had a large hall for lectures, concerts and other events and a modern gymnasium. After the Second World War the meeting house was rebuilt according to drawings by the architects Backström and Reinius. The company also supported sporting life in Karlskoga and many not-for-profit organisations in the cultural and charitable sector. The company paid for the Bofors sports ground in its entirety, including the fences and grandstands, and also had a hand in the creation of the ice hockey rink.

Major focus on children and young people

There was a particular focus on children and young people. In 1918 Bofors built its own nursery or ‘kindergarten’, where the children were looked after while their parents were working. From 1942-43 a youth centre was built, which was the first of its kind in Swedish industry. A key aim of the facility was to provide young people with worthwhile leisure activities and with a sanctuary outside the home. Many families lived in cramped conditions as a consequence of the lack of housing. The drawings for the youth centre were drawn up by the architect Gustaf Birch-Lindgren. It had a yellow brick façade and was built over three floors with a furnished basement. The basement housed a large handicraft room, weaving room, hobby room, playroom and changing rooms, toilets and shower rooms.
The manager lived one floor up, on the ground floor, and there was also a fairly large coffee shop with a reading room and kitchen areas as well as premises for school kitchens. Another floor up there was a sewing room, library, art room, reading and writing room as well as a number of study rooms. On the top floor there was a meeting room with seating space for 250 people, a full stage area, sound equipment etc. A glance in the archives indicates there was lively activity in the youth centre in the period from 1945-46. Five study circles were held for ‘young adults’: English, social conversation, metalworking, radio technology and home furnishing. There were various practical groups, including amateur theatricals, an athletic club, bookbinding, old-time dancing, fancy needlework, painting, knitting and even household courses for men!

The company’s apprentices’ workshop was located next to the youth centre and was built at the same time. A kind of apprenticeship had existed in every age – knowledge of a profession was handed down from the older generation to the younger one. But in 1918 the organisation of apprenticeship courses began in line with modern requirements and with the growth of the business. The new apprentices’ workshop was built to house 75 apprentices, who spent three years of their four-year apprenticeship there. During the last year they worked outside in the different workshop departments. For young people the apprentices’ workshop was an affordable school, since the education was free and they even received some pay. The advantage for Bofors was that it was continually able to add to its staff from a well-trained workforce. Most of the apprentices remained in the company’s service.

An unusual transformation

Within a few years Karlskoga underwent a transformation that is almost without precedent in Sweden. In a short time the new city was built up with a town hall, fire station, hospital, hot baths, new schools, a district court and police house, post office, restaurants and cinemas etc. A large number of new homes were built, as already mentioned. In 1953 the People’s House or community centre was finished and there was also a department store, Aveny. Bofors was a strategic industry during the war years and there was an enormous expansion afterwards, with a great influx of people to the city. A well-known saying is attributed to Frans Andersson, who was employed at the company from 1880 to 1935. His last position was as chief supervisor. He was also a committed local politician. Andersson coined the phrase: “What’s good for Bofors, is good for Karlskoga.” The saying matched the facts. The city of Karlskoga was dependent on the changing business cycles of Bofors, for better or worse. In the early 1970s Bofors had more than 10,000 employees and the city had more than 40,000 inhabitants. But when the defence industry fell on hard times in the 1980s and 1990s this naturally affected the municipality as well. There was a drastic reduction of more than 10,000 in the number of inhabitants and many apartments were demolished. But times have changed. In the 2010s the number of inhabitants is increasing again and there is once again a shortage of housing in Karlskoga. If Bofors had not developed into an international defence industry, as part of the current-day Saab, Karlskoga would probably still have been a small parish village with rolling farmlands and forests. The fortunes of the company and the city are therefore closely intertwined.
PinPoint Guest Blog – by Corey Bleich, EdgePoint Learning

Converting Your Current Online Training to Location-Based Microlearning

For many of today's workers whose jobs don't involve a desk, it's critical that they have access to the information they need when and where they need it. These employees don't need drawn-out (and frequently boring) training courses. They need training when and where they need to use the information. That's where location-based microlearning comes in.

How ACME Corp Improved Safety

As a result of a safety review, ACME Corp learned that the managers and line workers in their manufacturing facilities were dangerously unaware of how to identify combustible dust hazards. Even though every employee was required to complete a half-hour combustible dust safety eLearning course each year, they did not appear to retain the information or to put it into practice. In response, ACME took a new approach. They used their existing eLearning as the basis for creating several short microlearning objects. These short training topics were created specifically for use on the employees' smartphones and tablets, giving them access to the information wherever they were. These microlearning objects were not just used to provide training; they also served as on-the-job resources, allowing workers to quickly review or find information right when they needed it. ACME saw a significant increase in their workers' knowledge about combustible dust, and the number of reports of combustible dust hazards increased by 25% in the six months following the introduction of microlearning. At the suggestion of one of the factory managers, ACME implemented a technology that allowed relevant microlearning to be "pushed" to workers based on where they were in the facility, or to let them quickly find information specific to their location and the combustible dust hazards in that area. As expected, making the microlearning location-based further increased the employees' working knowledge of combustible dust hazards and the reporting of these hazards.

What is microlearning?

So what is microlearning? Microlearning was born out of the necessity to create quick, accessible training for employees. Microlearning resources capitalize on these smaller training opportunities since they:

· Are typically only two to three minutes long
· Can be accessed by employees when and how they choose
· Are tied to one specific learning objective or concept

How does microlearning work?

Microlearning isn't just a shorter course that's sometimes delivered on a mobile device. It's a departure from the idea of "courses" altogether. Instead, think of microlearning as resources that your employees access when and where they need them the most. Because it's not just shorter courses, microlearning is built around an employee's actual learning cycles. Instead of forgetting most of the content after a longer training, employees can continue to access those resources as they need them. This repetition helps move that information into long-term, working memory.

What are the benefits of microlearning?

Microlearning is more effective for most topics, is desperately needed by time- and attention-starved employees, and reduces strain on your current training resources.
The stats don't lie:

· 17% improvement in knowledge transfer when learning is broken into smaller chunks
· Mobile learners study an additional 40 minutes per week, on their own time
· 50% of college-going Millennials are not in favor of physical classroom learning
· Most employees only have 24 minutes a week to dedicate to training
· Microlearning resources take 300% less time and 50% less cost to produce than traditional courses

Why Make Training Location-Based?

Where training happens plays a big role in retention and in how well learners can apply knowledge and skills. In the case of microlearning, location means place AND time. The closer the training is to the place and time the information is needed, the more useful it is to your learners and the more likely they are to retain and apply the information in similar situations in the future.

What is Location-Based Training?

There are several ways to make training location-based. In the days before eLearning, it was chaining a binder of information at a workstation near a piece of manufacturing equipment. Today, thanks to mobile devices, digital information can be "tied" to specific locations in various ways. Training can be pushed to your employees using geofences and the location services on their phones, using Bluetooth beacons to make training available within areas of a facility or job site, or using RFID to make training available in a very specific place. Sometimes the best way to train is to allow your employees to pull the information when they need it. Barcodes, QR codes, and advancements in Augmented Reality (AR) make it possible for your learners to get just the information they need, when they need it, using the cameras on their phones.

Making the Switch

Many organizations have invested considerable cost and time in their existing eLearning, training that can sometimes require an hour or more for their employees to complete. In the best of scenarios, employees remember only a small fraction of the training, and the chances of the training resulting in lasting behavior change are minimal. How can companies follow ACME's example and leverage their existing eLearning investments to improve the effectiveness of their training without significant additional cost? Let's break this down into a 2-step process:

1. Convert your training to microlearning objects
2. Make your new microlearning available when and where your employees need the information.

Don't Reinvent the Wheel

Why start from scratch if you can use your existing online training as a starting point to save time and money?

1. Think small. Microlearning works best when it's short and focused on one very specific learning objective or task. Review your existing training and ask yourself, "What are the key points my employees need to remember and be able to apply?" Next, try to distill each point down to as few words as possible while noting any existing media elements you might want to leverage, such as images or audio clips. If you find your topic expanding to more than 2-5 minutes, think smaller.

2. Choose your media. One of the advantages of microlearning is that you have many options to choose from when it comes to the form your training should take: short videos, infographics, checklists, diagrams, audio clips, annotated photos, or any other type of media you can think of. If you can see or hear it on your phone, it can probably be used for training. Let the content and what you know about your employees drive your choice of media.
Is the topic something that is easier to understand with an infographic? A short illustrative animation? A simple text document? Sometimes a short eLearning video (such as one on SMART goals) will do the trick.

3. How do your employees like to learn? Are they used to learning home repair on YouTube? Do they prefer to read something at their own speed? If you're first starting out with microlearning, try some different formats and get feedback from your employees to find the best fit for your workforce.

4. Create your microlearning. Once you've determined the content you will use for each microlearning object and the format, it's time to start building. Remember, microlearning should convey the most important information quickly while being engaging and memorable. The tools you'll use to create your microlearning will depend on the format (video, audio, infographic, eLearning, etc.) and the tools you have at your disposal. Remember, it doesn't need to be fancy. Instead, think useful and engaging. If you're just getting started with microlearning, there's no need to go out and buy a lot of software. There are some great free or nearly-free programs out there. Even better, think about creating content using your phone or tablet. It will help you keep things simple, and you'll be guaranteed it will work well on your employees' mobile devices.

Get Your Microlearning Out There

Now that you have your microlearning objects, how will you get them to your employees when and where they need them?

1. Choose your platform. If your employees can't easily access microlearning when they need it, they are not likely to use it. The key is to remove barriers for your employees. Searching through a catalog to find a microlearning object in your LMS can be time-consuming and frustrating. Broaden your view and think outside the LMS. What platforms does your company already have that employees can easily access? Can they access those platforms from a mobile device? Will the microlearning be easy to find? Also think about whether you will push content or your employees will pull content. To make your microlearning more effective, consider using a system that can make delivery location-based. If your employees can view the information they are most likely to need based on their physical location in the workplace or proximity to a piece of equipment, they are more likely to use the microlearning you created. With platforms like PinPoint Workforce, you can make your microlearning location-based using geofences, Bluetooth beacons, barcodes, or QR codes. Employees can receive notifications on their phones when they are "in range" of relevant microlearning, or they can easily find information by scanning a barcode or QR code (a rough sketch of this push model appears at the end of this post).

2. Deliver. Once you've created your microlearning and decided how you will get the training to your employees, it's important to have a well-defined process for setting it up in the chosen delivery platform, for making sure your employees know what microlearning is available and how to access it, and, most importantly, for getting feedback from your employees to confirm the microlearning is providing what they need, when and where they need it. Your employees will provide valuable information to help you improve your microlearning and how that training is delivered.

Ready to Go Micro and Location-Based?

At EdgePoint Learning, we've been creating all types of custom eLearning (including microlearning) for over a decade.
We've partnered with PinPoint Workforce to help our customers stay ahead of the game in meeting their employees' needs, making them safer, and improving the bottom line through training. If you're ready to learn more about how you can create location-based microlearning for your employees, it's time to talk to the experts. PinPoint's mobile-native platform was created for just this purpose. Let PinPoint show you how they can help get you up and running and provide your employees the training they need, when and where they need it.
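As a rough illustration of the push model described above, here is a minimal sketch of location-triggered content selection. This is hypothetical code, not PinPoint's actual API: the zone names, coordinates, radii, and the haversine_m helper are all invented for the example.

```python
import math

# Hypothetical catalog mapping geofenced zones to microlearning objects.
ZONES = {
    "dust_collector_area": {"lat": 41.8781, "lon": -87.6298, "radius_m": 30,
                            "content": ["Combustible dust: 2-min refresher",
                                        "How to report a dust hazard"]},
    "loading_dock":        {"lat": 41.8785, "lon": -87.6290, "radius_m": 50,
                            "content": ["Forklift pre-use checklist"]},
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def content_in_range(lat, lon):
    """Return microlearning objects whose geofence contains the worker's position."""
    hits = []
    for zone in ZONES.values():
        if haversine_m(lat, lon, zone["lat"], zone["lon"]) <= zone["radius_m"]:
            hits.extend(zone["content"])
    return hits

# A phone reporting this position would be "in range" of the dust collector zone,
# so the dust refresher and hazard-reporting objects would be pushed to it.
print(content_in_range(41.8781, -87.6299))
```

A real platform would add authentication, beacon or QR triggers, and notification delivery; the point of the sketch is only that "push" reduces to matching a reported position against a catalog of geofenced content.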
Many people are familiar with Abraham Maslow's "hierarchy of needs," with self-actualization depicted at the top of a pyramid. Chances are, you learned about it in your introduction to psychology course in college or saw it diagrammed on Facebook (perhaps humorously, with "WiFi" or "toilet paper" added to the base of the pyramid). There are a lot of misconceptions about "Maslow's Pyramid". For one, Maslow never actually created a pyramid to represent his hierarchy of needs! Maslow was a developmental psychologist at heart and viewed human development as often involving a two-steps-forward, one-step-back dynamic, in which we continually return to our basic needs to draw strength, learn from our hardships, and work toward greater integration of our whole being. Rather than a lockstep pyramid, Maslow emphasized a different feature of the hierarchy. I believe this framework of human motivation is highly relevant to the uncertain times we are currently living in.

Deficiency vs. Growth

Maslow argued that all the needs can be grouped into two main classes, which must be integrated for wholeness: deficiency and growth. Deficiency needs, which Maslow referred to as "D-needs," are motivated by a lack of satisfaction, whether it's the lack of food, safety, affection, belonging, or self-esteem. The "D-realm" of existence colors all of our perceptions and distorts reality, making demands on a person's whole being: "Feed me! Love me! Respect me!" The greater the deficiency of these needs, the more we distort reality to fit our expectations and treat others in accordance with their usefulness in helping us satisfy our most deficient needs. In the D-realm, we are also more likely to use a variety of defense mechanisms to protect ourselves from the pain of having such deficiency in our lives. Our defenses are quite "wise" in the sense that they can help us avoid pain that feels like too much to bear in the moment.

Nevertheless, Maslow argued that the growth needs—such as self-actualization and transcendence—have a very different sort of wisdom associated with them. Distinguishing between "defensive-wisdom" and "growth-wisdom," Maslow argued that the Being-Realm of existence (or B-realm, for short) is like replacing a clouded lens with a clear one. Instead of being driven by fears, anxieties, suspicions, and the constant need to make demands on reality, one is more accepting and loving of oneself and others. Seeing reality more clearly, growth-wisdom asks "What choices will lead me to greater integration and wholeness?" rather than "How can I defend myself so that I can feel safe and secure?"

From an evolutionary point of view, it makes sense that our safety and security concerns, as well as our desires for short-lived hedonic pleasures, would make greater demands on our attention than our desire to grow as a whole person. As the journalist and author Robert Wright put it in his book Why Buddhism Is True, "The human brain was designed—by natural selection—to mislead us, even enslave us." All that our genes "care" about is getting propagated into the next generation, no matter the cost to the development of the whole person. If this involves narrowing our worldview and causing us to have outsize reactions to the world that aren't actually in line with reality, so be it. However, such a narrowing of worldview runs the risk of inhibiting a fuller understanding of the world and ourselves.
Despite the many challenges to growth, Maslow believed we are all capable of self-actualization, even if most of us do not self-actualize because we spend most of our lives motivated by deficiency. Maslow's emphasis on the dialectical nature of safety and growth is strikingly consistent with current research and theorizing in the fields of personality psychology, cybernetics, and artificial intelligence. There is a general consensus that optimal functioning of the whole system (whether human, primate, or machine) requires both stability of goal pursuit in the face of distraction and disruption and the capacity for flexibility to adapt and explore the environment.

Becoming a Fully Functioning Human

At a very young age, we feel hungry, or tired, or fearful, but are often given messages by well-meaning (and, sadly, often not-so-well-meaning) parents and other caretakers that "if you feel that way, I won't love you." This can happen in a number of subtle and unsubtle ways any time an expression of a need is disregarded as less important than the needs of the caretaker. And so we start acting how we think we should feel, not how we actually feel. As a result, so many of us grow up constantly swayed by the opinions and thoughts of others, driven by our own insecurities and fears of facing our actual self, that we introject the beliefs, needs, and values of others into the essence of our being. Not only do we lose touch with our real felt needs, but we also alienate ourselves from our best selves.

To the psychotherapist Carl Rogers, one of the founders of humanistic psychology, the loneliest state of all is not the loneliness of social relationships, but an almost complete separation from one's own experience. Based on his observations of a large number of patients who showed healthy development of their whole self, he developed the notion of the "fully functioning person." Like many of the other founding humanistic psychologists, Rogers was inspired by the existential philosopher Søren Kierkegaard, who noted that "to will to be that self which one truly is, is indeed the opposite of despair." According to Rogers, the fully functioning person:

- Is open to all of the elements of their experience,
- Develops a trust in their own experience as an instrument of sensitive living,
- Accepts that the locus of evaluation resides within themselves, and
- Is learning to live their life as a participant in a fluid, ongoing process, in which they are continually discovering new aspects of themselves in the flow of their experience.

Rogers believed that we each have an innate self-actualizing tendency that can be explained by the existence of an organismic valuing process (OVP). According to Rogers, the OVP is a vital part of humanity that evolved to help the organism move in the direction of growth, constantly responding to feedback from the environment and correcting choices that consistently move against the current of growth. Rogers believed that when people are inwardly free to choose their deepest values, they tend to value experiences and goals that further survival, growth, and development, as well as the survival and development of others.

Modern research supports the existence and importance of an OVP in humans. Positive organizational psychologists Reena Govindji and P. Alex Linley created a scale to measure the OVP and found that it was positively correlated with greater happiness, knowledge and use of one's greatest strengths, and a sense of vitality in daily life.
Here are some statements that can give you a rough estimate of how in touch you are with your deepest feelings, needs, and values:

ORGANISMIC VALUING SCALE

- I know the things that are right for me.
- I get what I need from life.
- The decisions I make are the right ones for me.
- I feel that I am in touch with myself.
- I feel integrated with myself.
- I do the things that are right for me.
- The decisions I make are based on what is right for me.
- I am able to listen to myself.

In another line of research on the OVP, Kennon Sheldon conducted a series of clever experiments demonstrating that, when given autonomy, people do tend to favor the growth choice over time. Sheldon gave people free choice over time from a wide menu of goals and found that the goals naturally grouped into two main clusters, security vs. growth:

Security goals:
- Have well-respected opinions.
- Have many nice things.
- Be admired by many others.
- Be well-known to many.
- Be financially successful.
- Be well-liked and popular.
- Find a good, high-paying job.

Growth goals:
- Help those who need it.
- Show affection to loved ones.
- Feel much loved by intimates.
- Make others' lives better.
- Be accepted for who I am.
- Help improve the world.
- Contribute something lasting.

Sheldon found that under conditions of complete freedom to choose, people tend to move toward growth, changing their minds over time in directions most likely to be growth-enhancing. Of course, the goal isn't to become 100 percent growth-oriented and 0 percent security-driven; we need both security goals and growth goals. The point here is that under optimal conditions for choosing, the relative balance over time tends to tip toward growth. In fact, Sheldon found that those with the highest initial adoption of security goals shifted the most toward growth goals over time. As Sheldon notes, those holding "unrewarding values are most in need of [growth-relevant] motivational change and are thus most likely to evidence such change."

Therefore, the research suggests that when free of anxiety, fear, and guilt, most people tend not only to move in the direction of realizing their unique potential but also to move in the direction of goodness. This should give us hope and point to what is possible under optimal conditions. But it should also give us a healthy dose of realism, considering that in the real world most people are not entirely free to choose their most valued direction. The cultural climate matters a lot. For instance, many individuals with marginalized identities—whether based on ethnicity, race, religion, gender, socioeconomic status, sexual orientation, disability, or even special education status ("learning disabled," "gifted," "twice-exceptional")—often do not receive the environmental support and encouragement they need to feel comfortable fully expressing themselves. Such individuals may have greater difficulty feeling authentic in environments where they truly do not feel as though they fit in, or in which their minority status is salient to them and everyone around them. The culture of an institution can also have an effect on everyone within it.
Sheldon found that new law students shifted toward security goals and away from growth goals during their first year of law school, presumably because "traditional legal education induces profound insecurity, which serves to alienate students from their feelings, values, and ideals." There are many other harsh and unpredictable environmental conditions that can lead people to be more safety-focused at the expense of growth. All over the world, at this very moment, people are finding themselves in just such a position, with the growing coronavirus pandemic forcing us all into a prolonged state of extreme insecurity and anxiety.

Not only can environmental conditions impede the realization of our self-actualizing tendency, but even within ourselves we have many different (often unconscious) aspects of our mind constantly clamoring for our attention. This is why awareness is so important, including awareness of our inner conflicts and extreme traits. Ultimately, though, we must choose growth, over and over again. As Maslow wrote, "One can choose to go back toward safety or forward toward growth. Growth must be chosen again and again; fear must be overcome again and again."

From TRANSCEND by Scott Barry Kaufman, Ph.D., published by TarcherPerigee, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright (c) 2020 by Scott Barry Kaufman.
Shearing Process and Shears

Shearing is a simple process used for the cutting of metals. It is basically a metal fabrication process and is used in rolling mills at several places for hot as well as cold cutting of the steel material. Shears can be used to cut steel and other materials of any size or shape. In the shearing process, metal is separated by applying a force great enough to cause the material to fail. The most common shearing processes (such as shearing, punching, piercing, slitting, and blanking) are performed by applying a shearing force. When a very large shearing force is applied, the shear stress in the material exceeds the ultimate shear strength, and the material fails and separates at the cut location.

The shearing force is applied by two blades, one above and the other below the material (the upper blade and the lower blade). These blades are positioned at an angle relative to each other and are forced past each other with the space between them determined by a required offset. Normally, one of the blades remains stationary. The shearing blades typically have a square edge rather than a knife-edge and are available in different materials, such as high carbon steel, low alloy steel, and tool steel.

During the shearing process, shear stress is applied along the thickness of the material being cut. Shearing happens by severe local plastic deformation followed by fracture, which propagates deeper into the thickness of the material. Since shearing involves plastic deformation due to shear stress, the theoretical force required for shearing equals the ultimate shear strength of the material multiplied by the area being sheared. Due to friction between the shear blades and the material, the actual force required is always greater than this theoretical value.

A small clearance is present between the edges of the upper and lower blades, which facilitates the fracture of the material. Blade clearance is the distance between the upper and lower blades of the shear as they pass each other during the shearing process. The clearance between the two blades is an important parameter which decides the shape of the sheared edge. Large clearance leads to a rounded edge; the edge is distorted, has burr, and the shearing load is also higher. Insufficient clearance leaves sheared pieces with a double cut. Also, a ductile material develops a burr of larger height. For harder materials and higher thicknesses, larger clearances are required. Generally, clearance can vary between 2 % and 10 % of the material thickness. The size of the clearance depends upon several factors, such as the specific shearing process, the material, and the material thickness. An optimum blade setting allows the material to fracture cleanly. Most shears are equipped with either a manual or powered blade clearance system; however, in some cases there can be an awkward method of setting it or a limited amount of adjustment.

Usually shearing begins with the formation of cracks on both sides of the material being sheared, which propagate with the application of shear force. A shiny, burnished surface forms at the sheared edge due to rubbing of the material along the blades. Shear zone width depends on the speed of shear blade motion: a larger speed leads to a narrow shear zone with a smooth shear surface, and vice versa.
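As a rough worked example of the force and clearance relations above: for a straight, parallel cut, the theoretical force is the ultimate shear strength times the sheared area, F = τ × L × t. The numbers below are illustrative assumptions, not data from any particular mill or material data sheet.

```python
# Illustrative estimate of shearing force and blade clearance.
# Assumed values -- consult real material data before sizing a shear.
ULT_SHEAR_STRENGTH_MPA = 300   # assumed ultimate shear strength of a mild steel
CUT_LENGTH_MM = 2000           # length of the cut (blade engagement)
THICKNESS_MM = 10              # material thickness

# Theoretical force for a straight (zero-rake) cut: stress x sheared area.
# 1 MPa acting over 1 mm^2 equals 1 N.
area_mm2 = CUT_LENGTH_MM * THICKNESS_MM
force_kn = ULT_SHEAR_STRENGTH_MPA * area_mm2 / 1000.0
print(f"Theoretical shearing force: {force_kn:.0f} kN")   # 6000 kN

# Friction between blade and material raises the real requirement, while a
# rake (inclined blade) lowers it by shearing only part of the section at a
# time; both effects are omitted in this sketch.

# Clearance guideline from the text: 2 % to 10 % of thickness.
print(f"Clearance range: {0.02 * THICKNESS_MM:.1f} to {0.10 * THICKNESS_MM:.1f} mm")
```

For the assumed 10 mm plate, the clearance guideline works out to 0.2 mm to 1.0 mm between the blade edges.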
The quality of the cut during the shearing process is directly proportional to the sharpness of the shear blades. Dull blades leave ragged edges. The rake angle of the blade (the angle of the moving blade as it passes the fixed blade) is also important in determining the quality of the cut. Generally, the lower the rake angle, the better the quality of the cut. Problems with cut quality, such as bow, twist, and camber, are seen on shorter pieces (up to 100 mm long) which fall behind the shear after they are cut. Shears with lower rake angles require more power than those with a higher rake angle.

Geometry of the shearing zone

The effects of shearing on the material change as the cut progresses and are visible on the edge of the sheared material. When the blade impacts the material, the clearance between the blades allows the material to plastically deform, and a small projection is formed. This projection is known as rollover, and it corresponds to the small depression made by the shear blade on the material. Below this lies the burnished surface, a smooth surface formed by the rubbing of the sheared surface against the shear blade. The burnished region is usually located on the upper side. Below the burnished zone lies the fracture zone, and the burr is formed below the fracture zone. Burr is a sharp edge formed at the end of the process due to the elongation of the material before it is completely severed. The depth of the deformation zone depends on the ductility of the metal: if ductility is small, the depth of this zone is small. The depth of penetration of the shear blade into the material is the sum of the rollover height and the burnishing zone height. The depth of the rough zone increases with increase in ductility, material thickness, or clearance. There is severe shear deformation in the fracture zone. The stages in the process of shearing are shown in Fig 1.

Fig 1 Stages in the process of shearing

Shears and their types

Shears are used to cut thin sheet, plates, billets, rounds, squares, sections, beams, and bars etc. in rolling mills. Depending on the application, shears typically employ a fixed lower blade and a moving upper blade to perform the cutting action. The type of shears to be used is determined by many factors, including the material length that it can process and the thickness and type of material which it has to cut. Shears which are used in rolling mills before the cooling bed are called hot shears, while the shears which are used after the cooling bed are cold shears. Hot shears cut the desired length as well as crop the front and tail ends of the rolling stock. These shears also cut the bar being rolled in case of a cobble in the rolling mill. Hot shears are designed to cut the rolling stock at the rolling temperature. Cold shears are used to cut the rolled product into desired saleable lengths.

The types of shears normally used in rolling mills are (i) crop and cobble shears, (ii) cooling bed dividing shears, and (iii) rotary shears. The crop and cobble shear used in hot rolling mills crops the front end and tail end, and also performs segment cutting in case of eventualities. These shears are usually of the start/stop type and are driven either through a flywheel-mounted pneumatic clutch/brake or directly by a DC (direct current) motor. These shears are controlled through a PLC (programmable logic control) system and provide very close tolerance on the cut length. The cooling bed dividing shear is used to cut cooling bed lengths. This shear is usually designed for low surface temperatures.
This shear is generally installed before entry to the cooling bed. The cooling bed shear is generally of the stop/start and continuous operating type and is driven by a direct DC motor drive. This shear is also normally controlled through a PLC system, and hence very close tolerance on cut length is achieved. The rotary shear is a cost-effective shear which is used to crop the front end and tail end, as well as to scrap the material under rolling during an emergency. This shear is of the continuously rotating type. Generally this shear is used to trim material in the hot rolling mill at considerably lower speed.

Hot shears are usually flying shears. Flying shears are those shears which cut the material while it is moving in the rolling mill at the rolling speed. Flying shears are used for cutting applications where endless material to be cut to length cannot be stopped during the cutting process and the cut is required to be effected 'on the fly'. The mechanical construction provides a shear system mounted on a carriage, which follows the material at synchronous speed while cutting is in progress and then returns to a home position to wait for the next cut. The flying shear control is based on a PLC system. The system is generally designed for the special requirements of flying shears, with consideration of maximum efficiency and accuracy at minimum stress for all mechanical parts.

Shears can also be categorized into different types by shear design and the drive systems which are used in the design. Two design types are common to power squaring shears. They are (i) the guillotine shear (also known as the slider unit), and (ii) the swing beam shear. A guillotine shear has a moving blade which runs on straight slides. The moving blade is almost parallel to the fixed blade during the entire stroke. The guillotine design (see Fig 2) uses a drive system to power the moving blade down. The guillotine shear requires a gibbing system to keep the blade beams in the proper position as they pass each other. The shear with a swing beam design uses one of the drive systems to pivot the moving blade down on roller bearings. This eliminates the need for a gibbing system to keep the blades in proper position as they pass each other.

Fig 2 Types of shears

The drive system of the shear powers the moving blade through the material to make a cut. Drive systems can be categorized into five basic types, namely (i) foot or manual, (ii) air, (iii) mechanical, (iv) hydro-mechanical, and (v) hydraulic. In rolling mills, normally the last three types of shear drives are used. A foot shear is engaged when the operator steps on a treadle to power the blade beam down to make a cut. Foot shears are generally used in sheet metal applications. For using an air shear, an operator steps on a pedal which activates air cylinders to make a cut. A shop air system or a freestanding air compressor is used to power an air shear. Air shears have a simple drive design, and they provide overload protection. The direct-drive mechanical shear operates when the operator steps on a pedal to turn on the motor that brings the beam down to make a cut. The motor turns off at the end of the cycle, and the blade beam returns to the top of the stroke. This design is suitable for shears which are not in constant use, because the machine uses power only when it is activated. In the case of the flywheel-type mechanical shear, the operator steps on a pedal to activate a clutch which engages the flywheel to generate the power to move the blade beam down.
Mechanical shears are fast and have a better design for cutting certain types of material. The hydro-mechanical shear has a hydraulic cylinder or cylinders which power a mechanical device, such as an arm, to move the blade beam down to make a cut. In this type of shear, a smaller hydraulic system can be used, since the mechanical device produces the power. The hydraulic shear works on hydraulic power alone and is operated when the operator steps on a pedal to activate the hydraulic cylinders which power the blade beam.

There are a number of shear configurations which have been developed over the years, and which have different potentials for improvement via revamps. These are described below.

Clutch and brake shears – These are of an older design, but can benefit from new automation, though accuracy and repeatability are limited by clutch and brake system performance. The main advantage of this type of shear is the possibility of fine-tuning clutch and brake timing to optimize accuracy and friction material life. Also, a new control system can improve cutting repeatability by minimizing the electrical error.

Start/stop shears – These shears are very similar to clutch and brake shears, but in this case the motor and the shear gear box are permanently connected. This kind of shear needs very accurate blade position control to assure high precision and reliability. In present applications of these shears, it is usually not necessary to replace the entire system, merely to apply a new motion control system to the existing drive.

Rotating shears – These shears use leading-edge technology and are used when high speed and accuracy are required. This is achieved by an optimized combination of motion control strategies aiming to get the best performance with the minimum effort from the machine. Fast dynamic motion applied to the rotating blades and the diverter is necessary to deliver highly versatile and accurate rotating shears, capable of head and tail crops, scrapping, and cut-to-measure at speeds of up to 100 meters per second. A peculiarity of rotating shears is the synergy between a high inertia system (the shear blades) and a low inertia system (the diverter). The big challenge for upgrades is to use the same motion control system for both parts, optimizing it for the two different tasks.

The common types of shears normally installed in rolling mills are as below.

Snap shears – These shears are normally arranged at the entry side of rolling mill stand 1. They are used for dividing the hot input material conveyed to the rolling mill.

Pendulum shears – These shears consist of cutting systems suspended in an 'oscillating' configuration. The cut can be performed on material which is travelling or stopped. These shears are used for cropping the head end or tail end, or for dividing the hot input material conveyed to the rolling mill.

Universal shears – These shears are usually designed for higher product speeds and are normally used for head end and tail end cropping as well as for cobble cutting. In these shears the cut is initiated by an automatic pulse. Universal shears are generally of the continuously running type.

Dual system shears – These shears are normally used as cooling bed shears. They are equipped with two cutting systems, namely (i) a crank rotary system, and (ii) a crank lever system. The crank lever system is mainly used for cutting sections. The shear is moveable perpendicular to the rolling direction in order to bring the system being used into the line of rolling.
Crank shears – These shears can be designed as (i) continuously running shears, (ii) start-stop shears, and (iii) coupling shears with a coupling-brake combination. These shears are used for cropping, dividing, or emergency cutting during a cobble. Shear start is initiated by a pulse generator. Crank shears can be crank lever shears or double crank shears.

Drum shears – A drum-type shear is generally used for products with a simple shape, such as flats or rounds. The blades are mounted on a rotating cylinder (or drum) and are set at a 'lead' speed to minimize the 'kinking' of the bar.

Cold shears – These shears are for cutting cooling bed lengths into saleable lengths. They are installed downstream of the cooling bed exit roller table, after the straightener. Cold shears can also be flying shears.

The shears need optimized motion control. This includes dedicated motion planning algorithms, drive parameter optimization, and fine-tuning parameter recording. The shear control system block diagram is shown in Fig 3. All the control technologies are flexible and support the different set-ups which are required for different products and different mechanical arrangements, such as shears with a combination of flying and crank arms and an optional flywheel. The start-stop shear cycle can be summarized as (i) acceleration, during which the motor starts from the home position and accelerates to the speed needed to perform the cut (synchronization speed), (ii) synchronization, during which the motor remains at constant speed from the moment the blades impact the bar until they exit from the synch angle, (iii) deceleration, during which the motor decelerates from the synchronization speed to zero speed, and (iv) repositioning, during which, starting from the stop position, the motor is moved to the initial home position, ready for the next cut. The cut cycle is normally performed using an electronic cam. This function controls the position of the slave axis (shear blades) according to the position of the master axis (material position).

Fig 3 Typical start-stop shear configuration and shear control system block diagram

Optimized parameters for different products are easily selected by an integrated recipe system and combined with automatically computed motion paths, bringing several advantages such as (i) reduced mechanical stress and wear, (ii) reduced operating noise, (iii) reduced electrical stress on both drive and motor, (iv) reduced energy requirements, and (v) cost-effective selection of motors and drives.

The main components of the system are given below.

Axis control – This is the heart of the control system and controls the position of the shear blades to assure precision and repeatability of the cut length. To perform this function it receives, as inputs, the encoder of the stand, the encoder of the shear, the hot metal detector (HMD), and the proximity switch, and it generates as output the speed or torque request for the shear drive.

Master encoder – This is the incremental encoder connected to the stand motor and is used to detect the material position.

Shear encoder – This is the incremental encoder connected to the shear motor and is used to detect the shear blade position.

Hot metal detector – This is the sensor which is necessary to determine the head and the tail of the bar for bar position tracking.

Shear proximity switch – This is the sensor which is used to reset the shear position at the moment of the cut.
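To make the electronic-cam idea concrete, here is a minimal sketch of a piecewise cam profile for the start-stop cycle described above. It is illustrative logic in Python, not any vendor's PLC code; the phase lengths and the parabolic blend segments are assumptions chosen only so that the synchronization condition (blade speed equal to bar speed during the cut) is visible in the math.

```python
def cam_profile(master_mm, accel_len=400.0, sync_len=100.0, decel_len=400.0):
    """Electronic cam: slave (blade) position as a function of master
    (material) position for one start-stop cut cycle.

    During synchronization the slope dslave/dmaster is 1, i.e. the blade
    tracks the bar speed exactly; acceleration and deceleration are
    parabolic blends from and back to standstill. Units are arbitrary.
    """
    if master_mm <= 0:
        return 0.0                                   # waiting at home position
    if master_mm < accel_len:                        # (i) acceleration
        return 0.5 * master_mm**2 / accel_len        # slope ramps 0 -> 1
    s = 0.5 * accel_len                              # slave travel so far
    m = master_mm - accel_len
    if m < sync_len:                                 # (ii) synchronization
        return s + m                                 # slope held at 1 (cut here)
    s += sync_len
    m -= sync_len
    if m < decel_len:                                # (iii) deceleration
        return s + m - 0.5 * m**2 / decel_len        # slope ramps 1 -> 0
    return s + 0.5 * decel_len                       # stopped; (iv) repositioning
                                                     # is a separate move to home

# Sample the profile: the slope (blade speed / bar speed) peaks at 1 in sync.
for m in (0, 200, 400, 450, 500, 700, 900):
    print(m, round(cam_profile(m), 1))
```

In a real drive, the cam table is evaluated against the master encoder counts in the PLC scan, and the axis control turns the position demand into the speed or torque request mentioned above.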
Who invented the first coffee maker? - FAQ

Those who are looking for an answer to the question «Who invented the first coffee maker?» often ask the following questions:

🚩 Who invented the coffee maker?

Coffee makers date all the way back to the Turks in 575 A.D., but it wasn't until 1818 that the first coffee percolator was created. This coffee pot became known as the "cowboy pot" because so many cowboys had begun using it. Between 1835 and 1850, coffee makers began to saturate the marketplace: everything from glass balloons to pressure steamers to grinders and roasters had started to become available. In 1890, the Manning-Bowman Percolator started to be distributed in the United States. These percolators used a linen cloth filter that had to be cleaned after every use. It wasn't until 1912 that a paper filter was finally introduced; this helped coffee makers explode in popularity, since the clean-up was now so much easier. In 1963, the first automatic drip coffee brewers and fluted disposable coffee filters were invented and made available for commercial use, thanks to George Bunn (Bunn-O-Matic Corporation). It was 1972 when home drip coffee makers became available, thanks to Mr. Coffee.

🚩 When was the coffee maker invented?

Coffee percolators appeared in Europe and America in 1865, thanks to inventor James Mason.

🚩 When was the drip coffee maker invented?

1954 – The first electric drip coffee maker, called the Wigomat, was invented in Germany by Gottlob Widmann.

We've handpicked related questions, similar to «Who invented the first coffee maker?», so you can surely find the answer!

Which coffee maker makes the best coffee?

- The best coffee makers of 2019:
  1. The best drip coffee maker: Technivorm Moccamaster
  2. The best value drip coffee maker: Black & Decker CM1100B
  3. A mid-range priced drip coffee maker: Cuisinart PurePrecision Pour-Over Coffee Brewer
  4. The best cold brew coffee maker (and best value): Takeya Cold Brew Coffee Maker

Can a coffee maker boil water?

Can you use a coffee maker to boil water? Stovetop coffee makers can indeed boil water, as anything you put on a stovetop can. They are a great method for boiling water, as the closed chamber heats up much quicker inside. This allows the coffee maker to provide a quick source of boiling water.

Who invented coffee capsules?

- Éric Favre, the Swiss inventor who put coffee into capsules.

Who invented Coffee Crisp?

Coffee Crisp was invented in Canada.

Who invented coffee lids?

Who invented coffee machines?
- The first device that can be called a "coffee machine" was the work of the French pharmacist François Antoine Descroisilles, who in 1802 had the happy idea of joining two metal containers and separating them with a plate with holes in it (what today would be a strainer or filter). His invention was called the caféolette.

Can you make coffee without a coffee maker?

- Thankfully, coffee can still be brewed without any kind of maker or contraption. In fact, it's surprisingly easy to make a great cup without a coffee maker.

How is instant coffee made?

Instant coffee is a type of coffee made from dried coffee extract. There are two main ways to make it: spray-drying, in which coffee extract is sprayed into hot air that quickly dries the droplets into fine powder or small pieces, and freeze-drying.

How much coffee for a coffee maker?

The standard ratio for brewing coffee is 1-2 tablespoons of ground coffee per 6 ounces of water – 1 tablespoon for lighter coffee and 2 for stronger coffee. That 6-ounce measure is equivalent to one "cup" in a standard coffeemaker, but keep in mind that the standard mug size is closer to 12 ounces or larger.

How much coffee does a Toddy maker take?

- The Toddy brewing container is designed to hold one (1) pound of coffee and nine (9) cups (72 fluid ounces) of water. (If your coffee is packaged in sizes larger or smaller than one pound, see the maker's detailed proportion suggestions.)

How much is a Tassimo coffee maker?

Compare with similar items:

| | TASSIMO Single Serve Coffeemaker, T45 | Cuisinart SS-10P1 Premium Single-Serve Coffeemaker, 72 Oz, Silver |
|---|---|---|
| Item Dimensions | 12 x 8 x 8 inches | 11.03 x 9.33 x 12.13 inches |

How to choose coffee maker accessories?

Making coffee at home is not a difficult task, but making good coffee is all about how well you accessorize your coffee maker. There are several things you can add to make sure you have the freshest coffee possible. The carafe is one of the basic coffee maker accessories that is an absolute must. Carafes come in 12-cup, 10-cup, and smaller sizes to accommodate your coffee maker. Carafes mostly come in glass, but some come in plastic or stainless steel, again depending on your coffee maker. Thermal carafes have gained popularity because of their portability, and they also seem to keep the coffee hotter longer than the glass carafes do. You can buy thermal vacuum carafes at just about any discount or home goods store.

Coffee filters are another one of those basic accessories that you've just got to have, unless you use those premeasured coffee pouch packs. The most important thing to consider when buying coffee filters is to get the right size. Watch the labels carefully, because some are for commercial models and others are for home models. Filters are made by brand and cup size of the coffee maker. Coffee filters are available in the standard white and also brown; the brown ones are unbleached, avoiding the chemicals used in making white filters, and are supposed to be more environmentally friendly as well. If you want to cut down on waste or have environmental concerns, you can get permanent coffee filters instead of the regular, disposable ones. The advantage of permanent filters is that you will never be caught running out of filters, but the disadvantage is that they don't filter grounds as well as the disposable paper ones.

Coffee makers need to be cleaned out periodically. How often is determined by how much you use them and how much coffee you run through them.
Although water and white vinegar are the typical way to do this, cleaning tablets work well to get residue out of your coffee maker. Depending on your volume of coffee needs, you might want to purchase a single or double burner coffee warmer that you can plug in next to your coffee maker on the countertop. The burners only accommodate glass carafes, but they keep your coffee hot while you brew a new batch if you are serving several people.

How to paint a coffee maker?

- If you do want to paint it, let it dry well, then spray paint it with automotive engine paint. That is not a water-soluble latex paint. If you have to clean up overspray, use turpentine. It is best to hang it on a stick outdoors and spray it there. Engine paint can stand temperatures twice as hot as the coffee pot keep-warm plate ever gets.

Ninja coffee maker: how many scoops?

- The SCA (Specialty Coffee Association) recommends using 11 g (2 tbsp) of coffee per 6 oz of water. Here are my scoop recommendations when brewing with the Ninja coffee bar: start with the classic brew, and if you need it to be richer, select the "rich" brew. Cup – 3 small scoops.

Should I unplug my coffee maker?

So, should you unplug your coffee maker when you're not using it? Absolutely! The US Department of Energy highlights the fact that your electric bill might increase by anywhere from $100 to $200 per year due to all the appliances that are using energy 24/7.

What coffee maker uses instant pods?

Melitta offers coffee makers that use instant coffee and tea pods.

What is a grind control coffee maker?

Breville's Grind Control is the first coffee maker for your home with an adjustable grinder and calibration function that lets you grind before brewing as well as customize grind size and coffee volume to suit your personal coffee preferences.

What is the cleanest coffee maker?

- Cuisinart Premier Coffee Series 12-Cup Programmable Coffee Maker
- Proctor Silex 10-Cup Coffee Maker. Users chose the Proctor Silex 10-Cup Coffee Maker as the #2 best coffee maker in terms of cleanup.
- Zojirushi ZUTTO Coffee Maker
- Hamilton Beach Ensemble 12-Cup Coffee Maker 43254

What is the use of a coffee maker?

To make coffee.

Why is my coffee maker watery?

Watered-down coffee is no fun and can be caused by a variety of factors. These include not using enough coffee, not brewing for long enough, not brewing hot enough, or using the wrong grind size. Check your brewing time, grind size, and water temperature.
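As a rough worked example of the "1-2 tablespoons per 6 ounces" guideline quoted earlier (a sketch only; the function name and mug sizes are illustrative, not from any manufacturer):

```python
def grounds_needed(mug_oz, strength_tbsp_per_6oz=1.5):
    """Tablespoons of ground coffee for a given mug, per the 1-2 tbsp per
    6 oz guideline (1 = lighter, 2 = stronger; 1.5 splits the difference)."""
    return mug_oz / 6.0 * strength_tbsp_per_6oz

# A standard 12-ounce mug needs 2-4 tablespoons depending on taste:
print(grounds_needed(12, 1))   # 2.0 tbsp for lighter coffee
print(grounds_needed(12, 2))   # 4.0 tbsp for stronger coffee
print(grounds_needed(60))      # a 10-"cup" (60 oz) carafe: 15.0 tbsp
```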
Part Of: Bayesianism series
Content Summary: 2300 words, 23 min read
Epistemic Status: several of these ideas are not distillations, but rather products of my own mind. Recommend a grain of salt.

The Biology of Uncertainty

In the reinforcement learning literature, there exists a bedrock distinction between exploration and exploitation. A rat can either search for a new food source or continue mining calories from his current stash. There is risk in exploration (what if you don't find anything better?) and often diminishing returns (if you're confined to two miles from your sleeping grounds, there's only so much territory that needs to be explored). But without exploration, you hazard large opportunity costs and your food supply becomes quite fragile.

Exploitation can be conducted unconsciously. You simply need nonconscious modules to track the rate of returns provided by your food site. These devices will alarm if the food source degrades, but otherwise don't bother you much. In contrast, exploration engages an enormous amount of cognitive resources: your cognitive map (neural GPS), action plans, world-beliefs, causal inference. Exploration is about learning, and as such requires consciousness. Exploration is paying attention to the details.

Exploration will tend to produce probability matching behaviors: your actions are in proportion to your action value estimates. Exploitation tends to produce maximizing behaviors: you always choose the action estimated to produce the most value.

Statistics and Controversy

Everyone agrees that probability theory is a profoundly useful tool for understanding uncertainty. The problem is, statisticians cannot agree on what probability means. Frequentists insist on interpreting probability as relative frequency; Bayesians interpret probability as degree of confidence. Frequentists use random variables to describe data; Bayesians are comfortable also using them to describe model parameters.

We can reformulate the debate as between two conceptions of uncertainty. Epistemic uncertainty is the subjective Bayesian interpretation, the kind of uncertainty that can be reduced by learning. Aleatory uncertainty is the objective Frequentist stuff, the kind of uncertainty you accept and work around.

Philosophical disagreements often have interesting implications. For example, you might approach deontological (rule-based) and consequentialist (outcome-based) ethical theories as a winner-take-all philosophical slugfest. But Joshua Greene has shown that both camps express unique circuitry in the human mind: every human being experiences both ethical intuitions during moral dilemmas (but at different intensities and with different activation profiles). The sociological fact of persistent philosophical disagreement sometimes reveals conflicting intuitions within human nature itself. Controversy reification is a thing.

Is it possible this controversy within the philosophy of statistics suggests a tension buried in human nature? I submit these rivaling definitions of uncertainty are grounded in the exploration and exploitation repertoires. Exploratory behavior treats unpredictability as ignorance to be overcome; exploitative behavior treats unpredictability as noise to be accommodated. All vertebrates possess two ways of approaching uncertainty. Human philosophers and statisticians are rationalizing and formalizing truly ancient intuitions.
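To make the two repertoires concrete, here is a minimal sketch of the two action-selection rules named above. This is my own illustrative Python, not from the reinforcement learning literature's canonical implementations, and the value estimates are made-up numbers.

```python
import random

# Made-up action value estimates for three foraging sites.
action_values = {"site_a": 5.0, "site_b": 3.0, "site_c": 2.0}

def maximize(values):
    """Exploitation: always pick the action with the highest estimated value."""
    return max(values, key=values.get)

def probability_match(values):
    """Exploration: sample actions in proportion to their estimated values."""
    actions = list(values)
    weights = [values[a] for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

print(maximize(action_values))             # always 'site_a'
print(probability_match(action_values))    # 'site_a' half the time (5/10), etc.
```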
Cleaving Nature At Its Joints

Most disagreements are trivial. Nothing biologically significant hinges on the fact that some people prefer the color blue, and others green. Do frequentist/Bayesian intuitions resemble blue/green, or deontological/consequentialist? How would you tell? Blue-preferring statements don't seem systematically different from green-preferring statements. But intuitions about epistemic vs aleatory uncertainty do systematically differ. The psychological data presented in Brun et al (2011) is very strong on this point.

Statistical concepts are often introduced with ridiculously homogeneous events, like a coin flip. It is essentially impossible for a neurotypical human to perfectly predict the outcome of a coin flip (which is determined by the arcane minutiae of muscular spasms, atmospheric friction, and chaos theory). Coin flips are perceived as interchangeable: the location of the flip, the atmosphere of the room, the force you apply – none of these seems to disturb the outcome of a fair coin. In contrast, epistemic uncertainty is perceived within single-case heterogeneous events, such as the proposition "Osama Bin Laden is inside the compound." As mentioned previously, these uncertainties elicit different kinds of information search (causal mental models versus counting), different linguistic markers ("plausible" vs "chance"), and even different behaviors (exploration vs exploitation).

People experience epistemic uncertainty as more aversive. People prefer to guess the roll of a die, the sex of a child, and the outcome of a horse race before the event rather than after. Before a coin flip, we experience aleatory uncertainty; if you flip the coin and hide the result, our psychology switches to a more uncomfortable sense of epistemic uncertainty. We are often less willing to bet money when we experience significant epistemic uncertainty. These epistemic discomforts make sense from a sociological perspective: if we sit under epistemic uncertainty, we are more vulnerable to being exploited – both materially by betting, and reputationally by appearing ignorant.

Several studies have found that although participants tend to be overconfident when assessing the probability that their specific answers are correct, they tend to be underconfident when later asked to estimate the proportion of items that they had answered correctly. While the particular mechanism driving this phenomenon is unclear, the pattern suggests that evaluations of epistemic vs aleatory uncertainty rely on distinct information, weights, and/or processes.

People can also be primed to switch their representation. If you advise a person to "think like a statistician", they will invariably adopt the more aleatory, frequency-based framing. The same is true drawing balls from an urn: if you draw a ball but don't show its color, people switch from the Outside View (extensional) to the Inside View (intensional).

Other Appearances of the Distinction

Perhaps the most famous expression of the distinction comes from Donald Rumsfeld in 2002:

"As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones."
Other Appearances of the Distinction

Perhaps the most famous expression of the distinction comes from Donald Rumsfeld in 2002:

As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

You can also find the distinction hovering in Barack Obama's retrospective on the decision to raid a suspected OBL compound:

- The question of whether Osama Bin Laden was within the compound is an unknown fact – an epistemic uncertainty.
- The question of whether the raid would be successful is an outcome of a distribution – an aleatory uncertainty.

A related distinction, Knightian uncertainty, comes from the economist Frank Knight. "Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated…. The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating…. It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all."

It is well illustrated by the Ellsberg Paradox: offered a bet on an urn with 50 red and 50 black balls (known composition) and a bet on an urn with 100 red and black balls in unknown proportion, most people prefer to bet on the known urn, even though no single probability assignment can justify preferring it for both colors. As Hsu et al (2005) demonstrates, people literally use different systems in their brains to process these two games. When the game structure is known, the reward processing centers (the basal ganglia) are used. When the game structure is unknown, fear processing centers (amygdala nuclei) are instead employed.

Mousavi & Gigerenzer (2017) use Knightian uncertainty to defend the rationality of heuristics in decision making. Nassim Taleb's theory of "fat tailed distributions" is often interpreted as an affirmation of Knightian uncertainty, a view he rejects.

Towards a Formal Theory

For some, Knightian uncertainty has been a rallying cry driven by discontents with orthodox probability theory. It is associated with efforts at replacing its Kolmogorov foundations. Intuitionistic probability theory, replacing classical axioms with computationally tractable alternatives, is a classic example of this kind of work. But as Weatherson (2003) notes, other alternatives exist:

It is a standard claim of modern Bayesian epistemology that reasonable epistemic states should be representable by probability functions. There have been a number of authors who have opposed this claim. For example, it has been claimed that epistemic states should be representable by Zadeh's fuzzy sets, Dempster and Shafer's evidence functions, Shackle's potential surprise functions, Cohen's inductive probabilities or Schmeidler's non-additive probabilities. A major motivation of these theorists has been that in cases where we have little or no evidence for or against p, it should be reasonable to have low degrees of belief in each of p and not-p, something apparently incompatible with the Bayesian approach.

Evaluating the validity of these heterodoxies is beyond the scope of this article. For now, let me state that it may be possible to simply accommodate the epistemic/aleatory distinction within probability theory itself. As Andrew Gelman claims: the distinction between different sources of uncertainty can in fact be encoded in the mathematics of conditional probability. So-called Knightian uncertainty can be modeled using the framework of probability theory.
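Gelman's point can be shown in a few lines. In a Beta-Binomial model of a coin with unknown bias (a standard textbook choice, my illustration rather than anything from the post itself), the law of total variance splits the predictive uncertainty of one future flip into an epistemic part that data pays down and an aleatory part that never goes away:

```python
from scipy.stats import beta

def uncertainty_split(heads, tails):
    """Split the predictive variance of one future flip under a
    Beta(1 + heads, 1 + tails) posterior, via the law of total variance:
      Var(flip) = E[theta * (1 - theta)]   (aleatory: irreducible coin noise)
                + Var(theta)               (epistemic: paid down by data)
    """
    posterior = beta(1 + heads, 1 + tails)
    m, v = posterior.mean(), posterior.var()
    aleatory = m - (v + m ** 2)   # E[theta] - E[theta^2] = E[theta(1 - theta)]
    epistemic = v
    return aleatory, epistemic

# Imagine observing 60% heads at every sample size.
for flips in [0, 10, 100, 10_000]:
    heads = int(0.6 * flips)
    a, e = uncertainty_split(heads, flips - heads)
    print(f"{flips:>6} flips: aleatory={a:.4f}, epistemic={e:.4f}")
```

The epistemic column collapses toward zero as data accumulates, while the aleatory column settles near θ(1 − θ): the irreducible floor that the next paragraphs call the raw noise profile.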
You can arguably see the distinction in the statistical concept of Bayesian optimality. For tasks with low aleatory uncertainty (e.g., classification on high-res images), classification performance can approach 100%. But for tasks with higher aleatory uncertainty (e.g., predicting future stock prices), model performance asymptotically approaches a much lower bound.

Recall the Bayesian interpretation of learning: learning is a plausibility calculus, where new data pays down uncertainty. What is uncertainty? Uncertainty is how "loosely held" our beliefs are. The more data we have, the less uncertain we must be, and the sharper the peaks in our belief distribution. We can interpret learning as asymptotic distribution refinement toward some raw noise profile beyond which we cannot reach. Science qua cultural learning, then, is not about certainty, not about facts etched into stone tablets. Rather, science is about painstakingly paying down epistemic uncertainty: sharpening our hypotheses to be "as simple as possible, but no simpler".

Inside vs Outside View

The epistemic/aleatory distinction seems to play an underrated role in forecasting. Consider the inside vs outside view, first popularized by Kahneman & Lovallo (1993):

Two distinct modes of forecasting were applied to the same problem in this incident. The inside view of the problem is the one that all participants adopted. An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, and by extrapolating current trends. The outside view is the one that the curriculum expert was encouraged to adopt. It essentially ignores the details of the case at hand, and involves no attempt at detailed forecasting of the future history of the project. Instead, it focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one. The case at hand is also compared to other members of the class, in an attempt to assess its position in the distribution of outcomes for the class. …

Tetlock (2015) describes how superforecasters tend to start with the outside view:

It's natural to be drawn to the inside view. It's usually concrete and filled with engaging detail we can use to craft a story about what's going on. The outside view is typically abstract, bare, and doesn't lend itself so readily to storytelling. But superforecasters don't bother with any of that, at least not at first.

Suppose I pose to you the following question. "The Renzettis live in a small house at 84 Chestnut Avenue. Frank Renzetti is forty-five and works as a bookkeeper for a moving company. Mary Renzetti is thirty-five and works part-time at a day care. They have one child, Tommy, who is five. Frank's widowed mother, Camila, also lives with the family. Given all that information, how likely is it that the Renzettis have a pet?"

A superforecaster knows to start with the outside view; in this case, the base rates. The first thing they would do is find out what percentage of American households own a pet. Starting from this probability, you can then slowly incorporate the idiosyncrasies of the Renzettis into your answer.

At first, it is very difficult to square this recommendation with how rats learn. This ordering is, in fact, precisely backwards: a rat can only start from its own experience, the inside view. Fortunately, the tension disappears when you remember the human faculty of social learning. In contrast with rats, we don't merely form beliefs from experience; we also ingest mimetic beliefs – those which we directly download from the supermind of culture. The rival fields of personal epistemology and social epistemology are yet another example of controversy reification.
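Tetlock's recipe can be caricatured in a few lines of Bayes: start from the base rate (outside view), then fold in the case's idiosyncrasies (inside view) as likelihood ratios. Every number below is invented for illustration; the pet-ownership base rate and the likelihood ratios are assumptions, not data.

```python
def update_odds(prior_prob, likelihood_ratios):
    """Start from the outside-view base rate, then fold in inside-view
    evidence as likelihood ratios (>1 favors 'has a pet')."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Outside view: assume ~60% of American households own a pet (illustrative).
base_rate = 0.60

# Inside view: invented likelihood ratios for the Renzettis' details.
details = [
    1.3,  # young child in the house
    0.8,  # both parents work
    0.9,  # small house
]
print(update_odds(base_rate, details))  # ~0.58
```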
This, then, is why Tetlock's advice tends to work well in practice [1]: on some occasions, for some topics, humans cannot afford to engage in individual epistemic learning (see the evolution of faith). But for important descriptive matters, it is often advisable to start with a socially accepted position and "mix in" your own personal insights and perspectives (developing the Inside View). When I read complaints about the blind outside view, what I hear is a simple defense of individual learning.

1. Even this individual/social distinction is not quite precise enough. There are, in fact, two forms of social learning. Qualitative social learning is learning from speech generated by others; quantitative social learning is learning from maths and data curated by others. Figuring out how the quantitative/qualitative data intake mechanisms work is left as an exercise to the reader 😉

- Brun et al (2011). Two Dimensions of Uncertainty
- Hsu et al (2005). Neural Systems Responding to Degrees of Uncertainty in Human Decision-Making
- Kahneman & Lovallo (1993). Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking
- Mousavi & Gigerenzer (2017). Heuristics Are Tools for Uncertainty
- Tetlock (2015). Superforecasting
- Weatherson (2003). From Classical to Intuitionistic Probability
A review of Rapid Advance: High Technology in the Global Electronic Age, by Susan K. Mays.

How is a semiconductor like a steam engine? Though the question is reminiscent of a riddle the Mad Hatter might pose over a mug of Darjeeling, Susan Mays suggests in her dissertation that these two technologies performed a similar economic function. Both, she notes, contributed to significant increases in manufacturing output, the establishment of improved distribution networks for goods and information, and the creation of a wide range of new business opportunities. What Watt's engine and the railroad were for the Industrial Revolution, the integrated circuits in our computers and cell phones are for the Information Revolution of the 20th and 21st centuries.

Today the semiconductor industry is synonymous with high-tech innovation, as firms struggle to produce smaller, faster microprocessors by cramming more transistors on to a sliver of silicon. The substantial equipment and human capital requirements associated with such efforts initially ensured the dominance of American firms like Intel and Texas Instruments, but beginning in the 1970s, businesses in Japan, South Korea, and Taiwan established themselves as major players in an increasingly global electronics marketplace. More recently, China has emerged as a key site for integrated circuit design and fabrication, a particularly impressive feat given the institutional legacies of its centrally planned economy.

Mays' dissertation reconstructs the history of the Chinese semiconductor industry from the late 1970s to the mid-2000s, showing how policymakers and integrated circuit manufacturers overcame bureaucratic delays, restrictive trading policies, and a near-total absence of venture capital to secure a 25% share of the global market. Their success reflects China's newfound capacity to supplement traditional state-led development strategies with what Mays terms "enterprise led development" (p. 4) to cultivate an environment more conducive to sustained economic growth. In focusing upon a single segment of the Chinese economy and acknowledging the contributions of both government officials and industry personnel, Mays distinguishes her study from previous discussions of China's state-owned sector. Instead, she embraces an evolutionary economics approach which advocates a bottom-up examination of how people within a specific industry adopt new tools, processes, or organizational forms and learn from those experiences.

In her first chapter, Mays acknowledges that applying this framework to China's semiconductor industry presents several challenges, given its close ties to the defense establishment. Nonetheless, Mays was able to collect more than fifty first-hand accounts from electronics executives and senior engineers. Considered alongside documents uncovered while working at the Wuxi National Integrated Circuit Base Company (WXICC), these oral histories enable her to trace how politicians, corporate managers, and international partners shaped the semiconductor industry in China.

After outlining her methodological approach, Mays devotes her second chapter to preliminary attempts to reform the semiconductor industry. Chinese scientists had persuaded their government of semiconductors' strategic significance during the 1950s, but at the end of the Cultural Revolution only a handful of factories had moved beyond producing discrete circuit components like diodes or transistors.
In the early 1980s, the State Council—China's highest political body—moved to address this technological deficiency. After consulting with industry experts, in 1983 the Council settled upon a consolidation strategy. Instead of distributing resources to dozens of semiconductor facilities scattered across the country, they would instead build two regional "bases" centered around Beijing and the southern city of Wuxi: concentrated geographic areas encouraging the exchange of human and material resources among a handful of firms. For the remainder of the decade, as the government divested from a growing number of semiconductor factories, it also granted the surviving operations a degree of autonomy and limited access to the open market. At the same time, the government remained deeply involved in the management of five key semiconductor enterprises and a new project to establish China's first world-class integrated device manufacturer (I.D.M.) facility in Wuxi, where Mays conducted the bulk of her research.

The I.D.M. initiative, also known as "Project 908", is the subject of Chapter 3. Intended to demonstrate that a Chinese enterprise could oversee all three phases of the semiconductor manufacturing process—design, fabrication, and packaging/assembly/testing (P.A.T.)—Project 908 revealed the challenges confronting state-owned enterprises engaged in large-scale technological projects. The semiconductor firm overseeing the project, Huajing, received approval to begin construction in 1990, but delayed government loans postponed the groundbreaking until 1995. Even if everything had proceeded on schedule, the project would have struggled: Huajing had previously specialized in low-end semiconductors and possessed only limited design capacity.

Although treated as a failure by industry analysts, Mays proposes that Huajing's responses to Project 908's setbacks established a pattern for future semiconductor ventures in China. Recognizing the need for additional capital and technical expertise, for example, Huajing's leaders formed a partnership with CSMC, a Taiwanese semiconductor firm. As part of this reorganization, Huajing abandoned its identity as an I.D.M. and reconfigured itself as a foundry, a standalone fabrication site capable of producing integrated circuit designs submitted by outside customers.

The replacement of I.D.M.s with specialized production facilities (what Mays refers to as "de-verticalization" (p. 30)) and of state-controlled enterprises with foreign partnerships characterized the semiconductor industry in China throughout the 1990s. Chapter 4 explores these trends by highlighting two enterprises that the Chinese government designated as "national champions" during the late 1990s: Huahong-NEC and SMIC. In contrast to national champions in other East Asian countries, these enterprises were not intended to transform China into a self-sufficient semiconductor producer or limit the participation of foreign investors. Rather, the State Council proved extremely open to the prospect of international investment to improve their nation's electronics infrastructure. Consequently, unlike the state-owned Huajing, Huahong-NEC was a Sino-Japanese joint venture from its inception. Moreover, where Huajing and Huahong-NEC both began as I.D.M.s before concentrating on fabrication, SMIC—a wholly foreign-owned enterprise with Taiwanese management—embraced the foundry model from the outset.
Despite protracted negotiations over subsidies and import tariffs, and debates over intellectual property, both enterprises thrived, providing policymakers with valuable insights as they pushed to further integrate the Chinese semiconductor industry into the global market.

With Huahong-NEC and SMIC leading the way, semiconductor production in China boomed at the end of the 1990s, as did the market for consumer electronics. Chapter 5 shows how those firms' experiences prompted the creation of new industry-wide policies that spurred the growth and diversification of Chinese semiconductor enterprises after 2000. As international companies established operations in China to be closer to their customers and benefit from lower labor costs, the Chinese government recognized that one-off ventures such as Project 908 or the national champion enterprises could not address longstanding legal and institutional obstacles that complicated efforts to attract new businesses. To signal its ongoing commitment to semiconductor production, in 2000 China's Ministry of the Information Industry issued a new policy, known as Document 18, which streamlined the process of enterprise formation, provided incentives for foreign investment, and opened new sources of domestic and venture capital. Mays observes that not all of Document 18's provisions, such as its intellectual property guidelines or a value-added tax on imported semiconductors, proved effective, but it did contribute to the proliferation of Chinese fabrication and P.A.T. sites. In addition, it set the stage for a new campaign to cultivate high-end design firms. Still, through top-down policymaking informed by the bottom-up guidance of industry leaders, China was able to become part of the global semiconductor value chain.

Mays concludes her dissertation with a comparative analysis of the Chinese semiconductor industry and its counterparts in Japan, South Korea, and Taiwan. All of these countries, she observes, relied on government policies to foster the growth of high-tech industries, but China was unique in its heavy dependence on foreign investment and the rapid pace of its transition from state-owned enterprises to a wide range of businesses (e.g. joint ventures, foreign companies) participating in the international market. China's relatively late entry into integrated circuit production also enabled it to capitalize upon the industry-wide shift away from I.D.M.s by adopting the foundry model, which Taiwanese firms pioneered in the 1990s. This experience also left China poised to benefit from the ongoing de-verticalization of semiconductor manufacturing, as new software tools accelerated the creation of firms dedicated solely to integrated circuit design.

Whatever course the Chinese semiconductor industry might take, Susan Mays has produced what may well become the definitive history of its late 20th century rise to global prominence. Through her exhaustive investigations of the documentary record and her conversations with industry personnel, she has effectively mapped the changing contours of the institutional and commercial environment within which integrated circuit production occurred in China over the past three decades. More importantly, she has reaffirmed the value of balancing official accounts of those changes with the perspectives of the manufacturing personnel responsible for their implementation.
In this fashion, she has not only provided a valuable contribution to the global history of electronics but also an important methodological model for subsequent historical investigations of technological innovation.

Chemical Heritage Foundation

Primary sources:
- Interviews with key figures in the Chinese semiconductor industry
- Project Records from Project 909 and Huahong-NEC compiled by Hu Qili (head of Huahong-NEC)
- Industry History by Wang Yangyuan (co-founder and former chairman of SMIC)
- Reports from CSIA (China Semiconductor Industry Association) and CCID (China Center for Information Industry Development)
- Archival documents obtained at Wuxi National Integrated Circuit Base Company (WXICC)

Columbia University. 2013. 462 pp. Primary Advisor: Madeleine Zelin.

Image: "Semiconductors, the 'brains' of electronic products and systems." Photo courtesy of the Ridgetop Group.
For businesses, a reliable IT infrastructure is just as integral to day-to-day operations as competent management, productive employees, and reasonable working conditions. A server outage is able to paralyze an entire company. Adequate preparation can help prevent some of the most common errors from occurring; unforeseen risks, however, will always remain a factor. In order to be on top of your...

When it comes to online crime, businesses think primarily of economic espionage, stealing sensitive business data, and violating data protection. But increasing digitalization has meant that online attacks have attained a new level. More and more businesses depend on IT systems that link companies to public networks and provide hackers with opportunities to attack. If a cyber attack causes system failure, this could lead to interruptions that can prove costly. It only takes a few minutes for server failure to cause thousands of dollars' worth of damage. Companies that could experience especially large losses are those whose servers host shopping software or provide a central database. However, server failures aren't just caused by external sources; internal risks can also threaten the operation.

In addition to protecting against external threats and the standard procedures relating to disaster recovery, a solid security concept also includes organizational and staffing measures. Countermeasures are generally based on compensation: technically, this means providing redundant hardware within the context of high availability, or bypassing downtime with standby systems. Data security can be ensured by backup and recovery software as well as by redundant memory architectures. The financial consequences of server failure can potentially be covered by insurance.

- Failure scenarios at a glance
- Consequences of system failure
- Business continuity management (BCM)

Failure scenarios at a glance

Security experts differentiate the causes of server failure between internal and external threats. Internal threats include all scenarios where failures are caused by a company's own IT infrastructure, utilities, or employee error. External threats, on the other hand, are deliberate external attacks or unpredictable events such as accidents or disasters.

Internal threats:

- Fire in the computer center
- Power outage in the computer center
- Hardware failure (hard drive crash, overload, overheating)
- Software error (database failure)
- Network issues
- Human error

External threats:

- Infiltration (man-in-the-middle attack, phishing, social engineering)
- Sabotage (attacks on SCADA systems)
- Viruses, Trojans, and worms
- Distributed denial of service attack (DDoS)
- Hardware theft
- Natural disasters (earthquakes, lightning, flooding)
- Accidents (plane crash)

As a rule, companies find it easier to prepare for internal security risks than external threats. The reason for this is that hackers always adapt their attack patterns to current security standards and continually attack corporate networks with new malicious programs or infiltration strategies. Companies can reduce the risk of internal dangers, on the other hand, through an uninterrupted power supply, fire protection measures, highly available servers, and comprehensive security training.

Consequences of system failure

The financial cost of server failure depends on several factors, including what kind of server is down: an e-mail server, a web server, an analytics server? How long the server was down also plays a part.
If it was only a few minutes, it might not be worth calculating the loss, but for longer periods it might make sense to work it out. If the server was being used by employees, you need to work out how much the employees were paid to effectively do nothing, which obviously depends on their salaries. If the culprit is an e-commerce server, it makes sense to calculate how many orders couldn't be placed during the time the server was down. To do this, look at the time period (e.g. Wednesday 5-7pm) and compare it with how many orders you normally receive during this time. If an e-mail server was down, the cost depends on how much your company relies on e-mail traffic. Customers may be annoyed that they didn't receive quick answers to their queries if they're used to doing so. This could be enough for some customers to stop using your service or buying your products. Don't forget the actual cost of fixing the server. It is of course always a good idea to have proper backups at the ready in case the server does go down.

Whether server failure causes service interruption, and to what extent, depends on the respective industry and the business model. To waste as little money as possible, you could start on other tasks when a server failure prevents you from doing your regular work: call meetings, make phone calls, or bring customer meetings forward. If your central process relies entirely on IT, this could prove a bit more difficult. It costs the company an exceptional amount of money if customers aren't able to place orders, or if a SCADA (supervisory control and data acquisition) system failure paralyzes the production line. When calculating the cost of service interruption, in addition to taking into account the employees' hourly salaries and losses due to fewer or no customer orders, you might also face contractual fines due to delays in delivery times. Your reputation could also be at risk, but it's almost impossible to calculate such a factor.
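This back-of-the-envelope arithmetic is easy to encode. A minimal sketch, with placeholder figures you would replace with your own payroll and order data (reputational damage is deliberately left out since, as noted above, it is almost impossible to quantify):

```python
def downtime_cost(hours_down, idle_employees, avg_hourly_wage,
                  lost_orders, avg_order_value, repair_cost):
    """Rough cost of an outage: idle payroll + lost orders + repair.
    Contractual fines and reputational damage are not modeled here."""
    payroll = hours_down * idle_employees * avg_hourly_wage
    lost_revenue = lost_orders * avg_order_value
    return payroll + lost_revenue + repair_cost

# Example: a 2-hour outage on a Wednesday evening (placeholder numbers).
print(downtime_cost(hours_down=2, idle_employees=40, avg_hourly_wage=35,
                    lost_orders=120, avg_order_value=80, repair_cost=1500))
# 2*40*35 + 120*80 + 1500 = 13,900
```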
In order to counteract server failures, you need to implement some preventative measures. This usually refers to a series of infrastructural and organizational measures when selecting and designing the server room. A lot of helpful information on how to avoid as well as recover from server failure can be found on Oracle.

Fire protection and utilities

In order to prevent server failures due to physical influences such as fires, floods, power failures, or hardware sabotage, server rooms and data centers need to be configured accordingly. The first step is to decide where the server should be located. Basements are definitely not recommended, since there's the danger of them flooding during storms or natural disasters. In addition, access to the premises should be restricted to specialist personnel and be controlled by a security check. Server rooms are not recommended to be used as permanent workplaces.

Fire damage can possibly be prevented by installing fire protection and extinguishing systems. This includes installing fire protection doors, fire detection devices, hand-held fire extinguishers, and automatic fire extinguishing systems (e.g. gas extinguishing systems). Further preventative measures include fire protection requirements for storing combustible materials correctly, using fire-resistant sealing on cables, and using suitable insulation materials for thermal insulation or soundproofing.

Technical equipment converts electrical energy into heat, and the temperature in the server room can also rise due to the sun's rays seeping in. In order to prevent server failures and data errors due to overheating and high humidity, you should use powerful ventilation and cooling systems. Optimal storage conditions for long-term storage media are a temperature of between 68°F and 72°F and around 40% humidity.

The basic prerequisite for smooth server operation is a constant power supply. Interruptions as small as 10 ms can lead to IT malfunctions. It is possible to bridge supply gaps and longer-lasting failures by using standby generators. These enable a self-sufficient operation that is independent of the public electricity network, thereby helping to avoid interruptions in the usual operation.

Medium-sized companies in particular underestimate the impact that IT interruptions have on business operations. A reason for this is the high reliability of the standard components that are now used in corporate IT. Their availability is generally estimated to be 99.9%. This number might seem high, but if the system operates 24 hours daily for a year, it still allows a maximum downtime of almost 9 hours. If this ends up happening exactly during the peak sales period, a relatively short server failure could prove costly to the company. Highly available IT systems with an availability of 99.99% are now the standard when it comes to supplying business-critical data and applications. In this case, a maximum downtime of 52 minutes per year is guaranteed. Some IT experts even believe 99.999% availability is possible, which would mean no more than 5 minutes of downtime annually.

The problem with such availability figures is that they only refer to the reliability of the server hardware. According to the IEEE (Institute of Electrical and Electronics Engineers) definition, a system is highly available if it can ensure the availability of its IT resources despite several server components failing: 'High Availability (HA for short) refers to the availability of resources in a computer system, in the wake of component failures in the system.'

This is achieved, for example, by servers that are completely redundant. All operating components – especially processors, memory chips, and I/O units – are available twice. This prevents a defective component from paralyzing the server, but high availability doesn't protect against fire in the data center, targeted attacks by malicious software, DDoS attacks, sabotage, or being taken over by hackers. When it comes to real operations, entrepreneurs should therefore expect significantly longer downtimes and take appropriate measures to prevent and limit damage.

Other strategies to compensate for the failure of server resources in the data center are based on standby systems and high-availability clusters. Both approaches are based on networks of two or more servers, which together provide more hardware resources than are needed for normal operation. A standby system is a second server that is used to safeguard the primary system as soon as it fails due to a hardware or software error. This service takeover is known as failover and is initiated automatically by cluster manager software without any administrator intervention. A structure like this, consisting of an active and a passive server node, can be considered an asymmetric high-availability cluster. If all nodes in the cluster provide services in normal operation, this is known as a symmetrical structure. Since a time delay occurs when a service is migrated from one server to another, short-term disruption to standby systems and high-availability clusters cannot be completely prevented.
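The downtime figures quoted earlier in this section follow directly from the availability percentages. Here is the arithmetic as a short script, assuming round-the-clock operation over a 365-day year:

```python
def max_downtime_per_year(availability_percent):
    """Convert an availability guarantee into worst-case downtime per year,
    assuming 24/7 operation over 365 days."""
    hours_per_year = 365 * 24
    return hours_per_year * (1 - availability_percent / 100)

for a in [99.9, 99.99, 99.999]:
    h = max_downtime_per_year(a)
    print(f"{a}% availability -> {h:.2f} h/year ({h * 60:.0f} minutes)")
# 99.9%   -> 8.76 h/year (almost 9 hours)
# 99.99%  -> 0.88 h/year (~53 minutes)
# 99.999% -> 0.09 h/year (~5 minutes)
```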
To counter hackers' damaging influence, administrators use various software and hardware solutions that detect, avert, register, and deflect attacks. In order to protect a server against unauthorized access, critical systems are closed off from public networks with firewalls and demilitarized zones. Intrusion detection systems (IDS) enable automated monitoring of servers and networks, alerting users as soon as manual break-in attempts or automated attacks by malicious software are detected: a process based on pattern recognition and statistical analysis. If intrusion prevention systems (IPS) are used, automated countermeasures take place after the alert. These systems are commonly coupled with a firewall so that data packets can be discarded or suspicious connections interrupted. In order to keep hackers away from business-critical IT systems, administrators also use honeypots. These appear to hackers as attractive targets, but run isolated from the productive system and therefore have no influence on its functionality. Honeypots are constantly monitored, enabling admins to respond quickly to break-in attempts and to analyze the attack patterns and strategies used.

Data backup and recovery

In order to be able to quickly restore business-critical data even in the event of server failure, it's recommendable to develop a data protection concept in accordance with international industry standards such as ISO 27001. This regulates who is responsible for the data backup and names the decision makers who can order data recovery. Furthermore, the data backup concept determines when backups have to be created, how many generations should be saved, which storage medium should be used, and whether specific transport modalities are required, such as encryption. In addition, the type of data backup is defined:

- Full data backup: if all the data that you wish to back up is stored on an additional storage system at a certain time, this is referred to as a full data backup. Whether the data has changed since the last backup process is not taken into account in backups like these. A full data backup therefore takes a long time and has a high memory requirement, which is particularly important when several generations are stored in parallel. This type of data backup offers simple and fast data recovery, since only the last backup state has to be reconstructed. However, companies lose this advantage when backups aren't carried out regularly enough. If this is the case, a lot of effort is required to bring subsequently modified files up to the current state.

- Incremental data backup: if companies decide on an incremental backup, the backup includes only those files that have changed since the last backup. This reduces the time needed to perform a backup, and the memory requirements for different generations are also significantly lower than with full data backups. An incremental data backup requires at least one backup generated by a full data backup. In practice, therefore, there are often combinations of both storage strategies: several incremental backups are generated between two full backups. When it comes to data recovery, the last full data backup is used as the basis and supplemented by the data from the incremental storage cycles. As a rule, several data backups must be applied one after the other (a sketch contrasting the recovery chains follows below).
- Differential data backup: a differential data backup is also based on a full data backup. All data which has changed since the last full data backup is backed up. In contrast to the incremental data backup, these backups aren't linked to one another. For data recovery, it's enough to combine the last full data backup with the most recent differential backup.

The storage strategy used in the company depends on the availability you require as well as on economic aspects. Central influencing factors are tolerable recovery times, the frequency and time of the data backup, as well as the relationship between the change volume and the total data volume. If the change volume is small relative to the total data volume, it's possible to save a lot of memory with incremental or differential methods.
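The practical difference between the strategies shows up in what a restore has to replay. A minimal sketch (the weekly schedules are invented examples):

```python
def restore_chain(backups, strategy):
    """Which backups must be applied, in order, to recover the latest state?
    `backups` is a date-ordered list of "full"/"incremental"/"differential" labels."""
    last_full = max(i for i, kind in enumerate(backups) if kind == "full")
    if strategy == "incremental":
        # Last full backup plus every incremental made after it.
        return [i for i in range(last_full, len(backups))
                if i == last_full or backups[i] == "incremental"]
    if strategy == "differential":
        # Last full backup plus only the most recent differential.
        diffs = [i for i in range(last_full + 1, len(backups))
                 if backups[i] == "differential"]
        return [last_full] + diffs[-1:]
    return [last_full]  # a full backup restores on its own

week_incremental = ["full", "incremental", "incremental", "incremental"]
week_differential = ["full", "differential", "differential", "differential"]
print(restore_chain(week_incremental, "incremental"))    # [0, 1, 2, 3]
print(restore_chain(week_differential, "differential"))  # [0, 3]
```

The incremental chain must replay every backup since the last full one, while the differential restore only ever touches two.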
Information security methods can only be established throughout the company if all employees recognize and accept that they are partly responsible for the company's economic success. Security awareness can be raised and maintained if the company provides regular training courses aimed at familiarizing employees with internal and external risks and possible damage scenarios. The basis of systematic training courses are the rules and regulations for handling security-relevant devices, as well as a disaster recovery plan which provides employees with instructions about which steps to take to restore normal operation as quickly as possible. A structured approach to creating appropriate concepts is provided by business continuity management.

Business continuity management (BCM)

In order to minimize damage caused by server failures, companies increasingly invest in preventative measures. The focus is on business continuity management (BCM). In the IT sector, BCM strategies are aimed at counteracting server failures in critical business areas and ensuring immediate recovery if an interruption occurs. Business impact analysis (BIA) is the prerequisite for appropriate emergency management. This helps companies identify critical business processes. A process is classed as critical when a server failure has a significant impact on operation. The BIA concentrates on the consequences of concrete damage scenarios. The causes of server failures, the probability that possible threats might occur, and countermeasures are recorded in the risk analysis. How BIA and risk analysis can be implemented methodologically within the framework of BCM is the substance of various standards and frameworks. The BSI Standard 100-4 is recommended as a detailed guide.

Business Impact Analysis (BIA)

The first step towards comprehensive business continuity management is the business impact analysis. Key questions regarding this analysis are: which systems are the most important in maintaining the core business? What does it mean for the operation if these systems fail? It is advisable to identify the company's most important products and services as well as the underlying IT infrastructure. If a company primarily relies on internet sales, the servers that supply the online store and its associated databases definitely need to be protected. A call center, on the other hand, would classify its telephone system as critical for running its business. The BIA includes a prioritization of the systems that are to be protected, a way of calculating losses, as well as information on which resources are required for system recovery.

A risk analysis within the scope of emergency management enables you to identify internal and external risks that could cause a server to fail and consequently interrupt the operation. The aim is to make any security risks and their causes known and to develop appropriate countermeasures in order to reduce any potential danger. An assessment of the risks can be made on the basis of the expected damage and the likelihood that it will occur. An example of such a risk classification can be found in BSI Standard 100-4; a simplified sketch follows at the end of this article.

Recording the current state

If the risks and damage potential of a server failure scenario have been determined within the framework of the BIA and risk analysis, the third step on the continuation strategy journey is to record the current state. Emergency measures that have already been established, as well as current recovery times, are important for this step. Recording the current state enables companies to estimate the need for action in the case of serious security risks, along with the associated investment costs.

Selecting the continuation strategy

As a rule, there are various strategies for the different internal and external hazards which allow the operation to continue running despite disruptions, or at least promise a speedy recovery. When it comes to business continuity management, it is therefore the decision maker's responsibility to decide on the continuation strategy to be used in an emergency. The decision is based on a cost-benefit analysis that includes key factors such as which financial resources are required, how reliable the solution is, and the estimated recovery time.

There are several solutions available if you want to develop a continuation strategy against a data center fire: minimal solutions include insurance compensation for operational failures and a replacement center with a hosting service provider. It would be more costly to convert the existing server room so that it complies with modern fire protection standards. If larger investments are available, consequential damage can be reduced by building a second, redundant server room. The chosen continuation strategies are defined in the emergency security concept, which contains specific instructions for all relevant emergency scenarios.
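Since the BSI table could not be reproduced here, the following sketch only shows the general shape of such a risk classification: score each damage scenario by likelihood and expected damage, then rank. The four-step scales and the example scenarios are illustrative, loosely in the spirit of BSI Standard 100-4 rather than taken from it.

```python
# Illustrative risk classification: likelihood x damage, both on a 1-4 scale.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}
DAMAGE = {"low": 1, "normal": 2, "high": 3, "catastrophic": 4}

scenarios = [
    ("Hard drive crash",        "likely",   "normal"),
    ("Fire in the server room", "rare",     "catastrophic"),
    ("DDoS attack on web shop", "possible", "high"),
    ("Operator error",          "frequent", "low"),
]

def risk_score(likelihood, damage):
    return LIKELIHOOD[likelihood] * DAMAGE[damage]

# Rank scenarios so countermeasures can be prioritized by risk.
for name, like, dmg in sorted(scenarios,
                              key=lambda s: -risk_score(s[1], s[2])):
    print(f"{risk_score(like, dmg):>2}  {name} ({like}/{dmg})")
```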
By Alejandro Chafuén. Published at: https://reason.com/1987/08/01/what-st-bemardines-ass-could-t/

During the early 1400s, the city of Siena, Italy, was a leading commercial and industrial center, much like its northern neighbor Florence. And in this cradle of capitalism, the most popular figure was a Franciscan friar named Bernardine. His speeches so enraptured listeners that the town's church could not accommodate the crowds, which had to gather in Siena's largest piazza. The noise of the multitude swiftly faded as Bernardine commenced his homily:

"Have you heard the story about the donkey of the three villages? It happened in the Valley of the Moon. There was a large shed close to the windmill. In order to take the grain to the mill, three villages agreed to buy a donkey and keep him in the shed.

"A dweller of the first town went for the donkey, took him to his home, loaded the animal's back with a heavy bag of wheat, and led him to the mill. During the milling, he released the ass so he could graze, but the fields had become barren because of heavy treading. When the wheat was milled, he collected the flour, loaded it on the donkey, and returned home. The man unloaded the ass and brought him to the shed, muttering to himself, 'He who used him yesterday must have given him a lot of grass. Surely, he is in no need now,' and left the donkey.

"The following day, a villager from the second town went for the donkey. He took him to his farm, placed on him a heavier burden than the day before, and—without feeding him—led the animal to the mill. With the milling over and the flour already at home, the villager returned the donkey to the shed thinking that yesterday's user must have treated the animal well. And, yes, he left the donkey, saying, 'Oh, I am very busy today.' Two days had passed, and the donkey still did not have a bite.

"On the third day, someone from the third village arrived for the donkey and burdened him with the heaviest load yet. 'This donkey is owned by the Municipality,' he remarked, 'so it must be strong.' And he took him to the mill. But on the way back, with the wheat already milled, the donkey was sluggish and often halting. The villager had to whip him, and after a strenuous effort, they arrived at the shed. The villager complained, 'What an ass this Municipality bought to serve three towns! He is a piece of trash!' That day also the donkey was not fed.

"Do you want to know how it ended? The fourth day, the poor beast collapsed and was torn to bits."

When the majority of U.S. Catholic bishops voiced their disapproval of the market economy in last year's pastoral letter, they exhibited not only a lack of understanding of how markets work but also an ignorance of their own religious heritage. For Catholic teaching includes a vital, though too often ignored, strain of free-market thought—that of late-medieval theologians like St. Bernardine.

Perhaps St. Bernardine's religious education, with its understanding of human imperfections, explains why he never regarded the authorities or the people as angels. He saw private property as the way to ensure that, in a nonangelical community, goods would be used for the betterment of society. Nor was he alone. During the later Middle Ages, many leading churchmen hailed free-market principles. These were the Scholastics, or Schoolmen, "part-time" priests and full-time academicians who followed the Aristotelian, rationalist tradition of St. Thomas Aquinas. Most Scholastics were, like St.
Bernardine, members of religious orders—Dominicans, Franciscans, Jesuits, or Augustinians—and taught in ecclesiastical schools. Their work concentrated on ethical questions—What is good? What is just?—and their goal was to formulate a corpus of thought applicable to all areas of life. To clarify such issues as whether high taxes are good or bad, for example, they first analyzed the causes and effects of taxation. In answering such questions, the Scholastics contributed to the development of economic knowledge and left behind an intellectual tradition far more compatible with prosperity, freedom, and even virtue than that preferred by too many of today's clerics.

For example, Francisco de Vitoria, a Dominican of the early 1500s, argued that if goods were commonly owned, evil men and even thieves and misers would profit most. They would take more from the common barn and put in less, while good men would do the opposite.

Consistent with their defense of private property, several Schoolmen were strong critics of government abuses and often confronted the authorities. The outspoken Jesuit Juan de Mariana, who lived from 1535 to 1624, is beyond a doubt the best example—his criticisms landed him in jail. In a superb portrayal of bad governments, he described how the "rich and the good" become their prime victims. Tyrants "drain individual treasures. Every day they impose new taxes…. They construct large, monstrous monuments; but at the cost of the riches and over the protests of their subjects."

In 1619, another Scholastic, Pedro Fernández Navarrete, chaplain to the Spanish king, argued that poverty was caused by the government's "great and wasteful spending on nonsensical factories, exquisite banquets…. and continuous spectacles and parties." He criticized the enormous number of bureaucrats "sucking like harpies" on the government's wealth while poor workers could hardly maintain themselves. He concluded that "the only agreeable country is the one where no one is afraid of tax collectors."

Mariana, too, had few qualms about debunking bureaucrats. "We see ministers, recently risen from the dust of the earth, suddenly loaded with a thousand ducats in rent," he wrote. "Where is this money coming from, if it is not from the blood of the poor and the flesh of businessmen?" He foresaw that a huge debt, oppressive taxes, and inflation were the natural outcome of big government. His analysis of how governments inflate their way out of their debts—a process he regarded as "infamous systematic robbery"—would later influence Adam Smith's analysis in The Wealth of Nations in 1776. If Mariana could read the bishops' pastoral letter on the U.S. economy, he would be amazed to see the major cause of poverty (creating dependence on government spending) touted as the solution (more welfare!).

Wages, profits, and rents, the Schoolmen determined, are not for the government to decide. Profits are justified when they are obtained by buying and selling at just prices—market prices arrived at without fraud, force, or monopoly. Duns Scotus, an influential Scholastic theologian who wrote in the late 13th century, had taken a different approach. After demonstrating the usefulness of merchants and businessmen, he recommended that the good prince take steps to ensure adequate prices to cover both their costs and their risks. In response, most Late Scholastics agreed that, while it is legitimate for manufacturers and tradesmen to earn a profit, it is impossible to establish an absolute level of the "just profit." St.
Bernardine, for instance, cited the example of a merchant who buys a product in a province where its price was 100 and takes it to another province, where the current price is 200 or 300. "You can legally sell at that price which is current in that community," he declared. In the opposite case of buying at 100, then finding that the price has dropped to 50, St. Bernardine recognized that "it is the nature of business that sometimes you win and sometimes you lose."

Actions such as Lee Iacocca's or the semiconductor industry's requests for help from the government when their businesses are in danger would have been challenged by many Scholastic moralists. Juan de Mariana, for one, argued that entrepreneurs who, when confronted with losses, "cling to the magistrates as a shipwrecked person to a rock, and attempt to alleviate their difficulties at the cost of the state are the most pernicious of men…[and] must be rejected and avoided with extreme care."

Moralists though they were, the Scholastics extended their economic principles to practices they themselves thought immoral. Several Schoolmen concluded, in fact, that sinful or ignoble activities may be marketable and that those who were promised a reward for such activities are entitled to it and can even claim it in court. One of the most colorful issues the Scholastics explored is whether a prostitute is entitled to keep the payments for her services. Their answer was cautious. As moralists, they condemned the act of prostitution. But they stated that such women do have the right to receive monetary compensation for their services. This attitude toward immoral acts put into practice Aquinas's principle that not every prohibition or recommendation of moral law needs a temporal law to enforce it. St. Antonio of Florence, a 15th-century Dominican, noted that many sinful contracts are permitted for the good of the republic—although this does not mean that the acts are good. Prostitutes sin by prostituting themselves, he said, but not by receiving payment for doing so. And, reasoned the Jesuit Antonio de Escobar a century later, although the sale of a prostitute's favors is evil, it causes pleasure, and things that cause pleasure merit a price. Furthermore, a prostitute's fee is freely rendered—no one can claim to be forced to go to a brothel. Noting that most other Scholastic authors shared this conclusion, Escobar stated that we must reason in the same way when analyzing other types of profit obtained without fraud, lies, or extortion.

This leads me to reflect upon the tragedy of drug abuse. I can only speculate that, confronted with the issue, these Scholastics would first explain that the abuse of chemicals can be poisonous and therefore should not be done, then proceed to ask the following questions: Should we ban the sale of poisons? If we ban the sale of dangerous drugs, would that prevent people from acquiring them? Who would profit from such prohibition? They would then proceed to recommend courses of action consistent not only with their belief in the sacredness of the human body but also with the conclusions of rational analysis.

As moralists, the Schoolmen were concerned with the question of how man should act. As economists, they understood that a "means" is that which serves the attainment of a goal and that the only way to judge the means is to see whether or not it is suitable to attain the end. Thus, when they opposed mandatory "family wages," it was not because they lacked concern for the family.
Rather, they saw that, from a legal and economic point of view, "need" could not be considered the basis for salaries. When they affirmed that prostitutes had a right to claim the agreed-upon price, they were not condoning immorality—they were stating that society would be impossible if the attempt were made to outlaw all vices.

Civil authorities, they said, should endeavor to balance budgets, cut spending, reduce subsidies, and encourage development by keeping taxes moderate. Navarrete, perhaps the original "supply sider," realized that excessive taxation could reduce the king's income, as few people would be able to pay such high rates. The Late Scholastics opposed price controls on wheat because, as the Jesuit Luis de Molina wrote, "we know that in times of scarcity the poor can rarely buy the wheat at the official price. On the contrary, the only ones who can are the powerful and the public ministers, because the sellers cannot resist their requests." And they opposed import duties on food because they reduced the standard of living of the poor.

Today, when the church has again joined the economic debate, one of the few authoritative voices heard in the Vatican pleading for free markets is that of Cardinal Joseph Höffner, the Archbishop of Cologne and president of the German Bishops Conference and, not surprisingly, an expert on Scholastic economics. But the importance of the Scholastics extends beyond the church. F.A. Hayek, the Nobel laureate economist, has suggested that they can be considered the founders of modern free-market thought. All those concerned with the moral foundations of a free society can benefit from the teachings of these proficient theologians.

Alejandro A. Chafuén holds a doctorate in economics from the International College of California. A graduate in economics (UCA), he is a member of the committee of advisors for The Center for Vision & Values, a trustee of Grove City College, and president of the Atlas Economic Research Foundation. He has served as a trustee of the Fraser Institute since 1991. He was a professor at ESEADE. Follow him at @Chafuen
Does your garden look a bit gloomy? If it does, you should think about growing black-eyed Susan plants in the garden. The yellow flowers of these plants will create a cheerful garden. They look beautiful and quite enchanting when the flowers cover a large area. Moreover, one of the best things about these plants is that they are pretty easy to grow.

- 1 What is Black-Eyed Susan?
- 2 Growing Conditions for Black-Eyed Susan
- 3 Cultivars
- 4 How to Grow Black-Eyed Susan from Seeds
- 5 Caring for the Plants
- 6 Dealing with Pests and Diseases
- 7 Benefits of Growing Black-Eyed Susan

What is Black-Eyed Susan?

Sunny, happy, and carefree are some of the attributes that you will see in black-eyed Susan flowers. The plants that bear these flowers are scientifically named Rudbeckia hirta. Some people also call them brown-eyed daisy or gloriosa daisy. They are indigenous to North America and grow mostly in USDA zones 3 to 9. These are some facts about this plant:

1. It is a wildflower

Many gardeners consider Rudbeckia a wildflower since it is native to their area. In addition, these plants can be quite aggressive. If you do not take care of them properly, they may invade your garden and cause other plants to suffer from a lack of nutrition.

2. Annual, biennial, and perennial plants

Some of the Rudbeckia cultivars are annual plants. However, you can also find biennial or perennial Rudbeckia. In consequence, you can find ones that suit your gardening style. If you don't want to sow Rudbeckia seeds every year, you can choose a perennial variety. However, if you love to have different flowers each year in your garden, then you should choose the annual ones.

3. Propagation

This flowering plant is mostly propagated from seeds. However, you can also transplant divided plants into new plants. The seeds of Rudbeckia can take a long time to germinate. However, when they germinate successfully, the sprouts will grow into strong and easy-to-maintain plants.

4. Plant description

Black-eyed Susan plants can grow up to 0.45 m wide and 0.9 m tall or even taller. Their leaves are hairy and medium-sized, between 10 cm and 18 cm long. The plants bloom between late summer and early fall. The blooms are daisy-like with bright petals and a dark brown center. At a glance, they look like miniature sunflowers. The Rudbeckia flowers are about 7 to 10 cm wide. Most of these flowers are yellow. However, certain cultivars will give you orange, golden, and red flowers. It doesn't take long for you to enjoy this beauty, since Rudbeckia will bloom in the first year.

Growing Conditions for Black-Eyed Susan

Rudbeckia plants are easy to grow. However, they will not grow well if you do not plant them in the right environment.

These flowering plants are well adapted to various types of soil. You can grow them in sandy, loamy, or clay soil. However, they will have difficulty growing if the soil is poor. Therefore, you need to make sure that the soil in your garden is rich in organic material. In addition, the soil must be well drained and neutral (pH level of 7).

Black-eyed Susan plants tend to love warm temperatures. The seeds germinate best at a temperature of 21°C. At this temperature, they will sprout roots in 1 to 2 weeks. However, in lower soil temperatures, it can take up to 30 days for them to germinate. Therefore, it is better to start the seeds between March and May.

The best place to grow Rudbeckia is a sunny spot. Getting 6 hours of sun is important for the plants to grow. Sun will also speed up the germination process.
These flowering plants can adapt to partial sun. However, they may bloom less in this kind of environment.

Some of these flowering plants do not tolerate drought very well. They require moist soil to thrive. However, you need to make sure that it is not too wet. Black-eyed Susan that grows in a damp environment will have a higher risk of suffering from plant diseases or being affected by pests.

Cultivars

There are various cultivars of Rudbeckia that you can choose:

1. Sonora

This dwarf type of black-eyed Susan is perfect for those who have limited space in the garden. Rudbeckia hirta Sonora is only 30 cm tall. Most importantly, it has stunning flowers: the center is dark, and the petals are yellow with a reddish brown inner ring. This short-lived perennial will come back each year with those wonderful flowers. Sonora grows best in USDA hardiness zones 4 to 9. Moreover, it is drought tolerant, so it is okay for the soil to be dry in between waterings. Sonora needs full sun to thrive. In winter, its roots need some protection from the cold. The best way to protect them is to mulch the soil in the fall.

2. Irish eyes

This cultivar grows up to 0.75 m tall and offers attractive flowers. They have yellow petals and a green eye; that's why it is called Irish eyes. This perennial Rudbeckia can survive the winter in USDA zones 4 to 9. If you want to have it in your garden, plant it in a sunny spot and water it once a week.

3. Cherokee sunset

This annual black-eyed Susan is almost the same size as Irish eyes. However, this plant has unusual flowers, unlike common black-eyed Susan. The flowers look like dahlias with orange and yellow petals and a dark center. You can see the plant flowering from early summer until the first frost. Cherokee sunset must be planted in full sun. In addition, it is also drought tolerant. Though it is not perennial, you will likely watch the plant grow in your garden each year without sowing the seeds. That is because Cherokee sunset will self-seed if you let the seeds dry on the plant.

4. Indian summer

This type of Rudbeckia is commonly used in landscaping. Its flowers bloom in the first year. They are bright yellow with a dark center. You can grow Indian summer as a short-lived perennial if you live in USDA zones 3 to 7. However, you can also grow it as an annual. This plant is drought tolerant once established. It can stand heat very well too. Indian summer grows well in a sunny spot. Moreover, if you grow these flowering plants in your garden, you need to space them properly.

5. Toto

This compact cultivar is good for a small garden. It is about 30 to 60 cm high and 30 cm wide. You can even try to grow it in a container. If you grow it in a pot, you must provide a large enough pot because the roots can grow quite long. Rudbeckia Toto requires full sun, moderate watering, and moist, well-drained soil. Once established, the plant can tolerate drought very well. Toto has golden yellow flowers with a chocolate cone. They are about 8 cm in diameter. Similar to other cultivars, Toto blooms between the summer and the first frost. It usually offers so many flowers that you can hardly see its foliage. For best results, you should grow it in hardiness zones 5 to 8.

6. Maya

If you want unique black-eyed Susan flowers, you must go for Rudbeckia Maya. The shape of these flowers is almost similar to mums. They have golden yellow frilly petals that are arranged in layers. The plant is quite compact since it only reaches 45 cm tall.
Another good thing about Rudbeckia Maya is that it is resistant to diseases. It is also tolerant of dry environments once established. This cultivar flowers from midsummer to fall. In the right environment, Maya grows as a biennial, though it sometimes grows as an annual and self-seeds easily.

7. Cherry brandy

If you want to create a more vibrant garden, grow Rudbeckia Cherry brandy. The flowers of this cultivar have vibrant red petals with a dark cone. The plant blooms numerous flowers continuously from early summer to early fall. Cherry brandy is tolerant of dry environments and requires weekly watering. This cultivar grows well in hardiness zones 4 to 7. A short-lived perennial, it can grow up to 60 cm tall.

How to Grow Black-Eyed Susan from Seeds

Some gardeners recommend growing this flowering plant directly in the ground, because some cultivars cannot grow well in containers due to the size and character of their roots. However, you can choose the dwarf varieties to grow in containers; for this kind of planting, choose a container that is wide and deep. To grow black-eyed Susan, you need to perform these steps:

1. Preparing the planting spot

First, choose a sunny spot to grow this flowering plant. After that, loosen the soil and add compost or another type of fertilizer. If you want to grow black-eyed Susan in containers, you will first need containers, high-quality potting mix, slow-release fertilizer, and gravel. The containers must have drainage holes and be big enough for the plant. Add a layer of gravel at the bottom of the pots, fill the pots with the potting mix, and add the slow-release fertilizer to the mix.

2. Planting the seeds

Once the site or container is ready, you can start planting the Rudbeckia seeds. Do not plant them too deep, because they require sun to germinate. Place the seeds on the surface of the soil and cover them loosely with soil. Keep the soil moist until the seeds germinate.

3. Thin the seedlings

In about a month or less, you will see the seedlings appear. Thin the seedlings when they are about 5 cm tall, so that they stand 30 cm apart, removing the weakest ones. Thinning is essential when growing Rudbeckia because the plants need excellent air circulation.

Caring for the Plants

Black-eyed Susan requires little maintenance. Here is what you need to do to care for the plant.

1. Deadheading the flowers

If you want Rudbeckia to bloom continuously until fall, deadhead the flowers once a week. When you see flowers starting to die off, cut them immediately. If your black-eyed Susan grows only one flower per stem, cut the stem close to the base; if there are multiple flowers on one stem, cut only the spent flowers.

2. Weekly watering

Most Rudbeckia varieties require moderate watering. They love moist but not wet soil, so water the plant once a week. If the weather is hot and the soil is dry, water more often. Rudbeckia grown in a pot also needs more frequent watering.

3. Removing the seed heads

This flowering plant self-seeds easily. If you want to curb its spread next spring, remove the seed heads before they are completely dry.
That way, there will be no seeds left in the soil. If you want the plants again next year, however, you don't need to remove the seed heads.

4. Dividing the plants

If you grow perennial Rudbeckia, it is essential to divide the plants every 3 years; otherwise, they will produce fewer blooms in later years. Root division must be done in early fall or in spring. Prepare a new planting site before you divide the plants. Once it is ready, dig the soil around the plants and remove them carefully. Next, clean the soil from the roots. Choose a division with at least 3 healthy shoots and plant it in the new site.

Dealing with Pests and Diseases

Rudbeckia is prone to fungal diseases such as powdery mildew and leaf spot. To prevent fungal infection, apply fungicide early in the season, and make sure the soil is not wet and the plants have good air circulation. Moreover, slugs sometimes eat the base of Rudbeckia; they attack when the soil is wet, so be careful when watering. Keeping the soil moist is important, but you must not let it become wet.

Benefits of Growing Black-Eyed Susan

1. Attract pollinators

Rudbeckia flowers attract pollinators. When they bloom, butterflies, bees, and other pollinators will come to the flowers, so it is a great idea to plant Rudbeckia on the borders of your kitchen garden. The pollinators these flowers attract will help pollinate your other plants.

2. Excellent cut flowers

The flowers of this plant make excellent cut flowers. They are beautiful and cheerful, and arranging them in a vase with other flowers will make your interior look more attractive. Another good thing about the flowers is that they can last up to a week in the vase.

3. Black-eyed Susan root tea

Herbalists believe that tea made from dried black-eyed Susan root helps the immune system, and that drinking it can help fight common colds. It has also traditionally been used for cuts, dropsy, and earache.

Black-eyed Susan may be a native wildflower, but its beauty will light up your garden from summer to fall. To make your garden even more attractive, grow a combination of several Rudbeckia varieties with different flower colors.
Plasma is the fluid part of blood and makes up the bulk of its volume. It contains substances that can be used to treat a number of different conditions.

Blood is made up of four separate components, each of which performs a different function:

- red blood cells - carry oxygen around the body and remove carbon dioxide
- white blood cells - help the body fight infection
- platelets - tiny cells that trigger the process that causes the blood to clot (thicken)
- plasma - yellow fluid that transports blood cells and platelets around the body and contains a number of substances, including proteins

What is plasma?

Plasma is the largest component of blood, making up about 55% of its overall content. It's mainly made of water and surrounds the blood cells, carrying them around the body. Plasma helps maintain blood pressure and regulates body temperature. It contains a complex mix of substances used by the body to perform important functions, including minerals, salts, hormones and proteins. Three important proteins found in plasma are albumin, clotting (coagulation) factors and immunoglobulins.

How plasma components are used

Plasma components can be used in a number of different ways, depending on the condition they're being used to treat. The two main methods of using plasma are:

- fresh frozen plasma transfusion - where plasma is separated from donated blood and frozen until needed; it's then thawed under controlled conditions and transfused to the recipient
- plasma exchange - where a special machine is used to remove plasma from the patient's blood; it's then replaced with a substitute plasma component from donors

The following plasma components can be used to treat a variety of different conditions.

Albumin

Albumin cleans the blood, carries substances around the body, and helps maintain the correct amount of fluid circulating in the body. Human albumin solution can be used as a treatment to help people with severe burns, sepsis, liver disease or kidney disease.

Clotting (coagulation) factors

Clotting substances called clotting factors help control bleeding and work together with platelets to ensure the blood clots effectively. Fresh frozen plasma and clotting factors can be used to treat bleeding disorders. For example, they can be used in severe injuries when there's a lot of bleeding. In the UK, specific clotting factors are also used to treat haemophilia, an inherited condition that affects the blood's ability to clot.

Immunoglobulins

Immunoglobulins are part of the immune system (the body's natural defence against infection and illness). They are antibodies that the body produces to fight a variety of infections. For example, they're used to fight health conditions such as:

- chickenpox - a serious but usually short-lived viral infection
- hepatitis - a viral infection that causes the liver to become inflamed (swollen)
- rabies - a very rare infection of the central nervous system that's passed on to humans from infected animals

Normal human immunoglobulins can be used to support people who have conditions where their immune system has difficulty producing antibodies.

Plasma is the source of anti-D immunoglobulin, a substance often given by injection to pregnant women with a rhesus-negative blood group (RhD negative) whose unborn baby may have a rhesus-positive blood group (RhD positive). This treatment prevents the mother becoming sensitised to the baby's blood and stops immune anti-D developing. Immune anti-D can cause rhesus disease in subsequent pregnancies, which is a potentially fatal condition.
Risks of using plasma components

Some people can experience problems related to a plasma transfusion. These can vary in severity, from a slight increase in temperature to the development of a serious condition called variant Creutzfeldt-Jakob disease (vCJD) in very rare cases.

How plasma and plasma components are used

Plasma and plasma components can be removed from blood in a number of different ways, allowing them to be used to treat a variety of conditions. The main ways plasma is used include fresh frozen plasma transfusion and plasma exchange (plasmapheresis). Plasma components can be used to help prevent health problems occurring in conditions such as rhesus disease (where antibodies in a pregnant woman's blood destroy her baby's blood cells). They can also be used to prevent bleeding in people with haemophilia (an inherited condition that affects the blood's ability to clot).

Fresh frozen plasma

To obtain plasma for transfusion, a donation containing all the components of blood (whole blood), including plasma, is taken from one person. The plasma is separated from the red cells and frozen, becoming fresh frozen plasma. When needed, it's thawed and given as a transfusion to another person. For example, a person may be given a plasma transfusion if they're bleeding after a serious accident or major surgery, where clotting factors need to be replaced in addition to red blood cells.

Before someone is able to donate blood for transfusion, they have to comply with a strict set of guidelines about their medical, travel and sexual history. This ensures it's safe for them to donate and that their blood is safe to be transfused. As with red blood cells, plasma is always checked for viruses to make sure it's as safe as possible to use. Most people receiving plasma receive fresh frozen plasma. This is stored frozen at -25°C for up to three years, so it needs to be carefully thawed before use.

Pathogen-inactivated fresh frozen plasma

Plasma transfused to people born after January 1 1996 comes from donors outside the UK to reduce the risk of variant Creutzfeldt-Jakob disease (vCJD), a rare and fatal brain condition. There are two products available. One is produced by UK blood services and has been treated with a chemical called methylene blue (an additional step to make plasma safer). The alternative is a batched mixed plasma that's been treated with solvent detergent to make it safer.

Cryoprecipitate

Cryoprecipitate is plasma that's been specially treated so it's rich in certain proteins, including fibrinogen (a special protein that helps blood clot).

Plasma products made by fractionation

Many of the components found in plasma can be separated and removed so they can be used to treat specific problems. Some plasma donations are mixed (pooled) and subjected to a number of different heat and chemical treatments. The various proteins are then separated out in a complex process known as fractionation. All blood donations used to make plasma pools for fractionation have to be checked for viruses to make sure they're as safe as possible to use. The pooled plasma is also carefully filtered and "cleaned" using heat, detergents and solvents to remove any viruses that may be present. After the fractionation process has been completed, the plasma products are either kept as a liquid or freeze-dried as a powder for reconstitution before use. They're then packaged, ready for distribution to clinics, surgeries and hospitals.
There are numerous plasma components, but the three main ones are:

- human albumin solution
- clotting (coagulation) factors
- normal human immunoglobulin

Plasma exchange

Plasma exchange, also known as plasmapheresis, is a procedure where a machine called a cell separator is used to separate plasma from the other components of a person's blood. During the procedure, the plasma is removed and replaced with a substitute (usually human albumin solution), and the red cells, white cells and platelets are returned to the patient. Plasma exchange is often used to treat rare blood conditions. Some of these are briefly outlined below.

Thrombotic thrombocytopenic purpura

Thrombotic thrombocytopenic purpura is a rare clotting disorder affecting the platelets, in which microscopic blood clots form and damage organs and red blood cells. Plasma exchange separates and removes the plasma from the rest of your blood and replaces it with solvent detergent plasma. This replenishes levels of a vital enzyme that controls platelet clotting and removes the antibodies responsible for the condition.

Multiple myeloma and Waldenström's macroglobulinaemia

Multiple myeloma and Waldenström's macroglobulinaemia are both rare types of bone marrow cancer where abnormal bone marrow cells create large amounts of a protein called a paraprotein (immunoglobulin). If the protein levels in the blood become too high, the blood can thicken, which is known as hyperviscosity; this can cause a range of symptoms. Plasma exchange reduces the amount of abnormal protein in the blood, which helps relieve the symptoms. However, the process does not prevent the production of immunoglobulin. Other treatments, such as chemotherapy, may be required to achieve this.

Plasma exchange procedure

During plasma exchange, a machine called a cell separator is used to separate the plasma from the rest of your blood. A needle is inserted into a vein in the arm and the blood is removed and passed through the cell separator. The plasma is separated from the rest of your blood and a plasma substitute is added before the blood is returned through a needle in a vein in the other arm. Plasma exchange takes about two hours to perform. During the process, only a small amount of blood (less than 100ml) will be outside the body at any one time, because the blood is being removed and returned at the same rate. The amount of plasma exchanged will depend on factors such as how viscous (thick) your blood is. The number of plasma exchanges needed will depend on your symptoms and how well you are responding to your other treatments.

Feeling faint or lightheaded is a possible side effect of a plasma exchange. If you feel faint or lightheaded, tell the healthcare professional treating you. The symptoms can usually be treated effectively by changing to a lying-down position, and making sure you have something to eat on the day of the procedure will also help prevent them.

During a plasma exchange, you may also experience numbness or a tingling sensation around your nose and mouth and in your fingers. This is caused by a substance called citrate, which is added to your blood as it goes through the machine to prevent it clotting. The citrate may affect the levels of calcium in your blood. Let the healthcare professional treating you know if you experience numbness or tingling sensations. They may stop the plasma exchange for a few minutes until your body adjusts to the increased citrate levels in your blood, or they may increase the level of calcium in your blood.
Adverse reactions to plasma components

Plasma components can save lives, but their use isn't without risk. In some cases there can be adverse reactions, which can differ in severity. Adverse reactions that you could experience after having a plasma transfusion include:

- a slight rise in temperature
- itching and sometimes a rash (hives) – this can occur within a few minutes of starting a plasma transfusion, but can usually be cured by slowing down the rate of transfusion or by taking an antihistamine (medication to treat mild allergic reactions)
- anaphylaxis – a rare but life-threatening allergic reaction

The risk of developing an infection after receiving plasma is very small. All blood donations used to make plasma are carefully screened for viruses to make sure they're safe. However, as is the case with most medical procedures, there are possible risks associated with receiving plasma. Some of these are outlined below.

Transfusion-related acute lung injury (TRALI)

Transfusion-related acute lung injury (TRALI) is a reaction that can occasionally occur in someone who receives a plasma transfusion. During or shortly after the transfusion the person will have breathing difficulties, which can sometimes be severe. The reaction is thought to occur because the donated plasma contains antibodies (proteins produced by the donor's immune system), called HLA antibodies, that react with your white blood cells. This occurs more often when plasma has been donated by a female donor who's had pregnancies in the past and whose immune system produced the antibodies as a response to pregnancy. Antibodies are usually produced by the immune system to fight organisms in the blood that the body regards as "foreign", such as bacteria. However, in pregnancy they have a protective role. To minimise the risk of TRALI, plasma from male donors is usually used to make fresh frozen plasma and other plasma-containing blood components and products used for transfusion. Fractionated plasma products don't cause TRALI.

Variant Creutzfeldt-Jakob disease (vCJD)

Variant Creutzfeldt-Jakob disease (vCJD) is the human form of bovine spongiform encephalopathy (BSE), commonly known as mad cow disease. First identified in 1996, vCJD is a rare neurological illness that causes brain damage. It occurs as a result of eating the meat of cattle infected with BSE. The risk of developing vCJD after having a blood transfusion is very small, but there's currently no test available to screen donated blood for the prion protein that causes vCJD. Each year in England, approximately 2 million units of blood are transfused. To date, there have only been a few cases where patients are known to have become infected with vCJD after having a blood transfusion. A number of precautions are taken to reduce this risk, including:

- removing all white cells by filtering cellular blood components (red blood cells and platelets)
- importing fresh frozen plasma from countries where there have been no cases of vCJD for transfusion to those born on or after January 1 1996
- using pooled plasma for fractionation from countries where there have been no cases of vCJD, and using recombinant clotting factors (produced in a laboratory using DNA technology) for treating people with haemophilia, where these products are available
- only using plasma transfusions when absolutely necessary
The alarm goes off, and we wake up to a brand new day. We should be grateful and happy because we are alive! We have been given another day. Right? Well, if you are like me, happiness is not exactly the first feeling I have when I wake up. But, if we think about it, every morning when we open our eyes, we should imagine God opening the door to a brand new day and telling us: "Go and Enjoy."

Well, ideally, that is the way it should be. However, we all know life is not easy and that some days look more like our worst enemy opening the same door and saying in a sarcastic tone: "Go and Enjoy, muahaha, muhaha." Let's face it. It is a reality. We all have good days and bad days, or maybe terrible days, where we think the word "enjoy" doesn't fit anywhere. The good news is that even on those terrible days, we can still have something to enjoy, and that is what we need to find and focus on, always.

What is the Meaning of the Word "Enjoy"?

After reviewing the definition in several dictionaries, I decided to go with the one I found on learnersdictionary.com:

- To take pleasure in something
- To have or experience something good
- To have a good time

So, How do We Enjoy Something?

Reflecting on the definition, the word "enjoy" can relate to many things:

- Things we do: everything related to action words like traveling, playing, eating, sleeping, talking, listening, watching, working, helping, exercising, etc. For example, I enjoy traveling, or I enjoy drinking a glass of wine.
- Things we have: usually material things like a house, a boat, a car, etc. For example, I enjoy my house because it has everything I need.
- Experiences we have: moments that involve feelings, like celebrations, achievements, relationships, etc. For example, I enjoy being with my family; I enjoy when I accomplish something.

As we can see, enjoyment comes from something we like. Therefore, having someone who tells us "go and enjoy" is the best thing that can happen to us. Isn't it? Just think about how you would feel if your boss, your husband or wife, or your kids told you every day: "go and enjoy." Wouldn't that be great? So, if this "enjoy thing" is so good, why don't we do it more often?

What Keeps Us From Enjoying?

I think there is only one answer to this question, and that is: ourselves. Yes, we are the only ones who keep ourselves from enjoying the things we do, the things we have, moments, experiences, etc. Why? Well, that is the one-million-dollar question.

I am a mother of teenagers, and like every teenager, their mood is like a roller coaster. That means they can enjoy something one minute, and the next minute the enjoyment disappears without any explanation. Why? Doctors attribute this to hormones, so let's say they have an excuse. Now, what about us, adult people; what is our excuse? I tried to find an answer, and everything I read led me to conclude that unless we suffer from depression or a major physical or mental illness, we are all capable of enjoying things, moments and experiences. It is only up to us to decide to enjoy each of them, and we should not have any excuse for not doing it.

12 Tips to Enjoy More

- Let's start by looking at our day as moments, not as a 24-hour day. Trying to enjoy our whole day is probably impossible. Our days are usually like a teenager's mood: there are a lot of ups and downs in one day. So let's focus on enjoying the up moments we have throughout our day.
- Enjoy little things as well as big things. Even better, let's enjoy the little things as if they were big things.
- Be aware and conscious of what we are enjoying. If we are not conscious that we are enjoying something, we cannot enjoy it. Just as I explained in my post "Awareness is the Key to Everything," we need to know that something exists so we can do something about it.
- Live in the "Here and Now." We need to live in the moment. If we are distracted when we do something we enjoy, then we cannot enjoy it. We need to have our mind on what we are doing so we can enjoy it.
- Slow down. Being in a rush keeps us from enjoying whatever we are doing. So let's practice taking our time whenever we are enjoying something. It is not the same to drink our morning coffee in a hurry while walking to our car as to take the time to drink our coffee and savor it sitting in our kitchen with our spouse. Have you heard about the Slow Movement? Take a look at this interesting approach that talks about "time poverty."
- Make a list of all the things we enjoy. We can separate them into things we like to do, things we have and the experiences we like. Let's add little things like having a coffee in the morning and big things like traveling. This list will help us be aware of these things when they happen.
- Every time we are doing something, let's ask ourselves: "Do I enjoy doing this?" This question will help us become conscious of the things we enjoy. If our answer is yes, then we know what to do: enjoy it.
- Do more of what we enjoy. This means being proactive and adding more things that we enjoy to our day, not waiting for them to happen.
- Set achievable goals for each day. Every one of us enjoys the moment when we reach a goal. For example, one of my favorite things to do is to scratch something off my to-do list.
- Share what we enjoy with others. Most of the time, telling others about something we enjoy makes us enjoy it again. It is like when we tell someone about a trip we took: telling the story makes us enjoy the whole trip again.
- When reflecting on our day, or when someone asks us about it, let's try to mention all the things we enjoyed. This will make us focus on what we enjoy and on the positives of our day.
- Try new things, as I suggest in my post "Say Yes to an Adventurous Life." By being more adventurous, we can discover new things to enjoy.

Oh... The Things I Enjoy:

While writing this post, I decided to make my own list of the things I enjoy the most. For 5 minutes, I tried to put down everything I remembered: big things, little things and crazy things too. I came up with a long list of 84 things, and I know I have plenty more. I will only share a few here to help you think about the things you enjoy:

- Sleeping late and staying in bed for a while after the alarm goes off
- Spending time with my kids, when their mood is up, of course. Listening to their stories and silly things and watching them smile is extremely enjoyable.
- Eating a vanilla ice cream cone every Sunday after mass
- Working on my blog: this is very new to me, but I enjoy it because it has everything I like to do: creativity, design, reading and listening to topics I like, learning new things, sharing my ideas, etc.
- Going for a walk with my husband: I enjoy that time because we talk about anything and everything. It is a great time to connect.
- The beach or the lake with my family, friends and a cold beer in my hand

The One Thing I Enjoy that I Would Like to Do Every Day

After reviewing my list, I realized that there are a lot of things I enjoy that are easy to do frequently, but for some reason, I don't do them. One of them is reading a good book. From now on, I will make an effort to get back into the habit of reading.

To enjoy our life more, the first thing we have to do is make the decision, every morning, to open the door of a new day and tell ourselves: "Let's Go and Enjoy my Day." Making this important decision can change the outcome of our days. Remember, "Enjoy" is all about good things and good times, and it is essential for a happy and healthy life. When we put the word "Enjoy" into action, we get:

- More energy
- The wish to share
- To be more grateful
- To be more productive
- To reduce our stress level
- Good moments and memories
- To smile often
- To make others smile

As I said before, we all have good days and bad days, but even on those bad days, almost always, we can have things, moments, and experiences that we can enjoy. By living in the here and now and being conscious of what we enjoy, we will be able to add more of those things we enjoy to our days.

I have two questions for you:

- What are a few things you enjoy the most?
- What is that one thing you enjoy that can easily be added to your every day?

Please share your answers. Below is a link to a TEDx Talk about Slowness and the Slow Movement that I really enjoyed. It made me think about the importance of slowing down to enjoy life. I will be sharing things I enjoy on Instagram and Facebook. If you are curious, check them out!
Dr. James Hildreth, PhD, MD, proposed that "the virus is fully an exosome in every sense of the word."

What Are Exosomes?

Exosomes are membrane-bound extracellular vesicles (EVs) that are produced in the endosomal compartment of most eukaryotic cells. The multivesicular body (MVB) is an endosome defined by intraluminal vesicles (ILVs) that bud inward into the endosomal lumen. If the MVB fuses with the cell surface (the plasma membrane), these ILVs are released as exosomes. In multicellular organisms, exosomes and other EVs are present in tissues and can also be found in biological fluids including blood, urine, and cerebrospinal fluid. They are also released in vitro by cultured cells into their growth medium. Since the size of exosomes is limited by that of the parent MVB, exosomes are generally thought to be smaller than most other EVs, from about 30 to 150 nanometres (nm) in diameter: around the same size as many lipoproteins but much smaller than cells. Compared with EVs in general, it is unclear whether exosomes have unique characteristics or functions or can be separated or distinguished effectively from other EVs. EVs including exosomes carry markers of their cells of origin and have specialized functions in physiological processes, from coagulation and intercellular signaling to acidic waste management of the intravascular and interstitial fluids of the Interstitium – the largest organ of the human body.

Are Exosomes Viruses?

There is NO scientific evidence from ANY research (published or otherwise) by ANY scientist or group of scientists anywhere in the world validating the existence of the so-called invisible virus, or proving that exosomes are evidence of the existence of any virus! Exosomes are created endogenously by the cells, even the red blood cells, as a means of mediating or buffering metabolic, environmental, dietary and/or respiratory acidic waste in order to maintain the delicate pH balance of the intravascular fluids, the interstitial fluids and the intracellular fluids of the body cells at 7.365.

Are Exosomes the Agents of the Activation of the Immune System and a Defense Against Metabolic Acids?

What may appear to be viral particles is many times indistinguishable from exosomes. Exosomes are natural micro-vesicles produced by cells; they carry messages from cell to cell, and to other tissues, and possibly to other people. They are essential to health because they carry acidic waste out of damaged cells and trigger the lymphocytes to release reduced oxygen (SO-) and reduced hydrogen (OH-) molecules to buffer metabolic acidic cellular waste to prevent the death of the body.

Are COVID-19 and HIV Exosomes?

Based upon electron microscopy, the so-called COVID-19 virus and the so-called HIV virus are 100 nm in diameter and appear identical to exosomes. On January 18th, 2020, three scientists published a scientific paper describing the protective purpose of exosomes, entitled "Exosome-Mediated Transfer of ACE2 (Angiotensin-Converting Enzyme 2) from Endothelial Progenitor Cells Promotes Survival and Function of Endothelial Cell."

Research on Exosomes and Their Support of the Lymphocytes (Immune System) in Reducing Cancer-causing Acidic Waste

Exosomes from red blood cells contain the transferrin receptor, which is absent in mature erythrocytes. Dendritic cell-derived exosomes express MHC I, MHC II, and costimulatory molecules and have been proven able to induce and enhance antigen-specific T cell responses in vivo in reducing metabolic acidic waste.
What Is the Relationship Between Exosomes and COVID-19?

They both contain the ACE2 (angiotensin-converting enzyme 2) receptor and, viewed under an electron microscope, measure the same size. The exosome's (or should we say the COVID-19's) ACE2 receptor chops up two forms of a protein called angiotensin to keep blood pressure stable, protecting cell membranes from cellular breakdown caused by metabolic, dietary, environmental and respiratory acidic waste.

So What is Causing the Symptoms of COVID-19 and the Release of Exosomes into the Extracellular Matrix?

It is a four-letter word – ACID! Where is the ACID coming from? The major contributing factors that cause cellular breakdown and the release of exosomes into the extracellular matrix are as follows:

1. Electromagnetic pulsating frequencies ranging from 1 GHz to 600 GHz.
2. Carbon dioxide and monoxide poisoning.
3. Glyphosate acid poisoning from non-organic fruit and vegetables.
4. Lactic acid poisoning from diet and metabolism.
5. Uric, nitric, sulphuric and phosphoric acid poisoning from eating the flesh and blood of animals.
6. Genetically modified organisms in our food supply and vaccines.
7. Aluminum oxide poisoning from vaccination and chem trails.
8. Antibiotic poisoning.
9. Acidic polluted water, alcohol, coffee, black tea, soda drinks, sport drinks.
10. Sugar in all of its forms, or any word that ends in 'ose'.

How Can I Support the Alkaline Design of the Body and Reduce Metabolic, Dietary, Environmental and Respiratory Acidic Waste that is Making Me Sick and Tired?

First, read five books by Dr. Robert O. Young to start with:

1. Sick and Tired, Reclaim Your Inner Terrain
2. The pH Miracle revised and updated
3. Chlorine Dioxide (CLO2) As a Non-Toxic Antimicrobial Agent for Virus, Bacteria and Yeast (Candids Albicans)
4. Alkalizing Nutritional Therapy in the Prevention and Treatment of any Cancerous Condition
5. Second Thoughts about Viruses, Vaccines, and the HIV/AIDS Hypothesis

Second, follow the protocol as outlined in Chapters 5 and 11 of The pH Miracle Revised and Updated for at least 12 weeks.

Third, if you need further clarification and support, you can set up a consultation with Dr. Robert O. Young by clicking here: https://www.drrobertyoung.com/services-page

Fourth, you can attend a pH Miracle Retreat and immerse yourself in a paradise of alkalinity. To learn more go to: www.phmiracleretreat.com

References

Théry C, Witwer KW, Aikawa E, Alcaraz MJ, Anderson JD, Andriantsitohaina R, et al. (2018). "Minimal information for studies of extracellular vesicles 2018 (MISEV2018): a position statement of the International Society for Extracellular Vesicles and update of the MISEV2014 guidelines". Journal of Extracellular Vesicles. 7 (1): 1535750. doi:10.1080/20013078.2018.1535750. PMC 6322352. PMID 30637094.

Yáñez-Mó M, Siljander PR, Andreu Z, Zavec AB, Borràs FE, Buzas EI, Buzas K, et al. (2015). "Biological properties of extracellular vesicles and their physiological functions". Journal of Extracellular Vesicles. 4: 27066. doi:10.3402/jev.v4.27066. PMC 4433489. PMID 25979354.

van Niel G, D'Angelo G, Raposo G (April 2018). "Shedding light on the cell biology of extracellular vesicles". Nature Reviews. Molecular Cell Biology. 19 (4): 213–228. doi:10.1038/nrm.2017.125. PMID 29339798.

van der Pol E, Böing AN, Harrison P, Sturk A, Nieuwland R (July 2012). "Classification, functions, and clinical relevance of extracellular vesicles". Pharmacological Reviews. 64 (3): 676–705. doi:10.1124/pr.112.005983. PMID 22722893.
Keller S, Sanderson MP, Stoeck A, Altevogt P (November 2006). "Exosomes: from biogenesis and secretion to biological function". Immunology Letters. 107 (2): 102–8. doi:10.1016/j.imlet.2006.09.005. PMID 17067686.

Spaull R, McPherson B, Gialeli A, Clayton A, Uney J, Heep A, Cordero-Llana Ó (April 2019). "Exosomes populate the cerebrospinal fluid of preterm infants with post-haemorrhagic hydrocephalus". International Journal of Developmental Neuroscience. 73: 59–65. doi:10.1016/j.ijdevneu.2019.01.004. PMID 30639393.

Dhondt B, Van Deun J, Vermaerke S, de Marco A, Lumen N, De Wever O, Hendrix A (June 2018). "Urinary extracellular vesicle biomarkers in urological cancers: From discovery towards clinical implementation". The International Journal of Biochemistry & Cell Biology. 99: 236–256. doi:10.1016/j.biocel.2018.04.009. PMID 29654900.

Wang J, Chen S, Bihl J. "Exosome-Mediated Transfer of ACE2 (Angiotensin-Converting Enzyme 2) from Endothelial Progenitor Cells Promotes Survival and Function." Oxid Med Cell Longev, 2020 Jan 18;2020:4213541. doi:10.1155/2020/4213541.

Mignot G, Roux S, Thery C, Ségura E, Zitvogel L (2006). "Prospects for exosomes in immunotherapy of cancer". Journal of Cellular and Molecular Medicine. 10 (2): 376–88. doi:10.1111/j.1582-4934.2006.tb00406.x. PMC 3933128. PMID 16796806.

Rubik, B. "Bioelectromagnetic Medicine." Administrative Radiology Journal XVI(8), August 1997, 38–46.

Young, R.O., "The Effects of ElectroMagnetic Frequencies (EMF) on the Blood and Biological Terrain." https://www.drrobertyoung.com/…/the-effects-electromagnet-f…

Young, R.O., "Adverse Health Effects of 5G Mobile Networking Technology Under Real-Life Conditions." April 19th, 2020. https://www.drrobertyoung.com/…/adverse-health-effects-of-5…

NOAA. (2016). In a high carbon dioxide world, dangerous waters ahead. (accessed on August 6, 2019)

NOAA. (2018). What is Ocean Acidification? (accessed on August 6, 2019)

National Geographic. (2017). Ocean Acidification. (accessed on August 6, 2019)

NOAA. (2010). Ocean Acidification, Today and in the Future. (accessed on August 6, 2019)

Young, R.O., Young, S.R., "The pH Miracle Revised and Updated." Hachett Publishing, 2010.

Are the Interstitial Fluids Raining Acid on YOUR Lung Cells? (December 17th, 2019)

Young, R.O., "Sick and Tired." https://www.phmiracleproducts.com/…/books-audio-video/produ…

Young, R.O., Young, S.R. "The pH Miracle Revised and Updated." Grand Central Publishing, NY, NY, 2010. https://www.phmiracleproducts.com/…/the-ph-miracle-revised-…

Young, R.O., "Chlorine Dioxide (CLO2) As a Non-Toxic Antimicrobial Agent for Virus, Bacteria and Yeast (Candids Albicans)," Hikari Omni Media, August 2nd, 2016. https://www.phmiracleproducts.com/…/chlorine-dioxide-clo2-b…

Young, R.O., Migalko, G., "Alkalizing Nutritional Therapy in the Prevention and Treatment of any Cancerous Condition." Hikari Omni Media, August 1st, 2016. https://www.phmiracleproducts.com/…/alkalizing-nutritional-…

Young, R.O., "Second Thoughts about Viruses, Vaccines, and the HIV/AIDS Hypothesis," Hikari Omni Media, August 2nd, 2016. https://www.phmiracleproducts.com/…/second-thoughts-about-v…
Many people simultaneously claim to support the Second Amendment while insisting the federal government should be able to ban "military-style weapons." These are actually mutually exclusive positions. In fact, the whole purpose of the Second Amendment was to ensure the people would always have access to "weapons of war."

On February 14, 2018, Nikolas Cruz shot and killed 17 people at Marjory Stoneman Douglas High School in Parkland, Fla. Cruz, 19, had been suspended from the school for disciplinary reasons. Despite a long history of bad behavior, as well as attention from law enforcement, Cruz was not treated as a legitimate threat. In an attempt to reduce the school-to-prison pipeline, the district failed to report his activities and generally kept him under the radar of local law enforcement agencies. Attorney General Jeff Sessions also admitted that the FBI failed to act on numerous reports of erratic and threatening behavior on the part of Cruz.

Despite government failures at both the local and federal level, the public debate predictably turned to the issue of gun control, with specific focus on banning "military style" rifles, or "assault rifles," as they are often called. In one school, students were instructed to write letters to representatives asking them to implement stricter gun control regulations. A common refrain from both sides of the debate is, "No one is saying that military weapons should be in the hands of civilians." Former President Barack Obama said, "Weapons of war have no place on our streets," and, "our law enforcement officers should never be outgunned." Many conservative media pundits agree. In the process, they concede a crucial point that was a central reason for the ratification of the Second Amendment: The People must have the means by which they can resist a tyrannical government – means rendered ineffective if we surrender the right to be on a level playing field when it comes to firearms.

As Ryan McMaken explained in a recent article, the origins of the militia trace back to 17th-century England, when Americans resisted the standing army sent by the king to crush dissent. McMaken shares the insight of British historian Marcus Cunliffe regarding the origins of American military institutions and the compromises reached between a centralized military capable of suppressing dissent and a reasonable force needed to maintain order:

"A compromise was reached. First, a small regular force was to be maintained: this was the actual foundation of the British standing army. Second, there was to be a nationwide militia, composed of civilians who would — as in earlier days — be summoned in time of need. The militia, however, was to be under civil law, and to be organized locally by the lord lieutenant of each county. It was thus decentralized and divorced from royal control."

In the colonies, standing armies were viewed with skepticism (hatred, actually), and this was especially so after the British Army was sent as an ultimate enforcement mechanism for the various taxes and other acts imposed on the colonies by the Crown. As the Constitution was being discussed and debated, one of the most hotly contested objects relating to the powers of the new Congress was its ability to raise armies. During the Virginia Convention, Patrick Henry famously observed that "A standing army we shall have, also, to execute the execrable commands of tyranny; and how are you to punish them? Will you order them to be punished? Who shall obey these orders?
Will your mace-bearer be a match for a disciplined regiment?" Henry went on to say that "the clause before you gives a power of direct taxation, unbounded and unlimited, exclusive power of legislation, in all cases whatsoever, for ten miles square, and over all places purchased for the erection of forts, magazines, arsenals, dockyards, &c. What resistance could be made?"

Henry's denunciation clearly and emphatically rejected the centralization of power. He feared giving what he called a "central government" power over both the sword and the purse. He observed that any attempt to restrain government in such an instance "would be madness" and thundered, "you will find all the strength of this country in the hands of your enemies; their garrisons will naturally be the strongest places in the country. Your militia is given up to Congress, also, in another part of this plan: they will therefore act as they think proper: all power will be in their own possession. You cannot force them to receive their punishment". Henry concluded this opening barrage by asking "of what service would militia be to you, when, most probably, you will not have a single musket in the state? for, as arms are to be provided by Congress, they may or may not furnish them."

This proclamation by Henry provides a succinct summation of the general beliefs of a large portion of Americans during the ratification period, a belief which led directly to the establishment and adoption of the Second Amendment. Most States, Virginia included, ratified the Constitution on the basis that "further declaratory and restrictive clauses" upon the general government should be added. And among the specifics, Virginia asserted in her ratification instrument that "the people have a right to keep and bear arms; that a well regulated Militia composed of the body of the people trained to arms is the proper, natural and safe defence of a free State." [Emphasis added]

Virginia declared as a condition of ratification that "standing armies in time of peace are dangerous to liberty, and therefore ought to be avoided, as far as the circumstances and protection of the Community will admit; and that in all cases the military should be under strict subordination to and governed by the Civil power." [Emphasis added]

Likewise, New York, as a condition of ratification, insisted very similarly to Virginia "That the People have a right to keep and bear Arms; that a well regulated Militia, including the body of the People capable of bearing Arms, is the proper, natural and safe defence of a free State; That the Militia should not be subject to Martial Law except in time of War, Rebellion or Insurrection. That standing Armies in time of Peace are dangerous to Liberty, and ought not to be kept up, except in Cases of necessity; and that at all times, the Military should be under strict Subordination to the civil Power."

Clause XIII of the Pennsylvania Declaration of Rights guarantees "That the people have a right to bear arms for the defence of themselves and the state; and as standing armies in the time of peace are dangerous to liberty, they ought not to be kept up; And that the military should be kept under strict subordination to, and governed by, the civil power." [Emphasis added]

Suffice to say that the founding generation had an immense and universal fear of standing armies. James Madison explained that "a standing military force, with an overgrown Executive will not long be safe companions to liberty."
St. George Tucker wrote the first systematic commentary on the Constitution. He provided further context for the right to bear arms and its role in preventing standing armies when he pointed out, "Wherever standing armies are kept up, and when the right of the people to keep and bear arms is, under any color or pretext whatsoever, prohibited, liberty, if not already annihilated, is on the brink of destruction." A more thorough discussion of these concepts can be found HERE.

The Second Amendment thus came about as a means of preventing the need for standing armies by keeping "the militia," or as George Mason asserted, "every able bodied person," under the auspices of the individual states – in other words, out of the immediate control of the general government.

In his book The Founders' Second Amendment, Stephen P. Halbrook describes Pennsylvania Senator William Maclay's concerns, as written in notes from debates that would result in the enactment of the 1792 federal Militia Act; namely, that Alexander Hamilton and his faction were instigating war with American Indians and foreign nations to justify raising an army that would "awe our Citizens into submission." Roger Sherman of Connecticut commented that man had an essential right "to resist every attack upon his liberty or property, by whomsoever made." [Emphasis added.]

The intent of the Constitution and the historical background are irrefutable: a civilian, decentralized force was viewed as the optimal means by which forces loyal to a king or central government could be held in check, should they become tyrannical in nature. The government, the media and the education system have successfully indoctrinated people into believing that their safety lies in their ability to defend themselves, but only insofar as it leaves us subservient to the capabilities of the government to defend itself from us. In other words, the polar opposite of the purpose of the Second Amendment.

In 1794, George Washington marched troops into Pennsylvania, absent the required request from the governor, to quell a rebellion over a whiskey tax. The Whiskey Rebellion came about after the urban/Hamiltonian faction of government used its power to impose a tax on its opposition, the agrarian/Jeffersonians. When the rebellion occurred, the federal government used its might to suppress opposition in a manner in contravention of the law.

From Wounded Knee to Waco, we can see what happens when a civilian population is unable to defend itself from government. Worldwide, over two hundred million lives have been lost after governments disarmed their citizenry. Yet we continue to buy into the insane notion that "We the People" are incapable of bearing arms equal in power and effectiveness to those of the military, the standing army.

Even the definition of commonly-used firearms has been changed to fit the modern anti-gun narrative. In his 1828 dictionary, Noah Webster described a musket, the weapon used by colonial militias, as "a species of firearms used in war." In other words, it was once a given that civilians, i.e. the militia, would have the very same firearms as the military. Merriam-Webster recently changed its definition of "assault rifle" to the following: any of various intermediate-range, magazine-fed military rifles (such as the AK-47) that can be set for automatic or semiautomatic fire; also: a rifle that resembles a military assault rifle but is designed to allow only semiautomatic fire (emphasis added).
A federal court in Massachusetts recently held that such rifles are not protected by the Second Amendment and may lawfully be subject to regulation, and even an outright ban. The ruling is problematic, however, for several reasons, the most blatant being that a federal court was ruling on a state firearms law. Under the Constitution, as ratified, it has no legal authority to do so. According to the Tenth Amendment, this matter was reserved to the states when they ratified the Constitution. Second, the Second Amendment doesn't apply to any model or type of weapon; it applies to the general government, meaning Congress. The Amendment is not a means by which the right to keep and bear arms was granted to the people and the states. It is a restriction prohibiting the general government from regulating firearms at all.

Note the language used in the above case by Massachusetts Attorney General Maura Healey: "Today's decision upholding the Assault Weapons Ban vindicates the right of the people of Massachusetts to protect themselves from these weapons of war…and we will not be intimidated by the gun lobby in our efforts to end the sale of assault weapons and protect our communities and schools."

Those seeking to restrict the right to "weapons of war" cite D.C. v. Heller, in which Justice Antonin Scalia explained that the Second Amendment "protects an individual right to possess a firearm unconnected with service in a militia, and to use that arm for traditionally lawful purposes, such as self-defense within the home." What the Court is implying is that "assault rifles" are not subject to Second Amendment protection, which is, again, a total fallacy given the history and intent of the Constitution as ratified.

Rifles that simply look like military rifles are banned in many states, or at least must be registered, and now we are seeing bans in local communities as well. We have become so conditioned to the false notion that civilians should not be as well armed as the standing army the founders so distrusted that we now accept bans on guns that simply resemble such weapons. Patrick Henry's worst fears have materialized, and the general public, uneducated in its own history, is largely clueless.

Tench Coxe, writing in The Pennsylvania Gazette, Feb. 20, 1788, asked, "Who are the militia? Are they not ourselves? Is it feared, then, that we shall turn our arms each man against his own bosom?" He continued by affirming that "Congress have no power to disarm the militia. Their swords, and every other terrible implement of the soldier, are the birthright of an American…The unlimited power of the sword is not in the hands of either the federal or state governments, but, where I trust in God it will ever remain, in the hands of the people."

With the average American gleaning their "understanding" of the Second Amendment in particular, and American history in general, from agenda-driven academics and talking heads in the media and on talk radio, such basic arguments are being conceded. As a result, we are in the process of surrendering a fundamental tenet – our ability to defend ourselves from personal assault as well as from a proven threat by the very government imposing these unlawful restrictions. Our founders are rolling in their graves.

Note: Carl Jones contributed to this article. He is a former active duty U.S. Marine and a Certified Firearms Instructor. He is a contributing writer for the Abbeville Institute and a member of the Society of Independent Southern Historians.
Dodgy wind? Why "innovative" turbines are often anything but

Virtually every week there are articles about new and innovative methods for harvesting wind energy. And every week more megawatts of capacity from three-blade horizontal-axis wind turbines (HAWTs) become operational, despite all of the contenders. Why aren't these innovative new products knocking the iconic HAWT off its perch? Is it possible to tell which are likely to be viable? These eight points are a useful way to assess which technology has potential, and which are likely just hot air.

1. Do they claim to exceed Betz' limit?

In 1919, Albert Betz calculated that the maximum fraction of energy that could be extracted from the wind, regardless of the type of device used, was 59.3 percent (a short derivation is sketched below). This number, Betz' limit, has stood the test of time. No one in independent testing has exceeded it, yet many continue to claim that they do. Most recently, a much-hyped technology and company, Saphon, has made this claim, but there appears to be no evidence that this is actually the case. If you see a new-fangled wind turbine claimed to exceed Betz' limit, think of it as a red flag which could indicate that something is amiss. On a swept-area for swept-area basis, Betz' limit still holds, as the comparison has to be the total surface area of the wind-capture device presented to the wind, whether we're dealing with funnels, sails or blades, and whether they're square or round.

2. Is it an old technology pretending to be a new technology?

Harvesting energy from the wind is not new. Talented and intelligent tinkerers and engineers have been working on improving gains from wind energy for thousands of years, and as a result almost every possible approach has been tried. The vast, vast majority have been discarded. Some people who don't seem to know how to use Google continue to reinvent old technologies and pretend that they are somehow new. Savonius and Darrieus wind turbines have been around for centuries, but only acquired their names about 100 years ago via re-inventors who actually made the monikers stick.

Savonius wind turbines are basic torque engines whose maximum rotational speed is limited by the velocity of the wind. This makes them excellent for higher-torque applications such as pumping water, but it makes them poorer generators of electricity. Darrieus wind turbines have aerodynamic blades, but the blades are only flying in clean air at the optimum angle for power generation 15–30 percent of the time. Between this and other challenges common to vertical-axis wind turbines, their generation is better than that of Savonius turbines but has never come close to three-blade HAWTs.

One product took the Savonius and applied a lot of lipstick: golf-ball-like dimples on the leading edges, a hollow interior claimed to double the surfaces for the wind to work on, stackable modules and magnetic-levitation bearings. All of these "innovations" merely made it a more expensive, ineffective generator of electricity, based on my assessment of the details of the proposed investment and collateral. If a Savonius turbine is appropriate for a niche, the question is how cheaply and simply it can be constructed, not how to make it more productive.

Ducted and venturi-effect wind generators also resurface with disarming regularity. In these, some sort of shroud or funnel focuses the airflow on a smaller wind turbine, increasing the velocity of a given volume of air using well-known principles. Most recently, the Invelox has been capturing attention with its device.
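Since funnel and duct designs like these trade directly on claims that collide with point 1, it is worth spelling out the arithmetic behind Betz' figure before going further. The sketch below is the standard one-dimensional actuator-disc derivation, under the usual idealizations (incompressible, swirl-free flow); v_1 is the upstream wind speed, rho the air density, A the total capture area presented to the wind (rotor disc or funnel inlet alike), and a the axial induction factor.

```latex
% Power available in the undisturbed wind through capture area A:
\[
  P_{\text{wind}} = \tfrac{1}{2}\,\rho A v_1^{3}
\]
% Actuator-disc momentum theory gives the power coefficient as a
% function of the axial induction factor a:
\[
  C_P(a) = \frac{P_{\text{extracted}}}{P_{\text{wind}}} = 4a(1-a)^{2}
\]
% Maximizing over a:
\[
  \frac{dC_P}{da} = 4(1-a)(1-3a) = 0
  \quad\Rightarrow\quad a = \tfrac{1}{3}, \qquad
  C_{P,\max} = \tfrac{16}{27} \approx 0.593
\]
```

Nothing in the derivation depends on blade count, funnel shape or generator design, which is why a claim to capture more than 59.3 percent of the energy passing through the total capture area is a red flag rather than a breakthrough.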
Turbines of this type have been tested extensively in the past and have never overcome the inefficiencies introduced by the vortices created by the funnels, despite hypothetical improvements that ignore real-world fluid dynamics. They are all less efficient at generating electricity from a given volume of air than a three-blade HAWT with a swept area similar to that of the funnels. And they introduce a shroud that is much heavier, bulkier, and more susceptible to gusts, as can be seen from the FloDesign cowled turbine.

3. Is the product just a design concept as opposed to at least a working and tested prototype?

Many people have ideas. Some of these people have access to reasonable-quality graphics tools. They create fascinating-looking devices, often accompanied by some pseudo-scientific statement about which technical effect they are expecting to harness. If there isn't a working, tested prototype, red flags should be popping up that this is not a likely technology. The Strawscraper concept is typical: it's solely an architectural rendering with added bafflegab about the piezoelectric effect.

4. Are the only test results from tests that they have performed, as opposed to independent, third-party labs, and do they publish the numbers?

There are several independent, reputable testing organizations that can do well-formulated, proven, and credible testing of wind generation devices. The best known of these is Sandia Labs, which has been testing and researching wind generation devices for decades. If a company doesn't have independent tests and is making extravagant claims, be skeptical. Apparently Sheerwind, the company behind the Invelox system, has so far not allowed anyone outside the company to test the system.

As a positive example, consider the STAR (Sweep Twist Adaptive Rotor) blade, currently being incorporated in a portable wind generator by the start-up Uprise Energy in California. On its website, Uprise Energy publishes a link to Sandia Labs' independent testing of its adaptive rotor blade, documenting the 12-percent improvement in performance for this small generator technology. The principal of the company has a solid background of entrepreneurship in related fields, and the company is targeting specific niches for portable small-scale generation. This company is a good bet, unlike many of its competitors.

5. Are claimed patents for devices other than the one they are demonstrating?

If patents are claimed to be pending or accepted that reflect the technology and its advances, it's worth having a look at them. Saphon's patent, for example, is for a device very different from the one that they show pictures of. This doesn't stop them from using the patent in their marketing hype. Here's what the patent, WO/2012/039688, says: "The invention consists of a system for converting wind energy (SCEE) into mechanical and then electrical energy. This system (SCEE) is not subject to the theoretical Betz limit (59 percent). The system (SCEE) has a wheel (F) equipped with a series of blades arranged all around it." A series of blades does not describe the disc-and-sail gizmo that Saphon shows.

6. Are efficiency claims based on ISO-standard lifecycle accounting that has been independently assessed?

Anyone can claim greater efficiency, but what do they mean by it?
The gold standard is the levelized cost of energy (LCOE), in which all of the costs associated with raw materials, manufacturing, transportation, construction, operations, and maintenance are factored into a cost per kWh based on expected output over the life of the device. ISO standards exist around these lifecycle assessments and the attendant calculations so that apples-to-apples comparisons can be made across different forms of generation. Anything less than that requires attention to exactly what is being claimed, what specific device the claims are about, and whether they spell out what they mean at all.

Invelox, for example, makes claims of 81–660 percent efficiency gains. It appears that the company hasn't done a full LCOE, and it appears that it hasn't had independent testing done. The explanation is that it took the relatively high-speed optimized small wind turbine that it uses in its device, left it in the open, and compared that to the same wind turbine in its device. This has multiple failings, including the lack of a comparison to a wind turbine appropriate to wind speeds at the same height and with the same surface area as its funnel openings. All that it appears to have proved is that it can make a small wind turbine generate more electricity by putting a great huge whopping funnel on top and exploiting the Venturi effect. This has been well understood for decades, and is understood not to justify the cost, complexity, and lack of scalability of the device.

7. Are they claiming to integrate storage into their wind generation device without a market niche that needs it?

Many wind energy innovators speculate that they will easily be able to incorporate storage into their devices based on their unique characteristics, as if this were an advantage. Once again, Saphon is a poster child for this claim, stating that its very lossy hydraulic system could include a storage reservoir. For context, wind energy is on track to exceed nuclear generating capacity in the next three to four years with virtually no storage; only off-grid applications require storage, usually as a separate component and almost invariably in the form of an electric battery.

Organizations such as GE are an exception. GE knows the market, knows what is required, and its devices produce a significant percentage of the total worldwide wind energy every day. When it announces integrated storage, as it did recently with its Brilliant turbine, it's for an understood target market, engineered and scaled appropriately.

8. Does the product introduce major new liabilities?

High-altitude wind energy is a constantly recurring meme in wind generation. As people continually point out, the wind is stronger and more consistent the further you get from the ground. That's why HAWTs keep getting taller and people are looking for ways past current height limitations. Many different groups and individuals are looking at flying generators of one sort or another higher into the atmosphere, attached to cables that run to winches on the ground. The varieties include airfoil kites (SkySails), blimp-style Savonius generators, flying cowled turbines where the cowl is a blimp (Altaeros), and small solid kitecraft with generating propellers on them (Google's new acquisition Makani). These are some of the more visible examples from recent years.
These devices are likely to remain a niche for the simple reason that putting lots of them up high in the atmosphere would require 1–4 km long, effectively invisible cables sweeping over a broad and shifting downwind range. This would require a significant area to be declared a no-fly zone for most forms of aviation, although passenger jets could still fly overhead. If the system failed and the device fell from the sky, it would drape those kilometers of cable over everything downwind, including roads and buildings, requiring that a large area downwind be kept fairly free of any human structures. And for the solid flying wing, a very heavy object with rotating propellers would fall out of the sky somewhere between 1 and 10 km downwind in the event of a failure. That's why, after a period of assuming these could make a major onshore contribution, most of these devices are now aimed at servicing remote locations or offshore sites. It doesn't help that the lighter-than-air variants require increasingly rare helium, which is also required for other, arguably much more valuable, uses, including as a coolant in medical imaging machines. There are significant scalability issues with such turbines, and, given the increasing height of HAWTs, they are dealing with diminishing returns in any event.

There are about 240,000 HAWTs worldwide, in sizes ranging from a few kilowatts to 7 MW of capacity, both onshore and offshore, in rural and urban areas. Four out of five of the top-selling small wind turbines are horizontal-axis two- and three-blade wind turbines (with only a single two-blade model, which is a reasonable choice at this scale). They generate all but a tiny fraction of a percent of the electricity harvested from the wind in the world. They are undergoing constant incremental improvements in design, including:

- Low-wind vs. high-wind models
- Variable-pitch blades
- Gearless vs. geared nacelles
- Slight variants of blade design for aerodynamic efficiency
- Leading-edge coatings
- Tower design
- Base design: rock-anchor vs. concrete-base vs. tethered-floating vs. bottom-mounted offshore

As examples of the types of innovations that are constantly appearing, yet aren't particularly sexy, here are two recent stories. In the first, Magdy Attia and Marko Ivankovic of Embry-Riddle Aeronautical University realized that they had a design for a gearbox that would last longer than current gearboxes and are looking to put it into wind turbines. In the second, software-based predictive maintenance that has been used for years in other industries is being applied to wind farms to optimize maintenance schedules, purchases, and hence costs. These may not be as eye-catching as a newfangled wind turbine, but there is enormous money in shaving a percent here or a percent there off costs when the costs run into the billions.

The wind industry is disruptive because it is supplanting fossil fuel generation at a reasonable cost. That reasonable cost is due to decades of incremental improvement and major supply chain and business innovations, not radical technical innovations. The most effective technology was chosen a few decades ago, and it's been getting steadily better ever since. If someone is selling you on a "new" wind generation technology, be wary. The wind industry is unlikely to be disrupted by someone with an idea and a PowerPoint pitch.
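As a coda, the LCOE yardstick from point 6 is easy to sketch. Here is a minimal version in Python, with all cost, output, and discount figures being illustrative assumptions rather than data for any real turbine:

    def lcoe(capex, annual_opex, annual_kwh, years, discount_rate):
        """Levelized cost of energy: discounted lifetime costs divided by
        discounted lifetime output, giving a cost per kWh."""
        costs, energy = float(capex), 0.0
        for t in range(1, years + 1):
            factor = (1.0 + discount_rate) ** t
            costs += annual_opex / factor
            energy += annual_kwh / factor
        return costs / energy

    # Assumed 2 MW HAWT: $3M installed, $60k/yr O&M, 30% capacity factor,
    # 20-year life, 7% discount rate.
    annual_kwh = 2000 * 8760 * 0.30  # kW x hours/year x capacity factor
    print(round(lcoe(3_000_000, 60_000, annual_kwh, 20, 0.07), 3))  # ~0.065 $/kWh

Marketing "efficiency gains" that cannot be reduced to a number like this are exactly the sort of claims the checklist above warns about.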
- The Facts: In June of 1965, Angus Barbieri began a fast under medical supervision that lasted exactly 382 days. He remained completely healthy for the duration of the fast.
- Reflect On: Today, it's firmly established in scientific literature that fasting can have tremendous benefits, if done correctly. It can also be used to treat a variety of diseases. Perhaps it's not emphasized because you can't make money off of not eating?

A study published in the Postgraduate Medical Journal in 1972 brought more attention to a gentleman by the name of Angus Barbieri, a man who, in June of 1965, began a fast under medical supervision for exactly 382 days and who, at the time the study was published, had since maintained his ordinary weight. In his case, "prolonged fasting had no ill effects." Barbieri's weight decreased from 456 to 180 pounds during the fast.

This isn't the only example available in the literature. It's similar to an earlier patient, prior to Barbieri, who reduced his weight from 432 to 235 pounds during 350 days of intermittent fasting (Stewart, Fleming & Robertson, 1966). Researchers have also fasted patients for 256 days (Collison, 1967, 1971), 249 and 236 days (Thomson et al., 1966), as well as 210 days (Garnett et al., 1969; Runcie & Thomson, 1970), all of which are cited in the 1972 study. Since that study's publication, there have been many documented examples of prolonged fasting by highly obese people. Here's one recent example of a man who fasted for 50 straight days while being medically supervised and tested the whole time.

When you fast, your body switches from burning glucose to burning fat. Fasting lowers insulin levels, which allows the body to access its fat stores for energy. When you eat, food is converted into glucose, and that's what we usually burn. This is why fasting has become a therapeutic intervention for many people with type 2 diabetes, and more doctors, like Dr. Jason Fung, a Toronto-based nephrologist, are having great success with utilizing fasting as an appropriate and necessary health intervention. Fung has many great articles regarding the science of fasting; you can access them here if you're interested in learning more. This article references some of the leading scientists in the field, so you can learn more by looking them up as well.

The graph below depicts what happens to your protein while fasting. Interesting, isn't it? People often believe that if you fast, you will experience a tremendous amount of muscle loss, but that's simply not true. This graph is from Kevin Hall of the NIH, in the book "Comparative Physiology of Fasting, Starvation, and Food Limitation."

"It seems that there are always concerns about loss of muscle mass during fasting. I never get away from this question. No matter how many times I answer it, somebody always asks, "Doesn't fasting burn your muscle?" Let me say straight up, NO." – Dr. Jason Fung (source)

But what about Angus Barbieri? Obviously, we're not saying fasts this long are healthy; for many people they will probably be unhealthy and unsafe unless medically supervised. In the 1972 study, doctors measured a number of concentrations within the body. For example, plasma potassium concentrations decreased systematically over the first four months. As a result, the doctors provided a very small daily dose that increased his potassium level. After another 10 weeks, no potassium was given, and from then on until the end of the fast, plasma potassium levels remained normal.
Cholesterol concentrations also remained around 230 mg/100 ml until 300 days of fasting, but increased to 370 mg/100 ml during refeeding. Plasma magnesium levels decreased over the first few weeks of the fast but then rose and stabilized. This is interesting to note, as nothing was going into the body, yet levels still stabilized after the initial decrease. From the study:

Normal plasma magnesium concentrations, despite magnesium 'depletion' in muscle tissue, have been described (Drenick et al., 1969) during short-term fasting (1–3 months). The only other relevant report is a remark (Runcie & Thomson, 1970) that one patient who fasted 71 days had a normal plasma magnesium level of 2.2 mEq/l at the time when she developed latent tetany. The decrease in the plasma magnesium concentration of our patient was systematic and persistent.

The excretion of sodium, potassium, calcium, and inorganic phosphate decreased to low levels throughout the first 100 days, but thereafter the excretion of all four urinary constituents, as well as of magnesium, began to increase. During the subsequent 200 days, sodium excretion, previously between 2 and 20 mEq daily, reached over 80 mEq/24 hr; potassium excretion increased to 30–40 mEq daily; and calcium excretion increased from 10–30 mg/24 hr to 250–280 mg/24 hr. Magnesium excretion (which was not measured during the first 100 days) reached 10 mEq/24 hr between days 200–300. Phosphate excretion, which had decreased to under 200 mg/24 hr, also increased to around 800 mg/24 hr, even exceeding 1000 mg/24 hr on occasion. Peak excretions of all these constituents were seen around day 300, after which there was a marginal decrease, but excretion remained high.

Obviously, this is an extreme fast, and such fasts have only been tested on people of tremendous obesity. It shows that people with a high body fat percentage have the ability to fast longer, simply because their bodies have more stores to pull from. The study concluded in 1972 that:

We have found, like Munro and colleagues (1970), that prolonged supervised therapeutic starvation of the obese patient can be a safe therapy, which is also effective if the ideal weight is reached. There is, however, likely to be occasionally a risk in some individuals, attributable to failures in different aspects of the adaptive response to fasting. Until the characteristics of these variations in response are identified, and shown to be capable of detection in their prodromal stages, extended starvation therapy must be used cautiously. In our view, unless unusual hypokalaemia is seen, potassium supplements are not mandatory. Xanthine oxidase inhibitors (or uricosuric agents) are not always necessary and could even be potentially harmful (British Medical Journal, 1971), perhaps particularly in the long-term fasting situation.

It's almost 2020, and the body of literature, studies, and research published since 1972 is vast. We've learned a lot more about fasting, and if done correctly it can be extremely beneficial. Short-term fasting presents minimal to no health risks, and so does longer fasting lasting more than 24 hours, unless a person already has an underlying condition. That being said, it's not easy to start. Most people are used to eating three meals plus snacks every single day; therefore, they are never adapted to burning their fat stores, something it appears the human body was meant to do.

"Why is it that the normal diet is three meals a day plus snacks?
It isn't that it's the healthiest eating pattern; now, that's my opinion, but I think there is a lot of evidence to support it. There are a lot of pressures to have that eating pattern; there's a lot of money involved. The food industry — are they going to make money from skipping breakfast like I did today? No, they're going to lose money. If people fast, the food industry loses money. What about the pharmaceutical industries? What if people do some intermittent fasting, exercise periodically and are very healthy, is the pharmaceutical industry going to make any money on healthy people?" – Mark Mattson (source)

Fasting has also been shown to be effective as a therapeutic intervention for cancer. Fasting protects healthy cells while 'starving' cancer cells, and it's now being used as an intervention combined with chemotherapy. Fasting has also been shown to greatly reduce the risk of age-related diseases like Parkinson's disease and Alzheimer's disease. Mark Mattson, one of the foremost researchers of the cellular and molecular mechanisms underlying multiple neurodegenerative disorders, has shown through his work that fasting can have a tremendous effect on the brain and can even reverse the symptoms of multiple neurodegenerative disorders. You can watch his interesting TED talk here. Scientists have also discovered strong evidence that fasting is a natural intervention for triggering stem cell-based regeneration of an entire organ or system.

Fasting has actually long been known to have an effect on the brain. Children who suffer from epileptic seizures have fewer of them when placed on caloric restriction or fasts. It is believed that fasting helps kick-start protective measures that counteract the overexcited signals that epileptic brains often exhibit. (source)

The list goes on and is quite long. At the end of the day, if you do your research, fasting, under proper medical supervision, can have tremendous health benefits that go far beyond what's mentioned above. Every study that has looked at fasting as a therapeutic intervention for disease has, by the author's reading, shown nothing but positive benefits. Even studies of caloric restriction, something quite different from fasting, have shown promising results in all animal models. According to a review of fasting literature conducted in 2003, "Calorie restriction (CR) extends life span and retards age-related chronic diseases in a variety of species, including rats, mice, fish, flies, worms, and yeast. The mechanism or mechanisms through which this occurs are unclear." Since this review was published, a great amount of research has been conducted by many researchers, and the mechanisms are being discovered and have become clearer. If you want to further your research, apart from the names listed above, Dr. Valter Longo and his research is another great place to start.

The body has a tremendous amount of storage; it hangs on to what it needs during a fast, uses up 'bad' things, repairs damaged cells, and more. When you fast and deplete all your glycogen, your body starts using fat for energy; it essentially uses the bad things first, before it gets to the good things. Your body will not burn protein as its main fuel source while fasting. I bring this up because it's interesting to see what the body loses and what it hangs on to during a fast. The truth about fasting is that it's not dangerous at all.
Intermittent fasting and short-term fasting can be done by just about anybody. From what we've seen with regard to prolonged fasting, it is also not very dangerous for obese people doing it under medically supervised conditions. Theoretically, based on the science alone, any relatively healthy human being should be able to do a prolonged fast without harmful consequences. Obviously, prolonged fasts that are not medically supervised can be very detrimental. We are not recommending this, and if you're interested in fasting, you must do a lot of research and talk to your doctor before trying it. For starters, a little bit of intermittent fasting here and there is a no-brainer, and not dangerous at all if you have no underlying health conditions, but everybody's body is different. Fasting is making a lot of noise, and has been for a while within the health community, but it's still not appropriately taught and used by the mainstream medical industry. Why is this so? The answer is simple: you can't make money off of fasting.
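As a closing back-of-envelope check on Barbieri's numbers, here is a rough sketch assuming the textbook approximation of about 3,500 kcal stored per pound of body fat (real-world losses also include water and some lean tissue, so treat this as illustrative only):

    pounds_lost = 456 - 180          # 276 lb over the 382-day fast
    days = 382
    kcal_per_pound = 3500            # common approximation for body fat

    total_kcal = pounds_lost * kcal_per_pound
    print(total_kcal)                # 966,000 kcal of stored energy
    print(round(total_kcal / days))  # ~2529 kcal/day, close to a typical
                                     # adult's daily energy expenditure

In other words, the arithmetic of his fat stores is at least consistent with the article's central claim: a very obese body carries enough fuel to run itself for a very long time.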
If you use the term "cloud computing" at your next barbeque, you might, owing to the ubiquity of smartphones, find yourself in a conversation about privacy. If you mention the term "big data", you might also get a flicker of recognition and end up in roughly the same place. However, if you drop the word "blockchain" into the discussion, you're almost guaranteed a blank stare.

That is, unless you happen to be talking to an executive of a major bank or other financial institution. This word has fast become a regular part of the lexicon of decision-makers in banking and a rapidly growing number of sectors involved in the trade of almost anything of value.

Broadly speaking, the word refers to a method of settling transactions using a so-called "distributed ledger" that can be cryptographically protected to ensure, theoretically, that everyone involved in its use has a frictionless, automated means of conducting transactions and an immutable source of truth that hackers or other miscreants can't tamper with.

Some, like Zhenya Tsvetnenko, founder of fintech start-up Digital X, believe blockchain use has the potential to eliminate vexatious and expensive commercial legal disputes. The Commonwealth Bank hinted to iTnews that it could help address problems like corruption when conducting trade in developing countries. More pessimistically, lawyer and blockchain expert Mark Toohey believes it has the disruptive power to decimate jobs in the financial sector. However, Steve Wilson, privacy specialist with Silicon Valley firm Constellation Research, while not critical of projects inspired by blockchain, says many of its champions have become overly mesmerised by it. He argues it simply can't deliver the cryptographic miracle for which many hold hope.

What is it?

No discussion of the blockchain is possible without referring to Bitcoin – the controversial cryptocurrency developed by a pseudonymous security expert known only as Satoshi Nakamoto. Nakamoto wanted to demonstrate that it was possible for consumers to exchange value entirely digitally and securely without the need for the centralised fiduciary infrastructure provided by banks. It was an experiment and a nose-thumbing exercise born out of dissatisfaction with the banking status quo. And it succeeded.

Bitcoin creates new digital (or cryptographic) currency as part of an ongoing group exercise that simultaneously preserves the integrity of a publicly shared distributed ledger of Bitcoin transactions. The currency itself and the record of its transactions are little more than numbers (long ones) shared securely between mostly anonymous parties. The shared ledger merely reflects the change of ownership of bitcoins.

The true magic of Bitcoin is the way it incentivises everyone involved in the generation and use of the currency to preserve the integrity of the shared ledger – particularly the so-called Bitcoin miners. A Bitcoin miner's ultimate goal is to earn more of the currency by converting Bitcoin transactions into the digital records that comprise each new page, or "block", in the shared electronic ledger. Thousands of these miners compete at nodes on the network at any one time. Once they've assembled a block of transactions, the Bitcoin protocol requires them, essentially, to use brute-force computing power to make millions of guesses at a number (a "nonce") that, combined with the block's contents, satisfies certain conditions of a hashing algorithm built into the system. This number-crunching process is referred to as "proof of work".
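The guess-and-check loop at the heart of proof of work is simple enough to sketch in a few lines of Python. This toy version is illustrative only: real Bitcoin hashes an 80-byte block header twice with SHA-256 against a far harder numeric target, but the logic of "keep guessing a nonce until the hash clears the bar" is the same:

    import hashlib

    def mine(prev_hash, transactions, zeros=4):
        """Search for a nonce that makes the block's hash start with N zeros."""
        nonce = 0
        while True:
            block = f"{prev_hash}|{transactions}|{nonce}".encode()
            digest = hashlib.sha256(block).hexdigest()
            if digest.startswith("0" * zeros):
                return nonce, digest  # winning guess; the block can be added
            nonce += 1                # wrong guess; try again

    # Each block commits to its predecessor's hash, hence a "chain" of blocks.
    nonce, block_hash = mine("<hash of previous block>", "alice->bob: 1.5 BTC")
    print(nonce, block_hash)

    # Raising `zeros` makes a winning guess exponentially rarer; Bitcoin
    # retunes its target so the network finds a block about every ten minutes.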
It's often likened to a game of digital bingo or a lottery, and the first miner to guess the number correctly gets to add their block of transactions to the ledger, collect 25 new digitally minted bitcoins, and take a small transaction fee for every new trade in the new block. The new block contains information mathematically linking it to the previous block, hence it's a chain of blocks, or "blockchain".

If the Bitcoin system detects that an unusually large amount of computing power is being thrown at it, increasing the rate of correct guesses, it simply increases the difficulty of guessing the number, with the aim of keeping the rate at which new blocks are created to around one every ten minutes.

It is possible for an adversary to attempt to corrupt the blockchain with a new block containing fraudulent transactions (say, one that reverses transactions in a previous block). However, such an adversary would have to convince every other miner that its block was valid and continue to add new blocks on a "fork" in the chain in an attempt to supersede the "honest" chain.

The Bitcoin protocol dictates that only 21 million bitcoins will ever come into existence, and gradually the reward for adding new blocks will decrease. However, bitcoins are divisible into smaller denominations and can be valued against any other validly traded currency.

What's in it for banks?

Clearly, creating a new digital currency isn't the draw for the finance industry, and the level of effort and cooperation required to get banks globally to represent all their conventional currency in a form suitable for blockchain settlement would be colossal. However, Bitcoin's blockchain method of using a distributed ledger to provide a faster, more secure, and virtually incorruptible means of settling transactions has clearly caught their attention.

MasterCard was one of the first major financial service brands to go public with its interest in blockchain, in 2015. However, since then it has been more cautious with its public statements on the technology. Earlier this month, Microsoft announced a strategic partnership with a consortium of 40 banks known as R3 to accelerate development of distributed ledger technologies. The Commonwealth Bank of Australia – another institution that has been vocal about its experimentation with blockchain – is part of the group, which started with just nine banks in September 2015, as are NAB and Westpac, among others. In late April, ASX-listed ownership broking specialist Computershare announced a joint venture with UK-based blockchain specialist SETL to establish a securities settlement system based on the technology.

In Australia, the ASX became something of a poster child for blockchain when it revealed late last year that it was considering the technology to replace the exchange's ageing CHESS securities clearing house. Cliff Richards, ASX's general manager of equity post-trade services, believes the exchange has played a major role in boosting blockchain's credibility in Australia.

"The technology has the potential to deliver improvements in latency, certainty [of title to assets], auditability and provide a single source of truth available to all participants so that reconciliation activities [across different versions of what should be the same data] are reduced," Richards says. "All this lowers risk, which can lead to significant cost savings in operations, margin and capital requirements."
The ASX is yet to provide a great deal of detail on how it will implement the technology, but sources told iTnews it's more likely to be a private distributed database infrastructure rather than anything as radical as the Bitcoin blockchain implementation.

Dilan Rajasingham, CBA's executive manager of technology innovation, says the bank is experimenting with blockchain technology in trade finance and payments and at least three other areas, but was reluctant to discuss them in detail. Rajasingham said the bank's experiments were designed to prepare it for any "over the horizon" disruption. However, he believes there has been too much focus on blockchain's impact on financial services and not enough recognition of its potential positive contributions to society.

For instance, the bank is working with TYME, a South African start-up seeking to financially assist millions of un-banked individuals living in poverty in developing countries. Here, he says, blockchain's immutable and transparent qualities have the potential to address endemic corruption when governments in those countries attempt to financially assist their populations. "A lot of what's done now is done using manual processes, which are, like any manual process, open to interpretation and open to corruption," Rajasingham says. "Why won't we talk about making a difference to a billion peoples' lives?"

When it comes to getting banks to join the public conversation around blockchain technology, the CBA is something of an outlier. Most banks iTnews spoke to were prepared to share very little, if anything at all, on the topic. National Australia Bank chief executive Andrew Thorburn has said publicly that blockchain could underpin a "virtual banking system". However, the bank, which joined the R3 group in October last year, was not prepared to elaborate on its plans for the technology when contacted by iTnews. In a short statement attributed to NAB Labs general manager Jonathan Davey, the bank said it was involved in shaping the use of blockchain and that it believed "in the long-term this work has the potential to deliver improved security, reduce processing effort and deliver savings, which will benefit NAB and our customers".

Interest in blockchain also appears limited to the large, well-resourced banks. Smaller second-tier banks that iTnews contacted were not prepared to enter the conversation at all.

Those banks that are involved in blockchain development could find that they are competing with Australia's premier technology commercialisation engine, Data61. Established with a commitment to generating financial returns for its parent, CSIRO, as much as to exploring pure research science, Data61 has vigorously flagged its interest in blockchain. Data61's principal software systems researcher Mark Staples recently wrote in The Conversation that the organisation planned to "identify, develop and evaluate some 'proof of concept' systems using blockchains to investigate them". Data61 is working with the federal Treasury to review how blockchain can be used in both the public and private sectors, through a nine-month process intended to result in practical use-case scenarios for pilot.

Staples said blockchain ledgers could eliminate fraud in the transfer of anything of value, whether digital or physical. He held up Everledger – the creation of Australian entrepreneur Leanne Kemp, which uses blockchain to track and record ownership of diamonds and other valuable assets – as an example.
"Here, rather than the blockchain recording transfers of digital currency, it records transfers of ownership of identified physical assets," Staples wrote. "This globally accessible provenance trail could reduce fraud and theft, and enable new or improved kinds of insurance and finance services. The same general idea could be used for any supply chain, such as in retail, agriculture or pharmaceuticals."

Is blockchain overhyped?

But privacy expert Wilson and blockchain expert Toohey believe the likes of Staples are misguided in their expectations of blockchain applications outside its original role creating the Bitcoin cryptocurrency. Wilson says the argument that blockchain could be used to account for exchanges of other valuable items comes unstuck once it's recognised that cryptographic certificates still need to be bound to those items by some form of trusted third party. That, he argues, goes directly against the original intent of the pseudonymous Nakamoto's white paper, which led to the creation of Bitcoin.

Citing Staples' example of Everledger, Wilson points out that the diamonds can't physically be on the blockchain in the way bitcoins can; a third party still has to map them to it in some form of database. He argues what's being proposed is that massive amounts of compute power be thrown at problems better solved by simple spreadsheet software. "I really laugh. It's humongous over-engineering when you bring it down to earth in the real world," Wilson says.

Wilson is not entirely derisive of all distributed ledger projects inspired by blockchain, but says the conversation around it needs to be grounded in reality. "It's a bit like the Wright Brothers' flyer. It could fly 30 metres but couldn't hold its own weight in fuel. But you looked at it and you said 'okay, powered flight is possible', so why don't we work on progressive generations of the idea and see how it goes," he said. "Bitcoin was a proof-of-concept for something once thought impossible, but you can't put health records on the blockchain. It just can't be done."

Toohey also believes the technology doesn't currently warrant the attention it's getting, agreeing many of its suggested applications would be better handled with conventional databases.

Replacing human workers

However, once its true disruptive strengths are realised, Toohey argues, there will be virtually no stopping the flow of white-collar blood as the technology threatens workers in equities and commodity trading. "Blockchain is going to disrupt anyone that's an intermediary, and that means anyone that's got the word 'broker' after their job title, such as 'stock broker' or 'insurance broker', and anyone that sits at their desk and has an in-tray and an out-tray," he said. "Their jobs are all imperilled. Anyone who can be replaced by a bot will be."

Digital X's Tsvetnenko says early implementations of blockchain technology are more likely to involve removing the day-to-day grind of paper-based invoicing and fulfilment systems, given the technology's potential to eliminate disputes and legal action. "With the blockchain there's no arguing about the format of the invoice or whether it's there or not. It's impossible to argue with mathematics," he says. He remains sceptical that groups like R3 will be able to reach the level of cooperation required for wholesale change across the thousands of banks in the global system that currently rely on the SWIFT international payment network.
"The word on the street is that it's not going to be successful because it's hard enough working with one bank. Imagine 40 banks? How are you going to satisfy 40 banks that all have different ideas about what they want and how it should work," he said.

CBA's Rajasingham is more upbeat. "SWIFT didn't start with a thousand banks but it grew to that. But you need a critical mass, and that sort of critical mass can be provided by groups like R3," he says.
The following post originally came from Appendix §30 of Fredrick J. Long, Koine Greek Grammar (Wilmore, KY: GlossaHouse, 2015) and was written by alumni member T. Michael W. Halcomb, Ph.D.

In the same way that it is in our best interest to learn the grammatical and syntactical ins and outs of Koine Greek, as this book has helped us do, it is to our benefit to have some understanding of the issues surrounding the matter of pronunciation. Because the majority of English-Greek grammar books employ the so-called Erasmian pronunciation (I say "so-called" because Erasmus himself did not adopt it), and because professors have been using such textbooks for the last several hundred years, the overwhelming majority of students have accepted this framework without much question. Indeed, many have been taught that recovering any semblance of how Koine originally sounded is beyond possibility. Such a claim, however, simply misses the mark. The reality is that we can know how Koine sounded. There are a number of resources readily available and at our disposal that can assist us in this regard. Before I mention just a couple of those, however, it will be helpful to understand a bit about the context in which "Erasmian" took root and grew. For me this historical data is important and should not be divorced from discussions about whether Erasmian should continue to be used. At the same time, it is not the "nail in the coffin," so to speak, or the strongest piece of evidence we have for moving away from Erasmian to the Koine Era Pronunciation (KEP).

With regard to context, the 1400s–1600s A.D. in Europe are worthy of note, especially the locales of Greece and England. Given that I cannot provide an in-depth discussion of every significant event or person worthy of mention here, I must be selective. I want to draw our attention first, then, to the fact that in the years preceding the 1400s, French and Latin were prominent across Europe, but French was the language of power, politics, and social prestige. There came a shift around the 1500s, however, when French began to be replaced by English. While there were many dialects of English, a standard began to emerge as it was developed at the behest of royalty. The chancery (the chapel of the king) consisted of scribes and writers who worked at creating an English standard among themselves. Eventually this standard began to proliferate as it was used increasingly outside of the chancery. As English replaced French as the norm and as the chancery's English standard gained momentum, other institutions, especially the academy, began to take note and follow suit. These changes happened quite organically and, relatively speaking, over a period of hundreds of years.

This move toward an English standard also played a role in what is known as the Great Vowel Shift. I cannot explain the shift here at length, but it is worth pointing out that, basically, the vowels a, e, i, o, and u, along with ai, all shifted and took on different sounds. The influence of this change is hard to overestimate, because even today's English remains directly affected by it. As it occurred across the late 1400s to the mid 1600s, those living at the time were also dramatically affected by it. We need to realize that Erasmus himself lived during this period, a period when matters pertaining to French, Latin, and English, especially the latter, were very socially and politically charged. The pronunciation of English was at the forefront of many debates and discussions.
But this brings us to another matter, namely, the pronunciation of Greek. Following the Turkish invasion and conquest of the Greek-speaking Byzantine Empire in 1453 A.D., for the first time a sharp distinction began to be made between Ancient Greek and Modern Greek. Prior to this point no one had ever really differentiated the two in such a substantive way and in such an aggressive historical manner. In the minds of many, the political misfortunes of the Greeks confirmed that they were weak and intellectually backward; this caused non-Greeks to despise them and avoid their language. This also caused Greeks to strive to "maintain their ethnic identity," which led them to turn in upon themselves, "jealously preserving their language and culture." As one author says, "The use of the Modern Greek pronunciation for the ancient language was only part of this larger phenomenon." Thus, for the Greeks, the pronunciation of the language was a matter of national pride. Yet here, for the first time, Ancient Greek (and for our purposes, Koine Greek) was essentially declared dead. What had existed unbroken for thousands of years, despite its various permutations and changes, was now considered deceased. But the question must be asked: who declared it a dead language? And the follow-up question: why?

We cannot necessarily pin the act of rendering Koine a dead language on one person. But when we look to figures such as the Spanish humanist Antonio Nebrija, who asserted that Hebrew, Greek, and Latin had run their courses, and who spoke of "national awakening in all parts of the West," we learn that he may have been an early catalyst for changing the pronunciation of Greek. Nebrija knew Erasmus and, in fact, Erasmus may have first heard of the non-historical pronunciation from Nebrija. It should be pointed out here that Erasmus himself never adopted what later became known as the "Erasmian pronunciation." In fact, Erasmus held to a Modern Greek pronunciation. What happened was that Erasmus wrote a fable about a lion and a bear that used different Greek pronunciations, one based on Modern Greek and the other based on English, and this tale became widely popular.

As matters of language change were on the rise and as Greeks were ousted from their academic teaching posts in ancient literature departments and replaced by non-native Greek speakers, as the historian and grammarian A.N. Jannaris notes, "The first act … was to do away with the traditional pronunciation—which reflects perhaps the least changed part of the language—and then to declare Greek a dead tongue." Many jumped on the bandwagon with this thinking. Then, with enough academic elites and social powerhouses on board, the new English-based pronunciation began to spread quickly. Friedrich Blass, a professor and author living in the 1800s who even in his time referred to the Greeks of his day as half-barbarians and their pronunciation as barbaric, along with numerous other leading thinkers such as Martin Luther, "Philipp Melanchthon, Johann Sturm, and their many associates and followers," had "adopted Erasmus' teaching methods and textbooks as the basis of their educational reforms." To be sure, Erasmus talked about pronunciation in some of his works, especially the aforementioned fable. This led people to believe that he himself was an advocate of the pronunciation that became attached to his name.
These circumstances reveal that the socio-political climate of the day was ripe for the proliferation of the Erasmian pronunciation. Thus, there was not simply one person responsible for the so-called death of Koine, but rather many in the academy. Declaring Greek dead was a socio-political move; indeed, it allowed the academy to drive a wedge between Ancient and Modern Greek. In doing so, the academics could refer to Ancient Greek as "their Greek," while the Modern Greeks could deal with Modern Greek. This division, a false historical dichotomy between Ancient and Modern Greek, has persisted even until today in the academy; its main progenitors have been Western colleges, universities, and seminaries. In my opinion, it would not only be a just act but also a historically responsible one to move away from Erasmian to the KEP. And in spite of the oft-heard claim that we cannot know the KEP, we surely can, as I have suggested already.

One of the main ways that we can recover the KEP is by comparing "orthographical substitutions," that is, spelling interchanges between documents containing the same text, or between instances of the same words across different documents. I prefer to call these spelling differences "interchanges" rather than "mistakes" or "errors," as some like Bart Ehrman do, because they were in fact not errors. To call them errors, one must force modern expectations about reading and writing back onto ancient authors and scribes. Before the rise of modernism, what was written (literary works, letters, documents, etc.) was meant to be read aloud and was composed for the ear. Thus, as long as what was on the page produced the proper sounds and words when spoken, it was considered good, acceptable, and meaningful.

To use a very simple example from English, we might say that when spoken aloud, the word "meen" in the statement "The boy is meen" produces the correct sound to hearers, even though it is (mis)spelled "meen" rather than our modern standard "mean"; "meen" would nonetheless be understood by hearers. In fact, if one were to deliver an entire lecture from a manuscript whose words were spelled atypically, the audience would likely never know about the spelling interchanges. The only way they would know is to look at the manuscript. If they were to view it, they would see the non-standard spellings rather than the well-known standard spellings, and they would realize that in English "ee" and "ea" make the same sound and are, to the ear, completely interchangeable.

This is actually one way that we can go about figuring out how Koine sounded, too. If we compare how words were spelled in ancient writings to a more common standard spelling, we can recover which letters sounded alike or different. For instance, one ancient work spells the number three as τρις. When we compare this with the standard spelling τρεις, we learn that Koine ι and ει were often interchanged and thus sounded (nearly) exactly alike. In addition to comparing non-standard spellings with standard spellings, we can often just compare words within a single document. For instance, in Papyrus 66 the scribe used both τρις and τρεις; even though they are spelled differently in the document, they made the same sound when read aloud and were thus considered good and acceptable.
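This kind of comparison can even be mechanized in a small way. Here is a toy sketch in Python; the τρις/τρεις pair reflects the Papyrus 66 example above, while the other two pairs are made-up illustrations of interchange types widely attested in the papyri:

    from collections import Counter

    # (attested spelling, standard spelling) pairs of the same word
    variants = [
        ("τρις", "τρεις"),          # Papyrus 66: "three"
        ("λεγι", "λεγει"),          # illustrative: the same ι/ει interchange
        ("ανθροπος", "ανθρωπος"),   # illustrative: an ο/ω interchange
    ]

    def diff(a, b):
        """Strip the shared prefix and suffix; return what actually differs."""
        while a and b and a[0] == b[0]:
            a, b = a[1:], b[1:]
        while a and b and a[-1] == b[-1]:
            a, b = a[:-1], b[:-1]
        return (a or "-", b or "-")

    print(Counter(diff(a, s) for a, s in variants))
    # Counter({('-', 'ε'): 2, ('ο', 'ω'): 1})
    # Reading: scribes drop the ε of ει before ι (so ι and ει were homophones),
    # and swap ο for ω (so those two letters, too, sounded alike).

Tallied across thousands of such pairs rather than three, counts like these are exactly the evidence for which letters had merged in sound.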
Beyond this type of analysis, many other ways to recover the KEP exist: we can read ancient texts that talked about pronunciation; we can look for rhyme and assonance in poetry (this gives us clues as to which letters and syllables sounded alike); and we can use tools from the field of historical phonology/linguistics to help chart both synchronic and diachronic sound change. At the end of the day, it is simply erroneous to claim that we cannot know how Koine sounded. The bald claim that such a task is beyond recovery finally needs to be put to rest. As scholars, researchers, teachers, and learners, our role should not be to regurgitate statements we may have read or heard along the way without checking to see whether or not they can be substantiated. Instead, if we are in the business of teaching truth and doing so in a true manner, then we will let the evidence lead us. I am convinced with regard to the pronunciation of Koine that such evidence abounds; for this reason I have left Erasmian behind and embraced the KEP.

T. Michael W. Halcomb, "Never Trust A Greek…Professor: Revisiting the Question of How Koine Was Pronounced," paper presented at the annual meeting of the Stone-Campbell Journal Conference, Knoxville, Tenn., 14 March 2014.

A.N. Jannaris, An Historical Greek Grammar Chiefly of the Attic Dialect As Written and Spoken From Classical Antiquity Down to the Present Time: Founded Upon Ancient Texts, Inscriptions, Papyri and Present Popular Greek (London: Macmillan and Co., 1897), viii.

Attributed to F. Blass in Chrys C. Caragounis, "The Error of Erasmus and Un-Greek Pronunciations of Greek," Filología Neotestamentaria 8 (1995), endnote #12. I was unable to gain access to the cited source firsthand.
Simulation is any artificial construct that represents a real-world process. Technology-enhanced simulation in modern health care is growing exponentially, but the basic concept of "practice" to enhance real-world performance in medicine dates back centuries. For example, in the 1600s–1700s, birthing simulators were built from human bones, leather, wood, and wicker to train midwives how to manage common birthing emergencies [1].

Today, simulation sessions can be loosely categorized into three formats: 1) imaginative scenario-based work (paper or tabletop simulations), 2) individual or team-based manikin simulations and standardized-patient-based simulations, and 3) specific task-based trainers. Modern surgical simulation focuses largely on the latter two approaches and is typically differentiated from other forms of medical simulation by the use of specific surgical models or tasks during the simulation. However, despite the rapid growth of simulation in health care, operative training models that replicate the practice of surgery remain underdeveloped.

Currently, there is a robust discussion in the simulation community regarding center-based versus in situ simulation locations. Center-based simulation typically occurs outside of a patient care facility. These centers are flexibly designed to accommodate a wide variety of participants from all health care groups, as well as different patient care settings [i.e., inpatient wards, outpatient clinics, emergency department bays, radiology imaging suites, or operating rooms (ORs)]. While such flexibility allows for a wide range of participants and settings, it often limits exact replication of any one particular environment's physical space and equipment. In situ simulation is defined as simulation in the learners' actual work environment. Advantages of in situ locations include lower equipment and overhead costs, the ability to train more participants in a given amount of time, enhanced simulation realism, identification of latent system errors, and improved convenience for both the learners and faculty [2]. However, in situ locations generate unique logistical and organizational challenges.

In this article, we describe our experience in integrating simulation into the existing OR complex at Massachusetts General Hospital (MGH). We hope this description fosters increased medical collaboration with engineering communities, not only with respect to the advancement of realistic surgical modeling, but also in designing integrated systems to serve both clinical and educational needs in the health care environment.

Surgical History of MGH

The MGH in Boston currently performs over 37,000 operations a year and trains more than 1,000 medical students, residents, and fellows. Inside the MGH Bulfinch Building is the Ether Dome, the site of the first public demonstration of ether anesthesia for a surgical operation in the United States. On 16 October 1846, William T.G. Morton, a Boston dentist, successfully administered ether to Gilbert Abbott, which allowed MGH cofounder Dr. John Collins Warren to painlessly remove a tumor from Mr. Abbott's jaw. This landmark event at MGH started a long tradition of performing operations that select members of the public and medical community could observe from a platform surrounding the OR table. This open environment ceased with increased understanding of germ theory and the importance of sterility in preventing surgical infection.
However, in 1939, MGH designed enclosed surgical observation decks for the opening of the ORs in the White Building. These former "state-of-the-art" two-story, sealed observation decks overlooked several OR theaters (see Figure 1). Except for the one overlooking OR#5, these historical decks were ultimately closed, and most were demolished. In 2012, a wing of new ORs opened in the MGH Lunder Building, allowing three of the original White Building ORs, including OR#5 and its observation deck, to be used for simulation training.

A New Vision for OR Training

The vision for the new in situ OR simulation suites was to provide a physical space to conduct high-fidelity OR team-training simulations; to allow practice opportunities for individual operative skills; and to test new OR devices, procedures, and policies. This vision was integrated with the overall institutional goals of discovering latent organizational risks, supporting residency training and nursing education programs, and promoting the continued professional development of health care providers through high-quality educational offerings. We also recognized that the ultrahigh-fidelity nature of this simulation space would be an ideal laboratory in which to develop and test new assessments of individual and teamwork performance within an OR environment.

The concept of creating simulation training opportunities within an active OR complex presented exciting opportunities as well as important challenges. In addition to its inherent authenticity, the idea of an in situ surgical simulation space had definite historical appeal, harkening back to MGH's use of observation galleries as an educational tool. Financially, there were also real cost savings in using existing facilities rather than building a brand-new surgical simulation center. OR staff participating in simulation sessions also saved considerable time by eliminating travel to an off-site training locale. The participants could then return promptly to their clinical responsibilities, thereby increasing overall employee productivity for their departments. However, utilization of the existing physical OR space did not allow for the renovations typical of an off-site simulation center (removing/moving walls, adding one-way mirrors, etc.). In addition, management of the operational workflow of an in situ simulation center inside a live operating suite was critical.

Some of the critical questions considered included the following: Would all equipment used in the simulation center be part of the active OR inventory? Would equipment ever be used that was not part of the active OR inventory? How would sterile supplies be maintained? Would we use real or simulated medication vials? If using real medication vials, how would we balance this against the ongoing national medication shortages? How would actual patients presenting for surgery and passing through the hallway adjacent to a simulation perceive their experience? What would happen if a simulation participant left the simulation environment to obtain help from nonsimulation participants? These questions helped us develop guiding principles and policies to ensure that the in situ simulation supplies, equipment, and processes were "sealed" and that no training equipment or processes "leaked" into the real ORs.
To this end, we created OR simulation policies and a standardized orientation for all simulation participants, reviewed simulation safety protocols at the beginning of each session, installed explicit signage for all simulation spaces, and attached "simulation-only" labels to all supplies and equipment. As an additional fail-safe against cross-contamination of any "mock" elements into the real-world environment, we stocked and maintained only clinical-grade equipment, supplies, instrumentation, and medications in each simulation room. We worked closely with our perioperative biomedical engineering and pharmacy groups to ensure that these elements are maintained regularly and match the real ORs. We also carefully designed several engineering work-arounds of the existing phone/overhead intercom, anesthesia machine, manikin, clinical monitoring, and computer systems so that our training activity would not interfere with routine OR operations. The observation deck acts as a control area from which to operate the manikin and other simulation elements; it also provides an opportunity for individuals to view the proceedings, out of sight of the simulation participants.

Similarly, we worked closely with our perioperative information technology group to create patient names and medical record numbers that would only be used for the patients in the simulation scenarios. This opened up the potential to track supplies, equipment, and medication costs for the simulated patients, just as we would in a real case. We also worked with the MGH Laboratory for Computer Science to create a radio-frequency identification badge reader system that allowed simulation participants to sign in for each simulation session with a simple badge swipe. This system allowed us to easily see who was assigned to take care of a simulated patient for that session, as documented in our OR computer systems. The simulated patient also appeared on the real OR schedule so that the OR community could see when the rooms were being used and what "operation" the teams were going to perform. These entries served to highlight the concept of continuous education as a key element of daily operations and to emphasize the connection between simulation and actual patient care.

Training Experience and Future Needs

We started using our new in situ simulation ORs for operative training in late 2012. Since that time, we have conducted nearly 400 OR simulations for many specialties, including general surgery, pediatric surgery, burn surgery, laryngology, oral and maxillofacial surgery, urology, obstetrics and gynecology, orthopedics, thoracic surgery, and cardiac surgery. Our multidisciplinary approach has included surgeons (attending/resident physicians), anesthesia providers (attending/resident physicians and certified registered nurse anesthetists), registered nurses, and surgical technologists, which helps to enhance and enrich the educational opportunities. Over 1,000 health care staff have participated in OR simulation experiences in partial fulfillment of various curricula and continuing education initiatives, including medical malpractice discounts, continuing education credits, Accreditation Council for Graduate Medical Education milestone achievement, the Anesthesia Crisis Resource Management curriculum, and the American College of Surgeons and Association of Program Directors in Surgery National Surgical Skills Curriculum.
While the cognitive and teamwork experiences in our simulations are quite realistic, there remain technical gaps that present challenges unique to the in situ simulation environment. Only close collaboration with inventors and engineers will yield high-fidelity surgical models that can be fully integrated into an already complex operative environment. For example, our manikins were not originally designed for regular positive-pressure ventilation from an anesthesia machine, so several modifications were made to ensure adequate pulmonary compliance and to avoid false alarms from the anesthesia machine for low tidal volumes/pressures and air leaks. The manikins were also modified to prevent internal damage from irrigation fluids, sharp cutting instruments, or simulated blood. Various surgical models, such as an abdomen for laparoscopic operations and a bleeding tumor, had to be engineered and placed on top of the manikins due to lack of internal space. New modifications are continually needed to create OR situations that are realistic enough to engage the surgical team (surgeons, scrub technologists, circulator nurses, and anesthesiologists) without being excessively bulky or damaging to the manikin. In particular, computerized algorithms that could mimic ventilator characteristics and responses to medications would remove the need for pumps and bellows within the manikin's thoracic and abdominal cavities. The removal of this equipment would free internal space within the manikin and allow direct internal integration of surgical models to enhance scenario realism and the overall engagement of surgical teams.

In the future we hope to make simulation staffing and resources even more accessible to surgical teams as part of their routine OR workday. Previous research has shown that dress rehearsals prior to actual surgery benefit teams that routinely practice with simulation. We also feel that in situ simulation allows teams to practice in their own environment as opposed to traveling to a simulation center. This benefit appears to enhance accessibility and overall learner satisfaction with the simulation sessions.

In summary, the development and integration of three in situ simulation ORs into the working OR clinical environment has required advanced vision, creative use of OR space, and ongoing collaboration across hospital services. Beyond their profound educational value, these rooms have provided robust opportunities to refine OR policies and procedures, enhance the OR safety culture, and support collaborative research opportunities, all of which help us to continuously improve patient care. We are currently exploring the development of simulation-based credentialing metrics that could be incorporated into Joint Commission processes. We look forward to expanding our work with engineering communities to help advance the field of surgical simulation modeling and operations.

- A. A. Wilson, "New synthesis: William Smellie," in The Making of Man-Midwifery: Childbirth in England 1660–1770. London: Univ. College London Press, 1995.
- K. M. Ventre, J. S. Barry, D. Davis, V. L. Baiamonte, A. C. Wentworth, M. Pietras, L. Coughlin, and G. Barley, "Using in situ simulation to evaluate operational readiness of a children's hospital-based obstetrics unit," Simul. Healthc., vol. 9, no. 2, pp. 102–111, Apr. 2014.
- J. D. O'Leary, O. O'Sullivan, P. Barach, and G. D. Shorten, "Improving clinical performance using rehearsal or warm-up: an advanced literature review of randomized and observational studies," Acad. Med., vol. 89, no. 10, pp. 1416–1422, Oct. 2014.
What is the Difference Between Curing, Managing, and Treating a Condition?

There's some confusion about the difference between a treatment and a cure. For example, in a Washington Post fact-checking article, two sources use the terms "cure" and "treatment" interchangeably. The group Vote for Cures says, "10,000 diseases, only 500 cures." Later on in the article, the Post quotes FasterCures, an affiliate of the Milken Institute, as saying, "10,000 diseases. Only 500 treatments. We have work to do." This is important because the Post was fact-checking the claim that there are 10,000 diseases and 500 cures. If the claim that there are 500 treatments gets thrown into the mix, then we're talking about something else entirely. A world where there are only 500 treatments for 10,000 diseases is a scary one indeed. What's the difference between a treatment, a cure, and symptom management, and why does it matter? To properly pin this down, we have to start with the definition of a cure.

What is the Definition of a Cure?

A "cure" is a remedy for a disease that eliminates the disease. A cure can work on an individual level or on a group level. If a cure works on a group level to the extent that there is no trace of the disease left in the population, we can say the cure eradicated the disease. In a Psychology Today article on the difference between healing and curing, psychologist Lissa Rankin defines curing as "eliminating all evidence of disease." This definition is problematic for one reason: no "evidence" of a disease doesn't always mean the disease is gone. Typically, when we say "cure," we're referring to the elimination of a disease, and not just the suppression of its symptoms. However, the medical/scientific community speaks in terms of evidence, and complete certainty is rare. True cures that eliminate diseases are also rare. This is probably the reason why no real doctors weigh in on the Quora question, "What is the difference between a cure and a treatment?" One responder points out that the majority of medical dictionaries don't define "cure." Still, for our purposes, we'll rely on the Merriam-Webster definition of cure, which is "recovery or relief from a disease," or "a complete or permanent solution or remedy."

Can Diseases or Illnesses be Cured?

Some diseases or illnesses can be cured. According to Katherine J. Wu, a Harvard Ph.D. candidate, we could cure a fifth of the world's population of infectious tropical diseases, such as "river blindness," a painful parasitic disease that eventually causes irreversible blindness. Satoshi Omura and William Campbell won the Nobel Prize in 2015 for discovering a "miracle drug" called ivermectin that can treat and cure river blindness. The drug can completely rid sufferers of the disease and can stop them from going blind if they take it soon enough. Malaria is a curable disease that kills a child every 45 seconds, with 90 percent of the deaths occurring in Africa, where mosquitoes transmit malaria parasites. The CDC points out that curing malaria is a matter of diagnosing and treating the disease promptly and correctly. Lyme disease, which is spread by tick bites, is also curable. According to Daniel Kuritzkes, MD, chief of the Division of Infectious Diseases at Brigham and Women's Hospital in Boston, "Lyme disease is always curable." The disease affects some 30,000 people per year, but like malaria, it can be cured with proper diagnosis and treatment.
The journal Annals of the Rheumatic Diseases reports that the most common kind of arthritic disease, gout, is a curable disease. There are a number of curable diseases. However, many curable diseases, like tuberculosis, continue to be a problem for one reason or another.

What is an Incurable Condition?

People often hear the word 'incurable' and think 'terminal,' but that's not what the word means. An incurable condition is a health issue the medical community has not yet been able to find a solution to. For example, a person's leg could be so badly broken and shattered that no amount of surgery can heal it. This person would have an incurable condition that requires them to get a prosthetic leg. This is different from an incurable, chronic disease: an illness that affects the person for a prolonged period of time but does not necessarily cause death. Wikipedia provides a long list of incurable diseases, but there's no indication as to which of them are fatal conditions. Various forms of mental illness, such as schizophrenia, are incurable, but they are not in themselves fatal. Additionally, many skin conditions, such as psoriasis and eczema, are treatable with topical creams and oils but are not curable conditions. Moreover, if we're talking about mental illnesses that qualify as an incurable 'disorder' or condition, there's a difference between an incurable disorder and a chronic disease. John Cooper of the World Psychiatric Association points out that "disorders are different from diseases," in that "currently recognised disorders are no more than symptom clusters." Many incurable or chronic conditions have treatment options that can greatly alleviate symptoms. Asthma, HIV, epilepsy, and various forms of visual impairment, such as glaucoma, are all considered incurable. However, treatments are available that can ameliorate or slow the onset of symptoms.

What is the Medical Definition of a Treatment or Symptom Management?

The word "treatment" has several definitions: "the management and care of a patient," or "the combating of a disease or disorder." Doctors and nurses engage in both kinds of treatment; if you're treating a disease, you're fighting it; if you're treating the symptoms, you're performing symptom management. The National Cancer Institute (NCI) defines symptom management as "care given to improve the quality of life of patients who have a serious or life-threatening disease." According to the NCI, the goal of symptom management is to "prevent or treat as early as possible the symptoms of a disease, side effects caused by treatment of a disease, and psychological, social, and spiritual problems related to a disease or its treatment." The NCI defines symptom management in terms of "serious or life-threatening disease" because the NCI's focus is cancer. However, doctors, nurses, and psychiatrists help patients manage symptoms for conditions that aren't necessarily life-threatening. For example, a doctor may prescribe a sleeping pill to a patient with insomnia. The prescription is a form of symptom management because it's not an attempt to treat the root cause of insomnia, which may be psychological, physical, or genetic. In 2018, the FDA approved an oral CBD solution for the treatment of seizures caused by two rare and severe forms of epilepsy. Since seizures are a symptom of epilepsy, this cannabidiol solution helps to manage the symptoms of an incurable disease.
Cannabidiol (CBD) is one of the active compounds in cannabis plants, and the medical community is starting to look more seriously at using CBD as a therapeutic treatment for a variety of illnesses.

What Does Remission Mean?

When it comes to chronic diseases or conditions, remission occurs when the symptoms or problems go away. There are stages of remission, and doctors consider it an ongoing state of amelioration, a de-escalation or lessening of the problem. Simply put, remission is relief from symptoms; however, it's not meant to signify that a disease or condition is cured.

Common Alternatives for Symptom Management

People often seek out their own symptom management solutions in addition to, or instead of, pharmaceutical prescriptions. A person suffering from anxiety and insomnia can try mindfulness meditation, which could help alleviate mental stress. Since chronic pain is a complex condition, people often try a variety of alternative treatment options, including acupuncture, therapy, exercise, supplements, vitamins, chiropractic care, and sometimes even cannabis. CBD is becoming a popular alternative for symptom management, though, apart from the epilepsy medication mentioned above, it hasn't been approved by the FDA. In one study, about 62 percent of CBD users employed some form of CBD in an attempt to treat medical conditions such as pain, anxiety, and depression. CBD can be derived from hemp, which is a legal type of cannabis that doesn't cause the user to get high. People can make their own CBD products, but many choose to rely on professionally produced products that carry a stamp of quality. CBD is not a treatment or cure for disorders or diseases, and, until more research is done and studies are completed, it shouldn't be considered one. Nevertheless, CBD has shown promise and potential for symptom management and relief; only time will truly tell.

What is the Difference Between Disease and Disorder?

As mentioned earlier, there's a difference between disease and disorder, but what, exactly, is the difference? According to Merriam-Webster, a disease is a condition that "impairs normal functioning and is typically manifested by distinguishing signs and symptoms." A disorder, on the other hand, has to do with abnormal versus normal functioning of the body. Conduction disorders are a good example of how abnormal functions may or may not manifest. One type of conduction disorder, a "bundle branch block," causes electrical impulses to take a different path through the heart's ventricles. However, it's possible for someone with a bundle branch block to experience an absence of symptoms their entire life. Even if it is asymptomatic, a bundle branch block is still considered a "disorder" because this is not the way most people's ventricles conduct electricity. Therefore, a disorder is a defect or disturbance in function that may or may not manifest itself through distinguishing signs or symptoms. As such, a person could have a disorder their whole life and no one would know it. When symptoms are present, doctors can look for a cure or treatment for diseases; with disorders, they can try medications, therapies, implants, or surgery to try to put things back in order. In either case, symptom management is necessary for persistent conditions with symptoms that are tough to bear. CBD oil has shown promise as an additional or substitute method of managing symptoms of multiple diseases and disorders. With more research and testing, CBD may soon be accepted as a medical treatment option, or even as an alternative treatment.
What is Comorbidity?

Comorbidity occurs when a person is suffering from more than one condition at the same time. Sometimes, but not always, these conditions can play off of and worsen each other. Comorbid insomnia and sleep anxiety, for example, can create a snowballing, downward spiral of sleep loss. Other conditions are commonly experienced comorbidly. For example, about 60 percent of people who suffer from anxiety also suffer from depression. Many people who suffer from mental illness also suffer from a substance use disorder. Chronic pain and chronic fatigue comorbidity is also a problem. Like disease or disorder, comorbidity is a problem anyone can experience during their lifetime because of the world we live in and the genes we're given. There may not be 10,000 cures for every 10,000 diseases, but there are many treatments that can help make symptoms easier to bear.
In this article we are going to look at the British Pound, the world's oldest currency still in use. We will explore the Pound in a historical context, looking at its roots in the Forex market; we will follow the Pound into the modern era and on into the recent Brexit saga. We will also look at the events that can impact the Pound on a day-to-day basis and in the longer term, and at what lies ahead for the U.K.'s currency.
- What is Forex?
- The Pound defined
- What impacts the GBP Forex rate?
- The Pound in a historical context
- GB Pound in the modern era
- The Pound in the Brexit era
- The future for Sterling

What is Forex?

Forex Market Definition: A market to exchange one currency for another for immediate or future delivery.

Before we can take a look at the GB Pound specifically, we should first place the Pound in the context of the financial market that we are looking at, the Forex market. The Forex markets are also called FX, foreign exchange and currency markets. They are non-centralized, global markets for trading different currencies. You can see our broad History of the Forex Markets here, which should help with the historical view of how the Foreign Exchange markets developed. Trading forex comprises buying one currency and simultaneously selling another. The primary centre of the FX market has been London since the end of Bretton Woods, given its central geographic position. Forex is traded over the counter (OTC) and on exchange (FX futures are traded on the Chicago Mercantile Exchange (CME)). It features a large turnover, between $5 and $7 trillion per day, and is very liquid, meaning that two-way prices are available and it is relatively easy to open and close market positions. The FX market is global and is a 24-hour market, 5.5 days a week (opening in Asia on Monday morning and closing in the US on Friday evening). It also has low margins, tight spreads and a large scope and variety of participants. These include: central banks, investment banks, multinational companies, hedge funds, investment managers, FX brokers, importers and exporters, and speculators/leveraged retail investors.

The Pound defined

The Pound is the common name for the currency of the United Kingdom (U.K.), which is officially known as Pound Sterling, sometimes just Sterling. In Forex terms its ISO code is GBP. The Pound is the fourth most traded currency globally on the Forex market, after the so-called G3 currencies: the US Dollar, the Euro and the Japanese Yen. Alongside these three currencies and the Chinese Yuan, the Pound is one of the five currencies that feed into the calculation of the International Monetary Fund's Special Drawing Rights. Furthermore, the Pound comes in at number five on the list of currencies held in global reserves. The symbol for the Pound is £, which derives from the letter L, from the Roman word libra, meaning pound (in weight). In Forex markets, the Pound against the US Dollar exchange rate is often referred to as "Cable". This comes from the fact that in the 1800s the GBPUSD Forex rate was transmitted between the US and U.K. via a transatlantic cable.

What impacts the GBP Forex rate?

The five main factors influencing the short-term value of the Pound (GBP) in Forex markets are:
- U.K. economic conditions and data
- Global economic conditions and data
- U.K. interest rate moves and expectations
- Interest rate differentials
- Geopolitical influences
A stronger U.K. economy, or the perception of a stronger economy as indicated by short-term economic data, has a direct influence on the currency, with an improving economy encouraging a stronger Pound. Conversely, a weakening economy or weaker economic data tends to lead to a weaker Pound. Furthermore, changes in the global economic outlook can also impact the Pound Sterling, though these impacts can sometimes be difficult to predict and will depend upon the relative position of the U.K. economy in relation to the global economic backdrop. In addition, the level of U.K. interest rates, which are set by the U.K. central bank, the Bank of England, also impacts the value of the Pound in Forex markets. It tends to be the case that higher interest rates, or the perception of higher interest rates, will lead to an appreciation in the Pound. Conversely, a lowering of interest rates or the expectation of interest rate cuts will tend to see a depreciation in Sterling on Foreign Exchange markets. Therefore, monitoring the activity of the Bank of England, alongside speakers from the Monetary Policy Committee (MPC) of the Bank of England, is essential in understanding movements in the Pound on currency markets. In addition, all Forex markets are relative-value markets; that is to say, one currency is exchanged for another currency. This means that the differential between U.K. interest rates and the interest rates of other countries influences the level of the Pound against those other currencies. Finally, geopolitical impacts from the likes of politics, trade wars, terrorism and the weather can affect all global financial markets, with the Foreign Exchange markets and the Pound no exception.

In the longer term, the Pound Sterling's value is impacted by:
- The longer-term U.K. economic condition
- The longer-term global economic condition
- Long-term geopolitical shifts

The longer-term influences on the Pound are similar to the short-term influences, but we would be looking at the longer-term shifts in the U.K.'s economic outlook and also the longer-term global conditions. In addition, bigger-picture, structural geopolitical shifts could have an impact on the Pound's value against global currencies.

The Pound in a historical context

We have produced a comprehensive History of the Forex Markets here, which you should read to give an understanding of where the Pound sits in the broader framework of Forex markets. As the planet's oldest currency still in use, the Pound has seen the comings and goings of the Gold Standard monetary system, the Bretton Woods System, the Smithsonian Agreement, the Plaza Accord, the European Exchange Rate Mechanism, the birth of the Euro and the Brexit divorce from the EU. In this section, however, we are going to look more specifically at the historical context for the Pound, from ancient times to decimalisation in the early 1970s. The Pound dates back to the 8th Century, to Anglo-Saxon times, when 240 silver pennies were the equivalent of one pound in weight of silver. This then developed into the current U.K. currency, the Pound Sterling, named for the weight of sterling silver. Silver was the legal basis for the currency until 1816, though during Tudor times the silver coins were debased on a number of occasions. From 1816, the Gold Standard was officially adopted, with the Bank of England issuing legal tender notes valued relative to gold. The Pound was used extensively on a global basis during the 18th and 19th Centuries as the British Empire grew.
It circulated in Canada, Australia, India, the Caribbean and South Africa. The Gold Standard was suspended at the start of World War One in 1914; a new version was reinstated in 1925, but then abandoned in 1931 during the Great Depression. During World War Two, in 1940, the Pound was pegged to the US Dollar at $4.03, and it was eventually devalued to $2.80 in 1949.

GB Pound in the modern era

The Pound has also been on a rollercoaster in the more modern era, which really kicked off with decimalisation on 15th February 1971. After the move to a decimal system, the Pound went on to float freely against global currencies from August 1971, after the breakdown of the Bretton Woods System. Sterling then rode out the 1976 Sterling Crisis, followed by an aggressive rally and selloff in the Thatcher era of monetary policy, which saw GBPUSD rally to $2.40 in 1979 and plunge to $1.03 in 1985. There followed a move to shadow the German Mark from 1988, and then to track the European Currency Unit (ECU) as part of the European Exchange Rate Mechanism (ERM). However, the Pound crashed out of the ERM on Black Wednesday, 16th September 1992, after aggressive speculation against the currency, famously by the speculator George Soros. The U.K. government then decided to opt out of adopting the Euro, which fully came into being on 1st January 1999. The Pound also suffered during the first part of the 2008-2009 global financial crisis, plunging in value against both the US Dollar and the Euro.

The Pound in the Brexit era

On 23rd June 2016 the U.K. European Union Membership Referendum took place, now commonly known as the Brexit vote. The result was for the U.K. to leave the European Union, which immediately established a new era for the Pound's relationship with global currencies. The Pound immediately weakened against the Euro by 5%, with GBPEUR moving from 1.30 down to 1.23, then by October lower to 1.1450, a depreciation of roughly 12% from its pre-referendum level. On the same timeline, the Pound versus the US Dollar, the Cable Forex rate (GBPUSD), went from 1.46 to 1.37 overnight and to 1.22 by October 2016, a 16% decline. After winning a landslide victory in the December 2019 general election under Boris Johnson, the Conservative Party took the U.K. formally out of the EU on 31st January 2020 under the agreed Brexit deal. Despite some concerns throughout 2020, the UK completed its separation from the EU on 31st December 2020, when the transition period ended. During this phase, the Pound gained versus both the Euro and the US Dollar, despite a significant sell-off in Q1 2020 in reaction to a strong US Dollar in the wake of the global spread of COVID-19. GBPUSD strength resumed in 2020 and has carried forward into 2021, with the extremely impressive rollout of the UK vaccination program.

The future for Sterling

The short-term future for the Pound is unclear. Although the UK COVID-19 vaccination rollout has been impressive and has allowed the UK economy to re-open quicker than European economies, much of this positivity has already been priced in. In the medium term, what will be key is the start and pace of the unwinding of the super-easy monetary policy that is in place globally in reaction to the COVID-19 pandemic. The relative differences in the undoing of this easy monetary policy across differing nations will lead to changing interest rate differentials and will be key to Forex levels into the 2020s. Where the Pound sits in this environment, only time will tell.
In this article we have reviewed what the foreign exchange market is and what the Pound Sterling is. We have also explored what impacts the value of the Pound against other currencies on Forex markets, and we have looked at the U.K. currency in a historical context, through the modern era and into the current Brexit phase. As for the future of the Pound, one thing is almost certain: it will NOT be dull!
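As a closing worked example, the Brexit-era moves quoted above are straightforward percentage changes. Here is a minimal sketch in Python, using the approximate rate levels cited in this article (not live market data):

```python
def pct_change(old_rate: float, new_rate: float) -> float:
    """Percentage change of an FX rate moving from old_rate to new_rate."""
    return (new_rate - old_rate) / old_rate * 100

# Approximate levels cited above, June to October 2016.
print(f"GBPUSD 1.46 -> 1.22:   {pct_change(1.46, 1.22):+.1f}%")    # about -16%
print(f"GBPEUR 1.30 -> 1.1450: {pct_change(1.30, 1.1450):+.1f}%")  # about -12%
```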
Posted by Brent Wilson on 8/9/2016 to Fertilizing & Watering Tips

Nandina, commonly called heavenly bamboo or sacred bamboo by some, are immensely popular, versatile evergreen shrubs with colorful, lacy foliage that resembles...you guessed it, bamboo. When established, Nandina are exceptionally drought-tolerant. Bugs aren't a problem, and when planted properly they have no disease problems. Although the standard Nandina domestica grows as a tall green shrub that turns a little red in winter and produces white flower spikes and clusters of red berries, other new Nandina varieties offer different growth habits and foliage colors, and don't produce flowers or berries. When planted right, and in the right spot, Nandina plants are exceptionally versatile and easy to grow and care for. Here's a breakdown of what you need to know to plant and care for Nandina plants...

Nandina is not picky about soil type; however, it prefers a somewhat loose, fertile and well-drained soil. As with so many other types of ornamental plants, constantly soggy or wet soil can and often will cause root rot and other harmful plant diseases.

How To Test Soil Drainage

If you are uncertain about soil drainage in the area you intend to plant your Nandina, it's well worth taking the time to test the drainage before planting. To test soil drainage, dig a hole 12" wide by 12" deep in the planting area. Fill the hole with water and let it drain. Then, after it drains, fill it with water again, but this time clock how long it takes to drain. In well-drained soil the water level will go down at a rate of about 1 inch an hour. A faster rate, such as in loose, sandy soil, may signal potentially dry site conditions. A slower rate indicates poor-draining soil and is a caution that you need to improve drainage, plant in a raised mound or bed, or look for plants that are more tolerant of wet or boggy conditions.

Soil pH is a measurement of the alkalinity or acidity of soil, measured on a scale of 0-14, with 7 as the neutral mark. Any measurement below 7 indicates acid soil conditions, and anything above 7 indicates alkaline. Nandina grow best in a moderately acid to slightly alkaline soil ranging from 6.0 to 7.5 on the pH scale. Most average garden soils fall between a pH range of 6.0 to 7.0. If you're unsure about the pH of your soil, or whether or not it's suitable for growing Nandina, it's a good idea to test the soil pH in the planting area. You can quickly test soil pH with an inexpensive soil pH testing kit or probe. To raise the pH (make the soil more alkaline) you can add pelletized limestone to the soil. To lower the pH (make the soil more acid) you can apply Soil Sulfur, Aluminum Sulfate, or Chelated Iron. Adding organic compost to the soil or using compost as mulch can also help to increase acidity and maintain acid soil conditions.

When it comes to light, Nandina are exceptionally versatile. Plant them in sun or shade and they'll do fine. That said, foliage colors will be more intense with more sun.

Planting Nandina In The Ground

Scroll down for container planting instructions.

Start by digging your planting hole at least two to three times as wide as, and no deeper than, the rootball of your Nandina plant. The wider the hole the better. Place the native soil removed from the planting hole around the perimeter of the hole, in a wheelbarrow, or on a tarp. Depending on the type, fertility and porosity of the native soil in the planting area, it might be beneficial to amend the native soil.
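As a quick aside, the drainage test described above reduces to a single number: how many inches the water level drops per hour. The sketch below restates that guidance in Python; the exact cutoffs are illustrative assumptions built around the article's roughly 1-inch-per-hour benchmark, not horticultural standards:

```python
def classify_drainage(inches_dropped: float, hours: float) -> str:
    """Classify drainage from the second fill of a 12-inch by 12-inch test hole."""
    rate = inches_dropped / hours  # inches of water level drop per hour
    if rate > 1.5:   # assumed cutoff for "fast" (loose, sandy soil)
        return "fast draining: site may run dry; consider moisture-retaining amendments"
    if rate >= 0.5:  # assumed band around the ~1 inch/hour benchmark
        return "well drained: fine for Nandina"
    return "poorly drained: improve drainage or plant in a raised mound or bed"

# Example: on the second fill, the water level fell 6 inches in 6 hours.
print(classify_drainage(6, 6))  # 1.0 inch/hour -> well drained
```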
When planting in dense clay or other compacted soils, it is beneficial to thoroughly mix in some bagged top soil and/or a good planting mix at a 50/50 ratio with the soil removed from the planting hole. When planting in very sandy, quick-draining soil, you might want to consider mixing in some top soil, peat moss and/or compost to help retain moisture. When planting in fertile, loamy, well-drained soil there is no need for a soil amendment.

Be very careful when removing your Nandina plant from the nursery pot it was growing in. Gently try to lift the plant from the pot. If the rootball is stuck in the pot, cut the container away to avoid damaging the plant. After having removed the plant from the container, use your fingers or a claw tool to gently loosen some feeder roots around the surface of the root ball.

When Mass Planting Dwarf Nandina Varieties

If you are mass planting Nandina plants as a groundcover over a large area, it is helpful to calculate the total square footage of the planting area. Once you have the square footage, and have determined how far apart you will space plants, you can calculate how many plants it will take to fill the planting area, as shown in the sketch after this section. (Under the description tab on every Nandina plant page in Wilson Bros Gardens you will find spacing recommendations.) Set and space all plants in the planting area before starting to plant. Alternatively, you can use marking paint to mark the spot where each plant will go, which is often necessary when planting on steep slopes, where plants in containers will not stay put. If there will be more than one row of plants, begin by setting out or marking one straight row of plants. It's best to start along the edge of the planting bed, making sure to space plants far enough from the edge of the planting bed to allow for future spreading. For example, plants with a recommended spacing of 24" apart should be spaced at least 12" from the edge of the bed (or surfaced area) to the center of the plant. After setting out the first row, stagger the plants on the second row and so on until the space is filled.

If you are planting in well-drained soil, set your Nandina in the planting hole so that the top edge of the rootball is at or slightly above ground level. If your soil drains slowly, holding water for an extended period of time after rainfall or irrigation, the top of the root ball should be 2 to 3 inches above ground level, as shown in the planting diagram below. If necessary, add some backfill soil mixture to the bottom of the hole to achieve the proper planting height. NOTE: If the soil is poorly drained (constantly soggy or wet), improve drainage, plant the root ball in a raised mound entirely above ground level, or select a different plant species more tolerant of wet soils.

After setting your Nandina in the planting hole, use one hand to hold the plant straight and your other hand to begin back-filling your soil mixture around the root ball, tamping as you go to remove air pockets. When you have filled the hole to the halfway point you can soak the soil. Then continue back-filling to the top edge of the root ball. If you are planting higher than ground level, taper your soil mixture gradually from the top edge of the root ball down to the ground level, as shown in the planting diagram above. To avoid suffocating your plant, do not place any soil on top of the root ball.
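Here is the mass-planting calculation referenced above, as a minimal sketch. It assumes a simple square grid (staggered rows will change the count slightly) and is an illustration, not Wilson Bros Gardens' official calculator:

```python
import math

def plants_needed(area_sqft: float, spacing_inches: float) -> int:
    """Estimate how many plants fill an area on a square grid."""
    spacing_ft = spacing_inches / 12      # convert spacing to feet
    per_plant_sqft = spacing_ft ** 2      # ground area each plant covers
    return math.ceil(area_sqft / per_plant_sqft)

# Example: a 10 ft x 20 ft bed with plants spaced 24" apart.
print(plants_needed(10 * 20, 24))  # 50 plants
```

For the bed in this example, the article's 24" spacing recommendation also means keeping the outer plants at least 12" from the bed's edge.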
Step 6 (Optional)

When planting Nandina in a location that is far from a water source, you can use the remaining soil mixture to build a water-retaining berm (catch basin/doughnut) around the outside perimeter of the planting hole. Only build this berm if the soil is very well-drained. This basin will help to collect water from rainfall and irrigation, often reducing the need for hand-watering. The berm can be removed after a year or so, when the plant has established itself.

Next, deeply water the planting area, including the root ball, to a depth equal to the height of the root ball. For an extra boost, and to stimulate early root formation and stronger root development, you can also water your newly planted Nandina with a solution of Root Stimulator, which reduces transplant shock and promotes greener, more vigorous plants.

Spread a 1- to 2-inch layer of shredded or chipped wood mulch or a 3- to 4-inch layer of pine straw around the planting area to conserve moisture and to suppress weed growth. As the mulch decomposes it will add vital nutrients to the soil that your plants will appreciate. Avoid using freshly chipped or shredded wood for mulch until it has cured in a pile for at least 6 months; a year is better. Avoid placing or piling mulch directly against the base of your plant, as this could cause the bark to rot.

Container Planting Instructions

Nandina are ideal for use in container gardens. When growing in pots they appreciate a moist but very well-drained soil. Constantly soggy soil can and often will cause root rot or other harmful plant diseases. Therefore, make sure the planting pot has a drainage hole(s) and use a quality potting soil or potting mix, or a 50/50 combination thereof, for planting. You can also add some perlite or pumice at a 10 to 20% ratio to the soil mix to help with drainage. Choose a container that is large enough to allow for 2 to 3 years of growth before shifting up to a larger size container. This might mean your planting pot would be 6 inches or more wider than the root ball of your plant. If you'll be growing other plants in the same container, increase the size of the container accordingly. Container color will matter as well. Not only will you want to pick a color of container that goes well with the flower, foliage and berry colors of your Nandina, you'll also want to pick a container that matches the style of your home or other structures and plants in the surrounding environment. Many nursery & garden centers offer a wide variety of containers to choose from. Before heading out to buy a container, take pictures of your home and the surrounding environment. Doing so will help you to choose just the right color and style. Before filling your container with the soil mix, we recommend lining the bottom with shade cloth or a porous landscape fabric. This will keep the drain holes from becoming stopped up with soil. If you use stones or other materials in the bottom of the container, lay the fabric on top of them. Be very careful when removing your Nandina plant from the nursery pot it was growing in. Gently try to lift the plant from the pot. If it is stuck in the pot, cut the container away to avoid damaging the plant. After having removed the plant from the container, use your fingers or a claw tool to gently loosen some feeder roots around the surface of the root ball. Pour a small amount of your soil mixture in the bottom of the container.
Set your Nandina plant in the container and make any necessary adjustments by adding or removing some soil so that the top edge of the root ball will sit 1" or so below the rim of the container. Backfill with your potting soil around the root ball, tamping as you go, until the level of potting soil is even with the top edge of the root ball. Water thoroughly until water starts to drain from the holes in the bottom of the container. Add more potting mix if settling occurs during watering.

Step 6 (Optional)

Apply a 1/2" layer of pine bark or wood chips to the soil surface to help conserve moisture and prevent weed growth.

Caring For Nandina Plants

How To Fertilize Nandina

Nandina are light feeders; however, they will benefit from fertilization. To maintain good foliage color and support growth and the overall health of the plant, feed Nandina in spring with a slow-release shrub & tree food. Alternatively, you can feed with a natural organic plant food. To avoid stimulating new growth that could be damaged by an early frost, cease fertilization of Nandina two months prior to the first frost date in your area. Feed Nandina growing in containers as directed on the product label with a timed-release or water-soluble fertilizer listed for use in containers.

Soil pH - Nandina grow best in a slightly acid to slightly alkaline soil ranging from 6.0 to 7.5 on the pH scale. Most average garden soils fall between a pH range of 6.0 to 7.0. Learn More: What is Soil pH and How To Test & Adjust It?

How To Water Nandina

Nandina are exceptionally drought tolerant when established. That said, in the absence of rainfall, young Nandina will require some moisture during the first year while establishing a root system. As with so many other plants, constantly soggy or wet soil can cause harmful root diseases and even death. So be careful not to over-water them! Immediately after planting your Nandina, deep soak the soil in the planting area, including the rootball, to a depth equal to the height of the root ball. An application of Root Stimulator will provide an extra boost to stimulate early root formation and stronger root development. Root Stimulator reduces transplant shock and promotes greener, more vigorous plants.

During the First Growing Season

In average garden soil you should not have to water your newly planted Nandina every day. More often than not, daily watering causes soggy soil conditions that can lead to root rot and other harmful plant diseases. In the absence of sufficient rainfall, water your Nandina plants only as needed to keep the rootball and surrounding soil damp to moist. Keep in mind that deep soaking less frequently, allowing the soil to dry out somewhat before watering again, is much better than splashing just a little water on the plants every day. Shrubs planted during the winter dormant season, when plants are not actively growing and evaporation is much slower, will require much less water. So, be extra careful not to overwater during winter! When established, Nandina are exceptionally drought tolerant and will only require supplemental irrigation during a prolonged period of drought. If you see new leaves wilting or turning pale during a drought, this could be a sign that your plants could use a good deep soaking.

How To Prune a Nandina

Nandina do not require pruning for health or performance, except to remove damaged or dead plant parts or a stray branch that is spoiling the shape of the plant. Compact selections of Nandina (growing under 3 feet in height) remain tidy with little or no pruning.
Sometimes, taller growing Nandina varieties will become bare at the bottom over time, and pruning restores a full and compact look.

When to Prune

Any significant pruning for shape or size should be conducted in the late winter or early spring, before new growth appears. Only older, neglected plants will require pruning.

How to Prune

Before new growth emerges in spring, use sharp bypass hand or lopper pruners (never hedge shears) to renew older clumps by cutting one-third of the main stalks to the ground. Do this for a total of three years and you will have restored your old Nandina. Also, remove old and weak branches to encourage new growth. Any time of year, you can cut back a branch or two to use in flower arrangements or wreaths. Note: You can maintain a natural appearance by pruning each stalk to a different height, cutting back to a tuft of foliage.

Plant Long & Prosper!

Questions? Contact Us!
Healthy fats & oils

4 October 2021

Which of them offer special health benefits?

You can't cook without using fats and oils, but we have to ask ourselves how healthy and digestible they really are. We use oils almost every day in every type of cooking, so let's take a look at which of them are healthy and which could be harmful to you. Even in ancient times, the Greeks and Romans used olive oil for frying and to refine dishes. But cooking oil didn't reach Germany until the 20th century. Until then, the primary fat used in this country was butter.

Cold-pressed oils versus refined oils

We should use cold-pressed oils to prepare cold dishes and refined oils for cooking with heat. With a few restrictions, however, cold-pressed cooking oils can also be used for frying and deep-frying. Cold-pressed oils (also called native oils) are not refined, but instead remain in their original composition after the oilseeds have been compressed. This explains why, for example, rapeseed oil, walnut oil and pumpkin seed oil retain their nutty flavour. Among vegetable oils, cold-pressed rapeseed oil is a true all-rounder. It has the lowest proportion of saturated fatty acids, a high proportion of monounsaturated fatty acids, and contains a lot of unsaturated omega-3 fatty acids and vitamin E. Refined oils, on the other hand, are obtained through the application of heat and chemicals. Most oils come in two varieties: as cold-pressed (native) oil and as refined oil. Olive oil, for example, is available in both variants: refined olive oil is chemically processed and retains almost none of its healthy ingredients. Cold-pressed oils are healthier than refined oils because the manufacturing process is much gentler, meaning that they retain more vitamins and flavour molecules, as well as the essential fatty acids. Particularly healthy oils include coconut oil, rapeseed oil, olive oil, sunflower oil, linseed oil and walnut oil. There is even such a thing as grapeseed oil, which, as the name suggests, is obtained from the seeds of grapes. Rapeseed oil is particularly healthy because it is rich in unsaturated fatty acids and has a particularly favourable ratio of omega-3 to omega-6 fatty acids. Linseed oil is also rich in omega-3 fatty acids, and has the best omega-3/omega-6 ratio. Cold-pressed rapeseed oil, soybean oil and safflower oil also have a very good fatty acid composition. Other suppliers of omega-3 and omega-6 fatty acids include walnut oil and hemp oil. But watch out: too much of any type of oil is not necessarily healthy. Oils provide us with a lot of energy, meaning that they are high in calories.

Omega-3 and omega-6 fatty acids

Why are they so important?

Omega-3 fatty acids are precursors of eicosanoids, which significantly reduce the risk of cardiovascular diseases through their effect on the blood vessels. The Federal Ministry of Education and Research writes: "Omega-3 and omega-6 fatty acids serve as precursors for messenger substances and tissue hormones. While omega-3 fatty acids tend to contribute to the production of anti-inflammatory fat hormones, omega-6 fatty acids often serve as precursors for the body's own synthesis of inflammatory fat hormones. In order to keep the right balance, it is not so much a question of the total amount of oils consumed, but rather of maintaining an optimal ratio between the omega-3 and omega-6 fatty acids consumed. The ratio should be around 1:5".
Prof. Gerhard Jahreis, Emeritus at the Institute for Nutritional Sciences at the University of Jena, explains: "The decisive factor is the content of saturated and unsaturated fatty acids in the oils. A ratio of 1:2 is optimal. Of this, more than a third of the fat consumed should consist of monounsaturated fatty acids, preferably oleic acid. The polyunsaturated fatty acids should contain as many omega-3 fatty acids as possible." Omega-3 fatty acids also have a positive effect on cholesterol levels and strengthen our immune system. Very important: the human body cannot produce omega-3 and omega-6 fatty acids itself. The only way to supply the body with them is through our nutrition, which is why they are referred to as "essential fatty acids".

Particularly beneficial in treating rheumatism

Because of their anti-inflammatory properties, omega-3 fatty acids are particularly important for people who suffer from rheumatic diseases, who often also have cardiovascular diseases. Omega-3 fatty acids dilate the blood vessels, thus reducing the risk of thrombosis. They also reduce blood lipid levels.

Vitamins in edible oils

You should know the following in advance: secondary plant substances, minerals and vitamins are only contained in cold-pressed edible oils. Compared to other foods, edible oils contain a particularly large amount of vitamin E, along with various minerals. Olive oil contains a range of vitamins, in particular vitamin A. It is rich in unsaturated fatty acids, and also contains abundant quantities of vitamin E. For these reasons, it is considered a particularly healthy oil. Cold-pressed sunflower oil contains many unsaturated fatty acids and is also rich in vitamin E and lecithin. Up to 73 percent of walnut oil consists of polyunsaturated fatty acids (linoleic acid and alpha-linolenic acid), and it also contains a lot of lecithin, vitamin E and B vitamins, especially vitamin B6. Cold-pressed wheatgerm oil also contains a lot of vitamin E, and is rich in unsaturated fatty acids such as linoleic acid and linolenic acid. Hazelnut oil is rich in unsaturated oleic acids and vitamins D and E. Pumpkin seed oil also has a comparatively high vitamin E content. Cold-pressed corn oil contains many essential fatty acids, especially linoleic acid and oleic acid. It also contains plenty of vitamin E. But we shouldn't kid ourselves: we should consume a maximum of two tablespoons of cooking oil a day, so we cannot get all the vitamins we need from it. This is why we need to keep eating plenty of fruit and vegetables.

What is each type of cooking oil best for?

Cold-pressed, unrefined rapeseed oil, for example, goes wonderfully with salad. It contains a lot of monounsaturated and polyunsaturated fatty acids, and also has a delicious flavour. Rapeseed oil is also a real kitchen all-rounder; refined rapeseed oil is ideal for frying. The Stiftung Warentest even writes that rapeseed oil is healthier than olive oil. Compared to olive oil, it contains more vitamin E and a higher content of omega-3 fatty acids. Cold-pressed extra virgin olive oil is particularly suitable for cold dishes: it can really show off its full-bodied aroma with salads or antipasti. It also goes well with all vegetables and pasta dishes. Olive oil and rapeseed oil both have high levels of oleic acid, which reduces the amount of unwanted LDL cholesterol in the blood. If you suffer from high cholesterol, the Heart Foundation recommends rapeseed oil, walnut oil and olive oil in particular.
Oils with a high proportion of linoleic acid, such as sunflower oil or safflower oil, are less suitable. Refined olive oil is the best choice for frying fish. If you use sunflower oil for frying, so-called aldehydes form in the pan, and these are considered toxic in larger quantities, as scientists from the University of the Basque Country have discovered and the dpa has reported. The study was published in the specialist journal Food Research International.

Pan-frying and deep-frying

Refined vegetable oils such as corn oil, rapeseed oil, sunflower oil and peanut oil are suitable for deep-frying between 175 and 190° Celsius. Only heat-stable oils with a smoke point of over 160°C should be used for frying. These are mainly refined oils such as peanut oil, olive oil, rapeseed oil and sunflower oil. Linseed oil is not suitable for pan-frying, deep-frying or baking. Safflower and sunflower oil should only be used for cooking if you have chosen a special frying oil produced from specially cultivated varieties rich in oleic acid. The ideal temperatures are 130-140°C for pan-frying and 160-170°C for deep-frying. You should note: in general, refined oils are almost odourless and neutral in flavour. They have a longer shelf life than cold-pressed oils and are also cheaper. For deep-frying French fries, sunflower oil, peanut oil and sesame oil are ideal, as they have a neutral flavour and can be heated to 210 degrees. Coconut oil can also be used for deep-frying, but coconut oil, palm oil and palm kernel oil contain large amounts of saturated fatty acids, which have a particularly negative effect on our blood lipids.

Argan oil, the most expensive oil in the world

The most expensive oil in the world is argan oil. It is obtained from the seeds of the argan tree's ripe fruit. The oil is expensive because the entire production, from picking to pressing the fruit, is done by hand. Argan oil is only found in Morocco, and is considered the country's liquid gold. Argan oil is used in gastronomy as well as in cosmetics and hair care. The oil has a very high proportion of natural antioxidants that protect against free radicals. You should also note: all oils should be stored in a cool, dark place, and you should never pour fats or oils down the drain or flush them down the toilet. Fat is not soluble in water, so it remains in the drainage pipes and can cause blockages. So, let's head to the kitchen and make a nice, tasty salad.
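To tie the frying guidance above together: an oil suits a given cooking method only if its smoke point comfortably exceeds the cooking temperature. The small sketch below applies that rule in Python. The smoke-point figures are rough, assumed typical values for illustration; actual values vary by brand and degree of refinement, so treat the table as a placeholder rather than a reference:

```python
# Approximate smoke points in Celsius; illustrative assumptions only.
# Actual values vary by brand and degree of refinement.
SMOKE_POINTS_C = {
    "refined rapeseed oil": 205,
    "refined sunflower oil": 225,
    "refined peanut oil": 230,
    "extra virgin olive oil": 180,
    "cold-pressed linseed oil": 107,
}

def suitable_for(oil: str, cooking_temp_c: float) -> bool:
    """An oil is suitable only if its smoke point exceeds the cooking temperature."""
    return SMOKE_POINTS_C[oil] > cooking_temp_c

# Deep-frying at 180 C, within the 175-190 C range mentioned above.
for oil in SMOKE_POINTS_C:
    print(f"{oil}: {'ok' if suitable_for(oil, 180) else 'smoke point too low'}")
```

Consistent with the article, the cold-pressed oils in this (assumed) table fall out of the deep-frying range, while the refined oils pass.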
Diseases & Conditions

Joint Replacement Infection

Knee and hip replacements are two of the most commonly performed elective operations. For the majority of patients, joint replacement surgery relieves pain and helps them to live fuller, more active lives. No surgical procedure is without risks, however. A small percentage of patients undergoing hip or knee replacement (roughly 1 in 100) may develop an infection after the operation. Joint replacement infections may occur in the wound or deep around the artificial implants. An infection may develop during your hospital stay or after you go home. Joint replacement infections can even occur years after your surgery. This article discusses why joint replacements may become infected, the signs and symptoms of infection, treatment for infections, and preventing infections.

Any infection in your body can spread to your joint replacement. Infections are caused by bacteria. Although bacteria are abundant in our gastrointestinal tract and on our skin, they are usually kept in check by our immune system. For example, if bacteria make it into our bloodstream, our immune system rapidly responds and kills the invading bacteria. However, because joint replacements are made of metal and plastic, it is difficult for the immune system to attack bacteria that make it to these implants. If bacteria gain access to the implants, they may multiply and cause an infection. Despite antibiotics and preventive treatments, patients with infected joint replacements often require surgery to cure the infection. A total joint may become infected at the time of surgery, or anywhere from weeks to years after the surgery. The most common ways bacteria enter the body include:
- Through breaks or cuts in the skin
- During major dental procedures (such as a tooth extraction or root canal)
- Through wounds from other surgical procedures

Some people are at a higher risk for developing infections after a joint replacement procedure. Factors that increase the risk for infection include:
- Immune deficiencies (such as HIV or lymphoma)
- Diabetes mellitus
- Peripheral vascular disease (poor circulation to the hands and feet)
- Immunosuppressive treatments (such as chemotherapy or corticosteroids)

Signs and symptoms of an infected joint replacement include:
- Increased pain or stiffness in a previously well-functioning joint
- Warmth and redness around the wound
- Wound drainage
- Fevers, chills and night sweats

When total joint infection is suspected, early diagnosis and proper treatment increase the chances that the implants can be retained. Your doctor will discuss your medical history and conduct a detailed physical examination.

Imaging tests. X-rays and bone scans can help your doctor determine whether there is an infection in the implants.

Laboratory tests. Specific blood tests can help identify an infection. For example, in addition to routine blood tests like a complete blood count (CBC), your surgeon will likely order two blood tests that measure inflammation in your body. These are the C-reactive protein (CRP) and the erythrocyte sedimentation rate (ESR). Although neither test will confirm the presence of infection, if either or both of them are elevated, it raises the suspicion that an infection may be present. If the results of these tests are normal, it is unlikely that your joint is infected. Additionally, your doctor will analyze fluid from your joint to help identify an infection. To do this, he or she uses a needle to draw fluid from your hip or knee.
The fluid is examined under a microscope for the presence of bacteria and is sent to a laboratory. There, it is monitored to see if bacteria or fungi grow from the fluid. The fluid is also analyzed for the presence of white blood cells. In normal hip or knee fluid, there is a low number of white blood cells. The presence of a large number of white blood cells (particularly cells called neutrophils) indicates that the joint may be infected. The fluid may also be tested for specific proteins that are known to be present in the setting of an infection.

In some cases, just the skin and soft tissues around the joint are infected, and the infection has not spread deep into the artificial joint itself. This is called a "superficial infection." If the infection is caught early, your doctor may prescribe intravenous (IV) or oral antibiotics. This treatment has a good success rate for early superficial infections. Infections that go beyond the superficial tissues and gain deep access to the artificial joint almost always require surgical treatment.

Debridement. Deep infections that are caught early (within several days of their onset), and those that occur within weeks of the original surgery, may sometimes be cured with a surgical washout of the joint. During this procedure, called debridement, the surgeon removes all contaminated soft tissues. The implant is thoroughly cleaned, and plastic liners or spacers are replaced. After the procedure, intravenous (IV) antibiotics will be prescribed for approximately 6 weeks.

Staged surgery. In general, the longer the infection has been present, the harder it is to cure without removing the implant. Late infections (those that occur months to years after the joint replacement surgery) and infections that have been present for longer periods of time almost always require a staged surgery. The first stage of this treatment includes:
- Removal of the implant
- Washout of the joint and soft tissues
- Placement of an antibiotic spacer
- Intravenous (IV) antibiotics

An antibiotic spacer is a device placed into the joint to maintain normal joint space and alignment. It also provides patient comfort and mobility while the infection is being treated. Spacers are made with bone cement that is loaded with antibiotics. The antibiotics flow into the joint and surrounding tissues and, over time, help to eliminate the infection. Patients who undergo staged surgery typically need at least 6 weeks of IV antibiotics, or possibly more, before a new joint replacement can be implanted. Orthopaedic surgeons work closely with other doctors who specialize in infectious disease. These infectious disease doctors help determine which antibiotic(s) you will be on, whether they will be intravenous (IV) or oral, and the duration of therapy. They will also obtain periodic blood work to evaluate the effectiveness of the antibiotic treatment. Once your orthopaedic surgeon and the infectious disease doctor determine that the infection has been cured (this usually takes at least 6 weeks), you will be a candidate for a new total hip or knee implant (called a revision surgery). This second procedure is stage 2 of treatment for joint replacement infection. During revision surgery, your surgeon will remove the antibiotic spacer, repeat the washout of the joint, and implant new total knee or hip components.

Single-stage surgery. In this procedure, the implants are removed, the joint is washed out (debrided), and new implants are placed, all in one stage.
Single-stage surgery is not as popular as two-stage surgery but is gaining wider acceptance as a method for treating infected total joints. Doctors continue to study its outcomes.

At the time of the original joint replacement surgery, several measures are taken to minimize the risk of infection. Some of these steps have been proven to lower the risk of infection, and some are thought to help but have not been scientifically proven. The most important known measures to lower the risk of infection after total joint replacement include:

- Antibiotics before and after surgery. Antibiotics are given within one hour of the start of surgery (usually once in the operating room) and continued at intervals for 24 hours following the procedure.
- Short operating time and minimal operating room traffic. Efficiency in the operation by your surgeon helps to lower the risk of infection by limiting the time the joint is exposed. Limiting the number of operating room personnel entering and leaving the room is thought to decrease the risk of infection.
- Use of strict sterile technique and sterilized instruments. Care is taken to ensure the operating site is sterile, the instruments have been autoclaved (sterilized) and not exposed to any contamination, and the implants are packaged to ensure their sterility.
- Preoperative nasal screening for bacterial colonization. There is some evidence that testing for the presence of bacteria (particularly Staphylococcus species) in the nasal passages several weeks prior to surgery may help prevent joint infection. In institutions where this is performed, patients who are found to have Staphylococcus in their nasal passages are given an intranasal antibacterial ointment prior to surgery. The type of bacteria found in the nasal passages may also help your doctors determine which antibiotic you are given at the time of your surgery.
- Preoperative chlorhexidine wash. There is also evidence that home washing with a chlorhexidine solution (often in the form of soaked cloths) in the days leading up to surgery may help prevent infection. This may be particularly important if patients are known to have certain types of antibiotic-resistant bacteria on their skin or in their nasal passages (see above). Your surgeon will talk with you about this option.
- Long-term prophylaxis. Surgeons sometimes prescribe antibiotics for patients who have had joint replacements before they undergo dental work. This is done to protect the implants from bacteria that might enter the bloodstream during the dental procedure and cause infection. The American Academy of Orthopaedic Surgeons has developed recommendations for when antibiotics should be given before dental work and which patients would benefit. In general, most people do not require antibiotics before dental procedures, and there is little evidence that taking them is effective at preventing infection. Antibiotics may also be considered before other major surgical procedures; however, most patients do not require this. Your orthopaedic surgeon will talk with you about the risks and benefits of prophylactic antibiotics in your specific situation.

To assist doctors in the diagnosis and prevention of periprosthetic joint infections, the American Academy of Orthopaedic Surgeons has conducted research to provide some useful guidelines. These are recommendations only and may not apply to every case.
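To make the screening logic from the diagnosis section above concrete, here is a minimal sketch in Python. It is an illustration only, not clinical guidance: the function name and every threshold below are hypothetical placeholders invented for this example, and real cutoffs vary by assay, by joint, and by time since surgery.

```python
# Minimal sketch of the screening logic described in this article -- for
# illustration only, NOT clinical guidance. All threshold values are
# hypothetical placeholders; real cutoffs vary by assay, joint, and time
# since surgery.

CRP_LIMIT_MG_L = 10.0    # hypothetical C-reactive protein cutoff
ESR_LIMIT_MM_HR = 30.0   # hypothetical erythrocyte sedimentation rate cutoff

def infection_suspicion(crp, esr, synovial_wbc=None, pct_neutrophils=None):
    """Return a rough suspicion level from the tests discussed above."""
    if crp <= CRP_LIMIT_MG_L and esr <= ESR_LIMIT_MM_HR:
        # Normal CRP and ESR make infection unlikely.
        return "infection unlikely"
    if synovial_wbc is not None and pct_neutrophils is not None:
        # Many white blood cells, mostly neutrophils, in the joint fluid
        # point toward infection (placeholder cutoffs).
        if synovial_wbc > 3000 and pct_neutrophils > 80:
            return "likely infected; culture the fluid"
    # Elevated inflammatory markers alone only raise suspicion.
    return "suspicious; aspirate the joint and analyze the fluid"

print(infection_suspicion(crp=4.0, esr=12.0))                  # infection unlikely
print(infection_suspicion(crp=35.0, esr=60.0,
                          synovial_wbc=22000,
                          pct_neutrophils=92))                 # likely infected
```

The point of the sketch is only to show how the article's tests layer on one another: inexpensive blood markers screen first, and joint-fluid analysis confirms.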
For more information: Periprosthetic Joint Infections - Clinical Practice Guideline | American Academy of Orthopaedic Surgeons (aaos.org) AAOS does not endorse any treatments, procedures, products, or physicians referenced herein. This information is provided as an educational service and is not intended to serve as medical advice. Anyone seeking specific orthopaedic advice or assistance should consult an orthopaedic surgeon, or locate one in their area through the AAOS Find an Orthopaedist program on this website.
God Dwells With the Humble

How we approach the Bible makes a profound difference in life. There is an acronym people use to speak about the Bible: B-I-B-L-E, Basic Instructions Before Leaving Earth. Instructions about what? Which of these descriptions fits best with how you approach the Bible? What is your goal?

Some see the Bible as instructions on doing good. The Bible contains a list of do's and don'ts. For them, the Bible is a book of formulas. They will study the Law to avoid doing what is wrong, and so they can go about doing what is right. They try hard to put off sin and put on righteousness. When the Bible is instructions on doing good, the goal for reading the Bible is to be good.

Others approach the Bible as information to know. The Bible is the best source in the quest for information. It reveals why there is evil in the world. It tells what happens at the end of the age. The Bible is a great source of historical information. The Bible says that the truth will set us free, and they long to know the truth. We gain knowledge and understanding. The goal is wisdom.

Most people see the Bible as an explanation of how to be saved. People are sinners, and sin leads to condemnation. The Bible provides information on how Jesus dies for our sins and takes the punishment we deserve. From the beginning to the end of the book, we learn God's plan of salvation and the story of God's amazing grace. The goal is to be saved.

- The Bible tells us how to be good.
- The Bible makes us wise.
- The Bible tells us how we may be saved.

All three of these views of the Bible are true. But there is one more way to look at the Bible. We may look at the Bible as God revealing Himself to us. God desires for us to know Him, so we may enjoy Him. When we see the Bible as a relational book, the goal of reading the Bible is to get to know God on a personal level. When we view the Bible from this perspective, then:

- The Bible tells us how to be good so that we may live in a way that is pleasing to God.
- The Bible makes us wise so we may know God's character, purpose, and plan.
- The Bible tells us how we may be saved so we may be in a right relationship with God.

The Bible is a love letter from God. When we understand how much God loves us, we will read Isaiah with the right frame of mind. The passage in Isaiah 57 showcases God's incredible love for His people. And, as God speaks about His love for us, He tells us how we may best relate to Him. Our passage this morning reveals how God works to bring us into a mature, healthy relationship with Him.

Prepare the Way (Isaiah 57:14)

The first step in our relationship with God is for Him to remove obstacles that get in the way of our being with Him. Isaiah picks up an illustration from the beginning of chapter 40 and repeats it here in chapter 57. Listen to the words from chapter 40: "A voice is calling, clear the way for the Lord in the wilderness; make smooth in the desert a highway for our God. Let every valley be lifted up, and every mountain and hill be made low; and let the rough ground become a plain, and the rugged terrain a broad valley; then the glory of the Lord will be revealed, and all flesh will see it together; for the mouth of the Lord has spoken." Once again, a voice tells us that a way is being made for God's people to be with Him. What is the most important thing we need to learn from this verse (57:14)? It is that God desires the way be prepared, and every obstacle standing between Him and His people to be removed.
God decrees that nothing stand in the way of bringing His people to Himself. It sounds like a major construction project. We can picture a cadre of angels wearing hard hats. We can hear the heavens shake as the angels make their way around the earth with heavy equipment and all sorts of tools to make a smooth highway. The angels remove boulders and trees, lay down asphalt, and then we see people walking onto the newly paved walkway. The way is as smooth as glass; so smooth that even crawling babies find it easy. People are excited and walk with a bounce in their step. God's heavenly city sparkles like a jewel on the horizon.

But the construction project is not one needing bulldozers to move dirt and stones, or dynamite to blast hills and mountains on the earth. The world that God created does not stand between us and God's kingdom. The construction job is the work God is doing in the heart. In our heart is a deep chasm of rebellion. Our heart is a tangled jungle of iniquity and sin. Our mind is littered with boulders of deceit and impediments of greed and selfishness. The obstacle that stands between God and us is our mountain of pride. The sin and rebellion of our heart are what separate us from God. We may not blame anything else. People try to blame other people, religion, government, education, or anything else on which they may place the blame. When people stand before the judgment seat of God, they will learn that the only obstacle which stands between them and God is their heart. God desires that the way be prepared, and that every obstacle of the heart standing between Him and His people be removed. Once those obstacles are removed, we may relate to God.

The Dwelling Place of God (Isaiah 57:15)

Verse 15 is one of those verses which leaves us speechless when we ponder its glories. The Hebrew phrase for high and exalted occurs four times in the Old Testament, each time in Isaiah (6:1; 33:10; 52:13; and 57:15). We are most familiar with the passage in Isaiah 6, when upon entering the throne room of God, Isaiah declares that he sees the Lord sitting on a throne, lofty and exalted (Isaiah 6:1). God is the great "I AM." He exists above all things. He is high and exalted. There is nothing and no one higher or more exalted. He lives forever. He lives before the beginning and after the end. His name is Holy, which means His name is Set Apart. His very essence is that He is unequaled. Ask Him who He is, and God says, "I am incomparable. I am supreme. I am matchless. I am unrivaled. My name is Holy."

Not only is God's name Holy, but He also lives in a high and holy place. He is comfortable in that place. It is His home. His living room and backyard are not part of this world. He is in a completely different, unique existence in His Trinitarian place of residence. He exists outside of our space and outside of our time. When God creates this world, He makes it other than Himself. He stands over Creation.

The first half of the verse sets the stage for the second. God dwells in a high and holy place, AND God dwells with the contrite and lowly of spirit. Imagine. The eternal, holy, all-powerful, all-knowing, most-high God of the universe abides with humble creatures. God makes His abode in a high and exalted, holy place. And God makes His abode with the humble. Let that sink in. Ask, why is God letting me know this truth? God's reason to abide with the humble is one of revival. He wants to give life. God does not make His abode with the humble because it will improve His life.
He doesn't abide with the humble because they are such wonderful people. God is love. He is moved with compassion. He looks upon the humble and chooses them to be recipients of His goodness. God is oozing with grace, love, and compassion. His eyes look to and fro upon the earth to find vessels for His glory. The vessels which catch God's eye are those who are humble. He abides with the contrite and lowly because it is His nature to revive spirits and give new life to those who are dead in their heart. God looks at people, not for what He may gain, but for how He may give. He desires to dispense love and goodness. God never says, "What's in it for me?" God's nature is generous. He never has an extended palm looking for a handout.

God's Love Overcomes (Isaiah 57:16-17)

God has more reason to destroy us than to save and revive us. Those who are now humble were once full of sin. God summarizes the sin with the words "unjust gain." Unjust gain is greed. It represents the opposite of God's generous nature. Unjust gain is covetousness, which is idolatry. Unjust gain is the love of self more than others. Scripture tells us that the love of money is a root of all sorts of evil (1 Timothy 6:10). Sin, without Christ's atoning work, brings about God's just anger. Even though God strikes people because of their sin, they continue in the way of their evil heart and do not repent. People lie, get caught, and they suffer the consequences. But they lie again. People cheat, get caught, suffer the consequences, and they cheat again. Everywhere we look, we see how God allows the suffering and devastation of sin to exist. All people live under the curse and consequences of the sin nature. The brokenness of human existence is due to God sovereignly allowing sin to yield the fruit of justice. The brokenness of the world is not a result of God's pleasure, but of God's anger.

The only way we may escape the fruit of sin is if God intervenes, and He does. He demonstrates His glorious love by sending His Son to be a guilt offering for sin. God places His Son on a cross and lifts Him up so all may see God's justice and love. All who put their faith in God's Son receive mercy and forgiveness. God's love and compassion overcome His anger. God's anger is spent upon His Son. God is pleased to crush His Son so that He may save people. The obstacles between God and His people are removed. It takes humility to say, I am a sinner. It takes humility to say, I need a Savior. It takes humility to cry out for mercy. Those who see Jesus humble themselves and ask for God's forgiveness, and God turns His anger away. God abides with those who demonstrate humility before the cross.

God Heals the Heart (Isaiah 57:18-19)

God sees our ways, but God chooses to heal us. God's decision to show mercy speaks to His character. The fact that God is merciful and seeks to restore rebellious sinners is a truth that ought to have every forgiven sinner stand, applaud, and cheer. The alternative is unthinkable. God leads and restores those who are willing to humble themselves and cry out for mercy. God heals the heart, so it is no longer rebellious and evil. God restores the heart. Instead of cursing, words of praise come from their lips. God is calling out to the world that is filled with evil and turmoil, and He says, "Peace, peace." God exalts the Prince of Peace, Jesus Christ, and makes His name known. Those who look to Jesus for salvation are no longer mourning, but have lips of praise.
They speak forth the proclamation of Jesus Christ for the forgiveness of sins. They glorify God because of His wonderful grace and mercy. God is no longer someone to rebel against, but a God worthy of our humble submission. God is no longer seen as the cause of evil, but as the only source of what is good and right.

No Peace for the Wicked (Isaiah 57:20-21)

For those who are wicked, God promises they will have no peace. Their life will forever be in constant turmoil. The word picture God paints of their life is that of the wildly tossing sea, with waves stirring up trash and mud. It is not a peaceful picture, but a picture of continual movement and dread. There is no rest. Who are the wicked? They are proud. The wicked, in their pride, do not seek God. They believe in their heart, "There is no God" (Psalm 10:4). They scoff at the cross of Jesus. In their eyes, there is no God to obey; they are their own god. They see no need to be saved. They believe they may stand in the face of God and shake their fist in His face. There is no repentance on their lips.

Some people think that people are not evil and wicked. The world teaches that people are good deep inside. But anyone who refuses to obey God is indeed evil. Pride is evil. Anyone who looks at their Creator and walks away from His goodness is the epitome of wickedness. God freely offers salvation, forgiveness, and mercy. Yet, if we walk down the street and try to share this news with people, they answer with the words, "I'm okay." People will say they do not need salvation. They do not know God. They do not fear His wrath. They do not honor His Son, who gives His blood so they may have forgiveness.

What shall we do after reading these words of Scripture spoken by God's prophet? In other words, why does God tell us He dwells with the humble? We are to humble ourselves in the presence of the Lord. Humility is a repeated topic in the book of Isaiah. We may say humility is one of the most important topics in the Bible. Pride is the downfall of angels and men. God came to earth in the form of a man to demonstrate humility. God leaves His high and lifted-up position in heaven. He humbles Himself as a man. He further humbles Himself as a Servant. He further humbles Himself as a servant who is completely obedient, obedient to the point of death. The biggest obstacle which God removes as He builds the highway for His people to come to Himself is the obstacle of human pride. God is calling upon us to destroy our pride and follow the example of our Savior, Jesus Christ. Mature Christians are humble Christians. Christians with pride are immature. Christian maturity is not measured by knowledge; it is measured by character. Humility is the primary measurement. Jesus's death on the cross makes it so we can have a revived heart. His death on the cross enables us to repent of our pride and turn to humility. In other words, part of His saving work is to release us from being captive to our pride.

Find Relational Rest in Humility

God dwells with the humble. It is the place He enjoys. Therefore, let us take these steps to kill pride in our life. Seek to have a relationship with God that is peaceful and at rest.

Receive. Receive the salvation that only Jesus provides. Acknowledge that you need mercy. Confess to God that you can do nothing to save yourself. Go to God with hands open to receive, not with hands trying to give God good works. It's like David, who says, "What shall I render to the Lord for all His benefits toward me?
I shall lift up the cup of salvation and call upon the name of the Lord" (Psalm 116:12-13). Here is my empty cup; fill it, please, Lord. I want to receive Your mercy and grace. I have nothing to offer but an empty cup. Let me receive from You.

Empty. Jesus left His throne. He emptied Himself of His glory. We need to empty ourselves of importance. Let nothing be done to get glory for yourself. Don't push yourself on social media. Don't brag about your accomplishments. Don't try to save your reputation. Look at others as being far more important. Think of yourself as deserving to be last in line. Think of yourself as deserving to get the smallest portion of food. In lowliness of mind, esteem others better than yourself.

Serve. Jesus takes the form of a servant. We need to take the form of a servant. Serve in the church. Don't come to church expecting to receive. Come to church expecting to give. Serve your family. Be the example of servanthood everywhere you go. Serve your parents. Serve your neighbor. Stop expecting others to serve you; be known as a person with a servant's heart.

Turn. Turn from your ways, and purposefully choose to follow God's ways. Turn from destructive ways that are self-centered and egotistical. Admit to God that your ways are not working. Turning is repenting. Ask yourself, do I know God's way of how I need to relate to others? Do I seek God's ways in how I post on Facebook? Do I follow God's ways in my driving? We need to turn from our ways and seek to follow God's ways. Ask God for help.

God's word speaks very, very strongly against pride. It tells us that God hates pride and arrogance (Proverbs 8:13). God detests the proud (Proverbs 16:5). A haughty spirit is a sin factory (Proverbs 21:4). If there is strife in our life, it exists because there is pride (Proverbs 13:10). There is more hope for a fool than for a proud person (Proverbs 26:12). God warns us against pride for our good. Pride destroys relationships. God resists the proud and gives grace to the humble. Pride is an obstacle to Christian maturity. God wants us to get the most out of a relationship with Him while we exist in this body. He wants us to glorify Him in our lives. Make your life quest a quest for humility and enjoy the fullness of God's presence.
For centuries, the Galapagos Islands have been an inspiration. Everything about these islands is truly extraordinary, from their "rising from the sea depths" inception to the incredible diversity of flora and fauna. They prompted the "Father of Evolution," Charles Darwin, to have his eureka moment, developing his theory of natural selection in 1835 based on his visit to the archipelago. It shouldn't come as a surprise, then, that they continue to enchant people to this day. Luckily, the Ecuadorian government recognized the islands' value and founded the Galapagos National Park (GNP) in 1959. Since then, the Galapagos sea lions have flirted with tourists from all over the world, the Galapagos giant tortoises have awed visitors, and the archipelago has gained a growing number of passionate advocates all around the world. (Whenever they choose to travel, around 160,000 tourists visit the Galapagos annually.)

The Galapagos Islands are so unique because (a) they were formed through volcanic activity and are still continuously changing, and (b) you can visit them. They were also never joined to the mainland, which means that the flora and fauna that made their way to the archipelago by chance were able to evolve over time in this unique environment without interference from predators. Hence, an entirely new world was created away from prying eyes. Visitors to the Galapagos have a completely exclusive, awe-inspiring experience where animals not only roam the islands freely, but also reign supreme. We, as humans, have the privilege of observing these iconic Galapagos species in their habitat when we venture into the National Park (truly the best way to see the Galapagos), always in the company of certified naturalist guides, as Park rules require.

The location of the Galapagos Islands is one of the reasons why the archipelago is so rich in biodiversity. Firstly, it enjoys warm weather year-round, as it quite literally hugs the equator. It is also about 620 miles (1,000 km) from the South American continent, which minimizes the effects of human impact. Finally, the islands are situated right at the crossroads of three primary ocean currents. The cold Humboldt Current, the warm Panama Current, and the deep-sea Cromwell Current merge at the archipelago, creating an upwelling of ocean-floor nutrients. This is what causes the enormous amount of marine biodiversity; while snorkeling, you can observe many shark species, a vast variety of colorful fish, penguins, marine mammals, and, of course, sea turtles.

The reptiles and marine birds that arrived on the islands had to adapt to survive, as the Galapagos Islands are wholly isolated from other landmasses and the food sources on the islands are completely unique. Free from the confines of predation, fauna could naturally select for characteristics over millions of years that would allow them to find food more easily, but that would have impeded their ability to escape from predators anywhere else in the world. This led to a truly unique archipelago, filled to the brim with extraordinary endemic species that have adapted to an alien world. Visitors to the Galapagos can witness the land iguanas chew on their favorite food, the prickly pear cactus. Just a few feet away, the camouflaged black marine iguanas can be seen sunning themselves on the blackened volcanic rocks.
The marine iguanas are actually evolved distant relatives of the land iguanas and, through the process of natural selection, have developed the ability to swim for their food. Not only that, but they also have the unique ability to recognize the alarm calls of mockingbirds and run for safety. This is the only recorded instance of a non-vocal species reacting to another species' vocal call. As well as the marine iguana, the Santa Fe iguana has emerged as a highly specific endemic land iguana that lives only on Santa Fe Island. Its larger and paler body fits perfectly into the island's landscape and terrain. And just think: this is only one genus among the many peculiar and wonderful flora and fauna found in the Galapagos archipelago! Tortoises have grown larger than life, weighing over 500 lbs (227 kg). Finches have adapted to their favorite food sources, whether nuts, berries, or cactus flesh. Even the flora plays along! While cacti have adapted to their specific environments, the scalesia tree, which is part of the same family as the popular daisy, can reach up to 30 feet (10 meters). The statistics alone show just how amazing the Galapagos flora and fauna are.

In every Galapagos tour, no matter what island you are visiting or whether you choose to explore the archipelago by sea or on land, every inch of this paradise holds a new marvel to behold. Incredible expedition cruises like those offered by Yacht La Pinta take you to many different Galapagos Islands points of interest where you can observe the wonderful world that Charles Darwin witnessed so many years ago. Not only that, but when you take an expedition cruise you'll answer the question of where to stay in the Galapagos. Plus, exploring the archipelago demonstrates just how every cell and living being is connected throughout these fascinating ecosystems.

Following their initial discovery in the 1500s, the Galapagos Islands were not known for being very hospitable to humans. Yet man was not easily deterred from visiting this place. This was especially the case during the stretch of months between June and December, when the Humboldt Current carries nutrients from Antarctica all the way north, directly to the Galapagos Islands, drawing huge schools of fish and cetaceans, which whales would then follow. Early on, there were whalers who caught onto how this current provoked so much interest in sea life. These whaling ships often followed the whales to the Galapagos. Their crews earned huge sums of money as they sold whale oil to ever-growing cities in North America and Europe. The money drew more and more men to the archipelago and this slippery fountain of whale-oil gold.

In the 19th century, the Galapagos Islands quickly transformed from a bubbling paradise to a broken and destroyed wasteland. This was the work of the whalers, who cut down thousands of trees to burn whale fat and hunted the easiest targets, often giant tortoises. Invasive species that hitched aboard their ships ate the young and the eggs of the endemic species and caused erosion on many of the islands. The effect was so complete that by 1960 there were only 200 adult tortoises on Pinzón Island and 14 on Española. Despite all of the damage done, the Ecuadorian government took on the huge project of creating the Galapagos National Park in 1959. Slowly, the invasive species were captured and the islands began to recover their original glory. However, it is a work in progress.
Fortunately, restorative projects are underway at different stages all over the islands, creating the biggest conservation project in the history of mankind! And the hard work has paid off. From 14 tortoises on Española 60 years ago, there are now more than 1,700! Much of this work is funded through Galapagos Islands tourism. Expedition tours show off this paradise to visitors traveling to the Galapagos from all around the world while the National Park continues its work to protect every plant and animal species. And when asking yourself why you should visit the archipelago, or what you can do in the Galapagos, you should know that this place is not just about scientific and nature-based experiences; there are plenty of activities, too. Here, the GNP regulates numerous ways of experiencing the islands. Fortunately, these are really fun and entertaining things like swimming, hiking, kayaking, paddleboarding, and more! And Yacht La Pinta, for instance, offers each of these activities to its guests.

The purest interaction that humans can have with animals is to experience a true connection with them in their own environment. Imagine visiting endemic species outside of the zoo and in their original habitat. Imagine this animal not being afraid of you, but just showing pure curiosity towards you. It's nearly unfathomable, but that is precisely why the Galapagos Islands wildlife is unique. Luckily, the whalers' actions didn't have a detrimental effect on the attitude of the wildlife towards humans. And you'll experience this as you walk over the otherworldly terrain on the many protected islands. You'll find that there is peaceful respect between animals and humans. The male frigates won't be disturbed as you step by them while they call out to their potential mates in the sky, their enormous red throat pouches bulging with every squawk. Male blue-footed boobies even seem to enjoy the attention as they perform their famous mating ritual in an effort to win the interest of a female. The main problem when visiting the Galapagos Islands is ensuring that you don't trip over a quietly nesting bird or run into a statuesque land iguana. And that's exactly how the Galapagos National Park wants it. Local authorities go to great lengths to ensure that guests enjoy their adventure and truly experience the magic of the islands while making sure the flora and fauna are not in any way disturbed by the presence of humans.

This scattered archipelago covers a vast 17,000 square miles (45,000 square kilometers) and crosses the equatorial line. Although there are officially 19 islands as well as many islets and rocks according to the Galapagos National Park, this number is still changing. The islands were originally formed by volcanic activity, whether by eruptions, lava flows, or uplifts from the ocean floor. In fact, in the last 200 years there have been over 50 eruptions in the Galapagos Islands, some of which have threatened island life while others have revealed new land. Each island has a unique formation and terrain, and is inhabited by a mismatched combination of unrivaled species. As well as the incredible fauna, the flora on each island is also unique. Scientists have identified three major vegetation zones: the coastal zone, the arid zone, and the humid highlands. Each island produces a variety of flora depending on its location and altitude. The Galapagos Islands are a fragile paradise that must be protected in every way possible.
Therefore, over 97% of the islands' area is National Park, and only certain islands have been cleared for visits. Even then, there are strict guidelines that must be followed by everyone. Each ship must not exceed a predetermined size and must have a National Park-certified guide to lead any excursion. The size of a group cannot exceed 16 explorers, and some vessels, like Yacht La Pinta, average far fewer (around 12) per excursion. Because of these careful conservation efforts, guests can enjoy a world straight out of a Dr. Seuss book. Black volcanic rock carves through some islands while others are covered in greenery and red dirt, and all are surrounded by turquoise waters. No two islands are the same, meaning that a cruise through this archipelago is full of surprises. Although some islands are fiercely protected and do not allow visitors in order to safeguard their endemic Galapagos species, there are fourteen islands that we can explore as part of various Galapagos cruise itineraries. You'll be pleased to know that each one of the accessible islands is included in at least one of Yacht La Pinta's itineraries. Which one will you choose?

How about an even more adventurous journey? Each of our itineraries has been specifically planned out, but we understand the need for a little flexibility. That's why we offer combined Galapagos itineraries. Contact us today to find out how easy getting to the Galapagos can be, and just how Yacht La Pinta can make all your Galapagos dreams come true! La Pinta's expeditions travel to every accessible island in the Galapagos archipelago, and each itinerary has been carefully planned to guarantee that every guest onboard will experience the Enchanted Islands in a satisfying way. The extraordinary location and history of the archipelago mean that every island is teeming with unique flora and fauna and incredible landscapes. Make sure you have the chance to encounter all of the iconic BIG15 animal species and enjoy every second of your Galapagos adventure. Let Yacht La Pinta take you on the journey of a lifetime with a combination of land and water activities, premium accommodation and dining, and the very best service offered in the Galapagos.
June 18, 2008 by Frank Matero

Heritage and conservation have become important themes in current discussions on place, cultural identity, and the preservation of the past. Archaeological sites have long been a part of heritage and its display, certainly before the use of the term "heritage" and the formal study of tourism. However, current concerns with the escalating pace of site destruction can be attributed to the perception among the public and professionals that archaeological sites, like the natural environment, are finite, nonrenewable resources deteriorating at an increasing rate. This deterioration stems from a wide array of causes, ranging from neglect and poor management to increased visitation and vandalism, from inappropriate past treatments to deferred maintenance (Figure 1 and Figure 2). No doubt the recent pressures of economic benefit from tourism, in conjunction with increasing communication and mobility, have caused accelerated damage to many sites unprepared for development and visitation. To add to these problems, few archaeological projects have incorporated site conservation as a viable strategy in addressing these issues either before or during excavation (Figure 3). This has been due in part to archaeology's neglect of the long history and tradition of conservation theory and practice and to the general misperception of conservation as an exclusively off-site, post-excavation activity associated with technical issues and remedial solutions. On the other hand, specialists in conservation and heritage management have been largely absent from the recent and rapidly expanding discussions on the meaning, use, and ownership of heritage for political and economic purposes. Both professions have avoided a critical examination of their own historical and cultural narratives pertaining to the construction of sites through excavation, analysis, conservation, and display.

The primary objective of conservation is to protect cultural heritage from loss and depletion. Conservators accomplish this through both preventive and remedial interventions. In so doing, conservation embraces the technical means by which heritage may be studied, displayed, and made accessible to the public. In this way, the conservation of archaeological sites is like other heritage conservation. Implicit in conservation's objectives is the basic requirement to remove or mitigate the causes of deterioration. For archaeological sites, this has a direct and immediate effect on visual legibility and indirectly conditions our perceptions and notions of authenticity. Among the repertoire of conservation techniques applied to archaeological sites are structural stabilization, reconstruction, reburial, protective shelters, and a myriad of fabric-based conservation methods. Each solution affects the way archaeological information is preserved and how the site is experienced and understood, resulting in a push and pull of competing scientific, associative, and aesthetic values (Figure 4 and Figure 5).

The practices of archaeology and conservation appear by their very nature to be oppositional. Excavation, as one common method by which archaeologists study a site, is a subtractive process that is both destructive and irreversible. In the revealing of a site, structure, or object, excavation is not a benign reversal of site-formation processes but rather a traumatic invasion of a site's physico-chemical equilibrium, resulting in the unavoidable deterioration of associated materials.
Conservation, on the other hand, is predicated on the safeguarding of physical fabric from loss and depletion, based on the belief that material culture possesses important scientific and aesthetic information as well as the power to inspire memory and emotional responses. In the first case, the informational value embodied in the materiality of objects and sites has been expressed in conservation rhetoric through the concept of integrity. Integrity can manifest in many states, as purity (i.e., freedom from corruption or adulteration) or as completeness of form, composition, or context. It has come to be an expression of authenticity in that it conveys some truthfulness of the original in time and space, a quality constructed partly in response to the interventions perpetrated by us in our effort to preserve. Whereas archaeology decontextualizes the site by representing it ex situ, in reports and museum exhibits, historic preservation represents and interprets the site in situ.

But archaeological sites are also places. If we are to identify and understand the nature and implications of certain physical relationships with locales established through past human thought and experience, we must do it through the study of place. Places are contexts for human experience, constructed in movement, memory, encounter, and association. While the act of remembering is acutely human, the associations specific places have at any given time will change. In this last respect, conservation itself can become a way of reifying cultural identities and historical narratives over time through interpretation. In the end, all conservation is a critical act in that the decisions regarding what is conserved, and who and how it is presented, are a product of contemporary values and beliefs about the past's relationship (and use) to the present. Nevertheless, technical intervention—that is, what is removed, what is added, what is modified—is the concrete expression of a critical judgment thus formed in the course of this process.

What, then, does it mean to conserve and display an archaeological site, especially when what is seen was never meant to be displayed as such, or at least not in the fragmented manner viewed? Archaeological sites are made, not found. They are constructed through time. Display as intervention is an interface that mediates and therefore transforms what is shown into heritage, and conservation's approaches and techniques have always been a part of that process.

Beginning with the Sixth International Congress of Architects in Madrid in 1904 and later with the creation of the Charter of Athens following the International Congress of Restoration of Monuments (1931), numerous attempts have been made to identify and codify a set of universal principles to guide the conservation and interpretation of structures and sites of historic and cultural significance. Despite their various emphases and differences, all these documents identify the conservation process as one governed by absolute respect for the aesthetic, historic, and physical integrity of the structure or place and requiring a high sense of moral responsibility. Implicit in these principles is the notion of cultural heritage as a physical resource that is at once valuable and irreplaceable and an inheritance that promotes cultural continuity in a dynamic way.
Out of this dilemma, our current definition of conservation has emerged: a field of specialization concerned primarily with the material well-being of cultural property and the conditions of aging and survival, focusing on the qualitative and quantitative processes of change and deterioration. Conservation advocates minimal but opportune interventions conducted with traditional skills as well as experimentally advanced techniques. In current practice, it has tended to avoid the renewal of form and materials; however, the level of physical intervention possible can vary considerably even under the current doctrinal guidelines. This includes even the most invasive methods, such as reconstruction and the installation or replication of missing or damaged components. Such interventions, common on archaeological sites, are often based on the desire or need for greater visual legibility and structural reintegration (Figure 6). These interventions become even more critical if they sustain or improve the future performance or life of the site or structure in its environment. Obviously, for archaeological sites, changing or controlling the environment by reburial, building a protective enclosure or shelter on site, or relocating selected components such as murals or sculpture, often indoors, are options that allow maximum physical protection and thus privilege the scientific value inherent in the physical fabric. However, such interventions significantly affect the meaning and the associative and aesthetic values, an aspect already discussed as significant for many such sites. Conversely, interventions developed to address only the material condition of objects, structures, and places of cultural significance, without consideration of associated cultural beliefs and rituals, can sometimes denature or compromise their power, "spirit," or social values. In this regard, cultural and community context and dialogue between professionals and stakeholders are crucial.

One of the first coordinated attempts to codify international principles and procedures of archaeological site conservation was the Athens Charter of 1931, in which measures such as accurate documentation, protective backfilling, and international interdisciplinary collaboration were clearly articulated. In 1956, further advances were made at the General Conference on International Principles Applicable to Archaeological Excavations, adopted by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) in New Delhi, which advocated the role of a centralized state administration in administering, coordinating, and protecting excavated and unexcavated archaeological sites. Other charters, such as the ICOMOS (Venice) Charter of 1964, extended these earlier recommendations through explicit provisions that included the avoidance of reconstructions of archaeological features except in cases in which the original components were available but dismembered, and the use of distinguishable modern techniques for the conservation of historic monuments. The Australia ICOMOS (Burra) Charter of 1979 expanded the definition of "archaeological site" to include the notion of place, challenging Eurocentric definitions of value, significance, authenticity, and integrity to include context and traditional use, an idea important for culturally affiliated indigenous groups.
Finally, in 1990, the ICOMOS (ICAHM) Charter for the Protection and Management of the Archaeological Heritage was adopted in Lausanne, Switzerland, formalizing the international recognition of many archaeological sites as living cultural landscapes and the responsibility of the archaeologist in the conservation process. In addition to these various international attempts to address the issues of archaeological site conservation through the creation of charters and other doctrinal guidelines, a conference to discuss the realities of such standards was held in Cyprus in 1983 under the auspices of ICCROM and UNESCO. In the context of the conference subject, that is, archaeological sites and finds, conservation was defined as traditionally concerned with the preservation of the physical fabric in a way that allows maximum information to be retrieved by further study and analysis, whereas restoration involves the representation of objects, structures, or sites so that they can be more visually "accessible" and therefore readily understood by both scholars and the public.

From the scholar's position, the maximum scientific and historical information will be obtained through recording, sampling, and analysis immediately on exposure or excavation. With each passing year, except under unique circumstances, sensitive physical information will be lost through exposure and weathering. It is true that when archaeologists return to previously excavated sites, they may collect new information not previously identified, but this is often the result of new research inquiries applied to existing finds and archived field notes. Exposed sites, depending on the nature of the materials, the environment, and the state of closure of the site, will yield limited, certainly diminished archaeometric information, especially for fragile materials or features such as macro- and microstratigraphy, surface finishes, impressions, and residues. Comprehensive sampling programs, instrumental recording, and reburial maximize the preservation of the physical record both indirectly and directly.

Sites with architectural remains and landscape features deemed important to present for public viewing require quite different strategies for conservation and display. Here the record of approaches is far older and more varied, both in method and in result (e.g., the Arch of Titus [Figure 7], Knossos, Casa Grande, Pompeii, and the Stoa of Attalos). Failure to distinguish and specify what is to be conserved on site, or retrieved for that matter (given the impossibility of preserving everything), makes for a confused and often compromised archaeological program and interpreted site. Too often conservation is asked to address the dual requirements of an archaeological site as document and place without explicit definition and identification of what is actually to be preserved. The results have often been compromised physical evidence through natural deterioration—or worse, through failed treatments meant to do the impossible. On the other end, the need to display has sometimes resulted in confused and discordant landscapes that deny the entire story of the site and the natural and sublime state of fragmentation all ruin sites possess. This last point is especially important on the subject of interpretation and display.
In an effort to capture the economic benefits of tourist development, many archaeological sites have been directly and heavily manipulated to respond to didactic and recreational programs deemed necessary for visual understanding by the public. In many cases this has resulted in a loss of place, accompanied sometimes by accelerated damage to those sites unprepared for development and visitation. To balance this growing trend of seeing archaeological sites as predominantly outdoor museums, shaped by current museological attitudes and methods of display, it would be useful to approach such sites instead as cultural landscapes with ecological concerns. A more balanced combination of approaches could also mediate the often difficult but powerful overlay of subsequent histories visible on archaeological sites, including destruction, reuse, abandonment, rediscovery, and even past interpretations.

Like all disciplines and fields, archaeological conservation has been shaped by its historical habits and by contemporary concerns. Important in its development has been the shifting, even expanding notion of site conservation to include the stabilization and protection of the whole site rather than simply in situ artifact conservation or the removal of site (architectural) features. The public interpretation of archaeological sites has long been associated with the stabilization and display of ruins. Implicit in site stabilization and display is the aesthetic value many ruin sites possess, based on a long-lived European tradition of cultivating a taste for the picturesque. With the scientific investigation and study of many archaeological sites beginning in the late nineteenth century, both the aesthetic and the informational value of these sites were promoted during excavation-stabilization. In contemporary practice, options for archaeological site conservation have included reconstruction, reassembly (anastylosis), in situ preservation and protection including shelters and/or fabric consolidation, ex situ preservation through removal, and excavation or reburial with or without site interpretation. Whatever the level of intervention, that is, whether interpretation as a ruin is achieved through anastylosis or reconstruction, specific sites, namely those possessing monumental masonry remains, have tended to establish an idealized approach for the interpretation of archaeological sites in general. However, many sites, such as earthen tells, at once challenge these ingrained notions of ordered chaos and arranged masonry by virtue of their fragile materials, their temporal and spatial disposition, and the sometimes conflicting relationships among foreign and local professionals and traditional communities. Moreover, changing notions of "site" have expanded the realm of what is to be interpreted and preserved, resulting in both archaeological inquiry and legal protection at the regional level. These aspects of site conservation and interpretation become all the more difficult when considered in conjunction with the demands of tourism and of site and regional development within larger physical and political contexts.

Archaeological sites, like all places of human activity, are constructed. Despite their fragmentation, they are complex creations that depend on the legibility and authenticity of their components for public meaning and appreciation. How the legibility and authenticity of such structures and places are realized and ensured must be carefully considered and understood for effective conservation.
Certainly conservators, archaeologists, and cultural resource managers need to know well the theoretical concepts and the history of those concepts pertaining to conservation; they need to know something of the historical and cultural context of structures and sites, of archaic or past building technologies, and of current technical solutions. They need to familiarize themselves with the political, economic, and cultural issues of resource management and the implications of their work for local communities, including issues of appropriate technology, tradition, and sustainability. The basic tenets of conservation are not the sole responsibility of any one professional group. They apply instead to all those involved in the conservation of cultural property and represent general standards of approach and methodology. From the broadest perspective, archaeology and conservation should be seen as a conjoined enterprise. For both, physical evidence has to be studied and interpreted. Such interpretations are founded on a profound and exact knowledge of the various histories of the thing or place and its context, on the materiality of its physical fabric, on its cultural meanings and values over time, and on its role and effect on current affiliates and the public in general. This implies the application of a variety of specialized technical knowledge, but ideally the process must be brought back into a cultural context so that the archaeology and the conservation project become synonymous.

Frank Matero is Professor of Architecture and Chairman of the Graduate Program in Historic Preservation at the University of Pennsylvania. His research is focused on the conservation of buildings and archaeological sites, including Mesa Verde, Bandelier, Casa Grande, and other American Southwest sites; Catal Hoyuk and Gordion in Turkey; and Chiripa in Bolivia.
Endangered Species Conservation

NOAA Fisheries is responsible for the protection, conservation, and recovery of endangered and threatened marine and anadromous species under the Endangered Species Act. The ESA aims to conserve these species and the ecosystems they depend on. To implement the ESA, we work with the U.S. Fish and Wildlife Service and other federal, tribal, state, and local agencies, as well as nongovernmental organizations and private citizens.

Under the ESA, a species is considered:
- Endangered if it is in danger of extinction throughout all or a significant portion of its range.
- Threatened if it is likely to become endangered in the foreseeable future.

Our Work Under the ESA

Our work to conserve and recover endangered and threatened marine species includes:
- Developing and implementing recovery plans for listed species (Section 4).
- Monitoring and evaluating the status of listed species (Section 4).
- Consulting on federal actions that may affect a listed species or its designated critical habitat to minimize possible adverse effects (Section 7).
- Entering bilateral and multilateral agreements with other nations to encourage conservation of listed species (Section 8).
- Investigating violations of the ESA (Section 9).
- Cooperating with non-federal partners to develop conservation plans, safe harbor agreements, and candidate conservation agreements with assurances for the long-term conservation of species (Section 10).
- Issuing permits that authorize scientific research to learn more about listed species, or activities that enhance the propagation or survival of listed species (Section 10).
- Designating experimental populations of listed species to further the conservation and recovery of those species (Section 10).
- Issuing determinations regarding the pre-listed or antique status of ESA species parts (Section 10).

ESA By the Numbers

Additional species are currently under review or have been proposed for ESA listing. NOAA Fisheries and the U.S. Fish and Wildlife Service share responsibility for administering the ESA. Generally, NOAA Fisheries manages marine species and anadromous species (fish that are born in freshwater, spend most of their lives in saltwater, and return to freshwater to spawn), including whales, corals, sea turtles, and salmon. The U.S. Fish and Wildlife Service manages land and freshwater species such as polar bears, sea otters, and manatees. Both U.S. species and foreign species are protected under the ESA.

Listing Species Under the ESA

Before an animal or plant species can receive ESA protections, it must first be added to the federal lists of threatened and endangered wildlife and plants. Once NOAA Fisheries determines that a species warrants listing, it adds the species to its lists at 50 CFR 223.102 (threatened species) and 50 CFR 224.101 (endangered species). All plant and animal species, except pest insects, are eligible for listing.

Monitoring Species Status

The conservation status of all species listed under the ESA must be reviewed at least once every 5 years. The review evaluates whether the endangered or threatened classification is still appropriate for the species. These 5-year reviews consider recent recovery progress and the level and impact of ongoing, new, or future threats. They also incorporate any new information about the species.

Designating Critical Habitat

One of the main purposes of the ESA is to provide a means for conserving the ecosystems that threatened and endangered species depend upon for survival and recovery.
Specific areas, and areas containing features, that are essential for the conservation of an ESA-listed species may be designated as "critical habitat." Once critical habitat is designated, federal agencies consult with NOAA Fisheries to ensure their actions are not likely to destroy or adversely modify the critical habitat. Critical habitat does not affect land ownership or set up a refuge or closed area, and it does not restrict private citizens' use of the area. Critical habitat also does not mandate government or public access to private lands.

Recovering Endangered and Threatened Species

Recovery is the process of restoring listed species and their ecosystems to the point where they no longer require ESA protections. To guide efforts to bring these species back to health, we develop recovery plans that outline the path and activities required to restore and secure self-sustaining wild populations. We collaborate with federal, state, and local governments, as well as tribal nations and interested nongovernmental stakeholders, to create these plans. Conservation groups; academia; tribal nations; and federal, state, and local governments have all made important contributions to the recovery of many endangered and threatened species. We partner with these organizations in many ways to minimize harmful effects on listed species and work toward their recovery.

ESA Regulations, Policies, and Guidance

We have issued regulations, national policies, and guidance to promote efficiency and consistency in implementing the ESA to conserve and recover marine species.

Conservation & Management

NOAA Fisheries and the U.S. Fish and Wildlife Service share responsibility for implementing the Endangered Species Act, which is the primary way the federal government protects species in danger of extinction. The purpose of the act is to conserve endangered and threatened species and their ecosystems. NOAA Fisheries is responsible for endangered and threatened marine and anadromous species—from whales and seals to sharks, salmon, and corals. The U.S. Fish and Wildlife Service is responsible for terrestrial and freshwater species, but also has responsibility over several marine species like sea otters, manatees, and polar bears. The two agencies also share jurisdiction over several other species, such as sea turtles and Atlantic salmon. Currently, NOAA Fisheries has jurisdiction over more than 160 endangered and threatened marine species under the ESA.

Before an animal or plant species can receive the protections provided by the ESA, it must first be added to the federal lists of endangered and threatened wildlife and plants. Once a species is listed, several requirements and prohibitions are triggered to provide for the species' conservation. Marine mammals that are listed as endangered or threatened are also considered "depleted" under the Marine Mammal Protection Act. Learn more about the ESA listing process.

An endangered listing prohibits the:
- Import and export of the species.
- Sale and/or offer to sell the species in interstate or foreign commerce.
- Delivery, receipt, carriage, transport, or shipment of the species in (1) interstate or foreign commerce, and (2) the course of a commercial activity.
- "Take" of the species (e.g., by harassing, harming, pursuing, hunting, shooting, wounding, killing, trapping, capturing, or collecting) within the United States, within U.S. territorial seas, or on the high seas.

These ESA prohibitions apply to all persons under U.S.
jurisdiction, but permits may be issued to authorize specific prohibited activities. Learn more about endangered species permits. For threatened species, we may issue regulations deemed necessary and advisable for the conservation of the species. These regulations can extend some, or all, of the prohibitions that apply to endangered species to threatened species. The ESA also requires us to: Designate critical habitat for the conservation of the species. Consult on federal actions that may affect a listed species, or its designated critical habitat, to minimize possible adverse effects. Develop and implement species recovery plans. Learn more about these topics below. One of the main purposes of the ESA is to provide a means for conserving the ecosystems that threatened and endangered species depend upon for survival and recovery. When a species is listed under the ESA, NOAA Fisheries must determine what areas meet the statutory definition of critical habitat: Specific areas within the geographical area occupied by the species at the time of listing that contain physical or biological features essential to conservation of the species, and that may require special management considerations or protection. Specific areas outside the geographical area occupied by the species if the agency determines they are essential for conservation of the species. Critical habitat designations include areas or habitat features that support the life-history needs of the species, such as nursing, pupping or breeding sites, or foraging areas containing needed prey species. In other words, areas that are designated as critical habitat are necessary to support the species’ recovery. Once critical habitat is designated, federal agencies are required to consult with NOAA Fisheries to ensure their actions are not likely to destroy or adversely modify the critical habitat. Critical habitat is not a sanctuary, refuge, or closed-area. Critical habitat does not affect land ownership or restrict private citizens’ use of the area. Critical habitat also does not mandate government or public access to private lands. Consulting on Federal Actions The ESA directs all federal agencies to work to conserve endangered and threatened species. Section 7 of the ESA, titled "Interagency Cooperation," requires that federal agency actions are not likely to jeopardize the existence of any ESA-listed species, or to destroy or adversely modify their critical habitat. Under section 7, federal agencies must consult with NOAA Fisheries when any action they carry out, fund, or authorize (such as through a permit) may affect a listed species or their critical habitat. This process usually begins with the federal agency requesting an informal consultation with NOAA Fisheries in the early stages of project planning. During this consultation, we might discuss the types of listed species that live in the proposed action area and the effect the proposed action may have on those species. Species Recovery Planning Endangered and threatened species have different needs and may require different conservation strategies to achieve recovery. Recovery is the process of restoring listed species and their ecosystems to the point where they no longer require ESA protections. To recover a species, we work to: Reduce or eliminate threats. Restore or establish self-sustaining wild populations. 
After a species has recovered, we:
- Remove it from the list because it no longer needs ESA protection—this is known as "delisting."
- Monitor its status for no less than 5 years after delisting to ensure its recovery is sustained.

Conservation measures for endangered and threatened species may include conserving and restoring habitat, reducing entanglement or bycatch in fishing gear, preventing vessel strikes, and minimizing exposure to pollutants and chemical contaminants. Knowledge of the natural history of a species is essential to understanding its needs and developing effective and appropriate conservation measures. The ESA has been successful in preventing species extinctions—less than 1 percent of listed species have gone extinct. Although we have recovered and delisted only a small percentage of species since the ESA was enacted in 1973, hundreds of species would likely have gone extinct without its protections.

More About This Topic

Science is critical to understanding the needs and status of protected species populations, as well as the threats to their health and well-being. Our scientific understanding of these topics helps us develop and implement recovery efforts for endangered and threatened species. Examples of our work include assessing and monitoring populations, researching disease agents (e.g., pathogens, parasites, and harmful algal blooms), and developing gear modifications to reduce entanglement and bycatch.

We rely on population assessments to evaluate the status of the endangered and threatened species we manage under the ESA. These assessments collect and analyze scientific information on a species' population structure, life history characteristics and vital rates, abundance, and threats—particularly those caused by human activities. Our scientists and resource managers develop population assessment reports to inform decisions related to a protected species' listing status. The reports also inform federal or federally funded activities that may impact a species or its habitat, acceptable bycatch levels, and the scientific research and incidental take permits issued to agencies, scientific and academic institutions, and industry. Finally, population assessments allow us to evaluate the effectiveness of recovery measures and to adjust management approaches as needed.

Population assessment depends on collaboration among experts throughout our science centers. We also work closely with the U.S. Fish and Wildlife Service and many university scientists in the United States and beyond. Ship-based and aerial surveys are critical to achieving our marine mammal and sea turtle population assessment goals, which include estimating abundance and examining trends and human impacts relative to management objectives. Our science centers conduct and manage a limited number of marine mammal and sea turtle surveys each year, often with external collaborators; the number of surveys depends on funding and available ship and flight time.

The efficiency with which sound travels under water has led to increasing concern over how artificial sound affects the underwater environment. Our scientists support and conduct research to examine these potential impacts on marine animals and to increase understanding of:
- How marine animals use sound.
- How underwater acoustics can be used to assess marine animal populations.
- How and to what degree anthropogenic activities are changing the underwater soundscape.
- How these changes may impact marine animals in their acoustic habitat.
- What measures can be taken to mitigate potential impacts.

Reducing bycatch of protected species can improve the recovery of marine mammals, sea turtles, and fish. Together with the fishing industry, we work to minimize bycatch by developing technological solutions and changes in fishing practices, including gear modifications, avoidance programs, and improved fishing practices in commercial and recreational fisheries.

Species Valuation Studies

Species valuation studies assess the national benefits derived from threatened and endangered marine species, including fish, sea turtles, marine mammals, and seabirds. Determining the economic value of protected species helps us gauge the benefits and value of our corresponding conservation and recovery efforts.

Climate and Ecosystem Science

Understanding climate change impacts on the distribution and occurrence patterns of living marine resources is a high priority for NOAA Fisheries. We know relatively little about the effects of global and regional climate dynamics on species distribution, abundance, and prey availability. The Arctic in particular is a window into changing climate patterns and a suitable biological laboratory in which to observe and record the impacts of receding sea ice, warming sea surface temperatures, and variable energy flow. These impacts all affect key marine ecosystem functions and the native tribal communities that depend on Arctic resources for their livelihood and sustenance. Learn about other advanced technologies used by our scientists—including drones, satellite tagging and tracking, and genetic research—to study marine mammals and other ocean animals.

Species in the Spotlight

Of all the species NOAA Fisheries protects under the Endangered Species Act, we consider nine among the most at risk of extinction in the near future. For some, their numbers are so low that they need to be bred in captivity; others face human threats that must be addressed to prevent their extinction. We launched the "Species in the Spotlight" initiative in 2015 to bring greater attention, and marshal resources, to save these highly at-risk species:
- Atlantic salmon, Gulf of Maine distinct population segment (DPS)
- Central California Coast coho salmon evolutionarily significant unit (ESU)
- Cook Inlet beluga whale DPS
- Hawaiian monk seal
- North Atlantic right whale
- Pacific leatherback sea turtle
- Sacramento River winter-run Chinook salmon ESU
- Southern Resident killer whale DPS
- White abalone

We chose these nine species because they are all endangered, their populations are declining, and they are considered recovery priority #1C. A recovery priority #1C species is one whose extinction is almost certain in the immediate future because of rapid population decline or habitat destruction, and whose survival conflicts with construction, development, or economic activity. In most cases, we understand the limiting factors and threats to these species, and we know that the necessary management actions have a high probability of success. In some cases, we are prioritizing research to better understand the threats so we can fine-tune our actions for maximum effect. We know we can't do this alone. A major part of Species in the Spotlight is to expand partnerships and motivate individuals to work with us to get these species on the road to recovery.
Actions we and our partners are focusing on include:
- Protecting and restoring habitat.
- Encouraging community stewardship and citizen science.
- Reducing human-caused threats such as entanglement in fishing gear, habitat destruction, vessel strikes, and noise pollution.
- Breeding species in captivity.
- Cooperating with other nations.

Related resources:
- Continuing Species in the Spotlight Initiative Empowers NOAA Fisheries' Endangered Species Conservation Efforts
- Species in the Spotlight 2021–2025 Priority Action Plans
- Species in the Spotlight 2016–2020 Priority Action Plans

Conservation groups; academia; tribal nations; and federal, state, and local governments all make important contributions to the protection and recovery of endangered and threatened species. We work with these organizations in many ways to minimize harmful effects on listed species and work toward their recovery. Our work with partners includes:
- Regularly reviewing and recommending activities to help reduce threats to a listed species
- Entering into agreements to proactively conserve species before they need listing under the Endangered Species Act
- Providing grants to support species recovery
- Developing and implementing conservation strategies under species recovery plans

Cooperation With States

Section 6 of the ESA, titled "Cooperation with the States," allows NOAA Fisheries and states to collaborate in the conservation of threatened and endangered species, and in the monitoring of candidate and recently delisted species. Under Section 6, we are authorized to enter into agreements with any state that establishes and maintains an "adequate and active" program for the conservation of endangered and threatened species. Once a state enters into such an agreement, NOAA Fisheries is authorized to both assist with and fund the implementation of the state's conservation program. States can use federal funding to support management, research, monitoring, and outreach projects that have direct conservation benefits for listed species, recently delisted species, and candidate species within that state. We provide this funding in the form of Species Recovery Grants.

Species Recovery Grants to States and Tribes

Species recovery grants support management, research, monitoring, and outreach activities. Eligible species include:
- Species listed under the ESA (excluding Pacific salmonids, which may receive funding under the Pacific Salmon Recovery Fund).
- Recently delisted species.
- Species proposed for listing under the ESA.

Under the ESA, we must list species as endangered or threatened regardless of where they are found. The ESA benefits foreign species by restricting their commercial trade and facilitating bilateral and multilateral conservation efforts and agreements. We partner with the U.S. Fish and Wildlife Service and other nations through the Convention on International Trade in Endangered Species of Wild Fauna and Flora; this partnership helps ensure that international trade does not threaten species' survival. We are also a party to the Specially Protected Areas and Wildlife Protocol of the Cartagena Convention, under which we collaborate with other nations of the wider Caribbean region to conserve and manage threatened and endangered species. ESA listing of foreign species can also increase global awareness of the threats they face, which may fuel conservation efforts in their range countries.
Resources

- Draft RIR/ESA Section 4(b)(2) Preparatory Assessment/IRFA of Critical Habitat Designation for the Beringia Distinct Population Segment (DPS) of the Bearded Seal: analysis of the economic, socioeconomic, and other costs and benefits associated with designating…
- Recovery Status Review for the Main Hawaiian Islands Insular False Killer Whale Distinct Population Segment
- Recovery planning workshop summary report, from a workshop held in May 2021 for the 15 U.S. ESA-listed Indo…

Outreach & Education

- Enjoying Hawaiian monk seals from a distance is easy with this simple rule of thumb!
- August 1 to 26, 2021: Pacific Marine Assessment Program for Protected Species (PacMAPPS) information…
For the past 10 years, skin cancer has been the most frequent malignant neoplasm in Brazil and worldwide. Each year, there are more new cases of skin cancer than the combined incidence of cancers of the breast, prostate, lung and colon [1,2]. There were an estimated 188 000 new cases of skin cancer in Brazil in 2016 [3]. In the USA, on average, one person dies from melanoma every hour [4]. Non-melanoma skin cancer (NMSC), despite having low mortality, may cause physical and psychological damage to patients because these cancers mainly affect regions of the face, disfiguring the patients [5]. Skin cancer prevention consists of raising awareness in the population through educational programs and evaluation by a health professional. Skin cancer prevention and screening programs have been implemented by several health facilities, although the costs and benefits of this practice are still under discussion [6].

Barretos Cancer Hospital (BCH) is a tertiary hospital located in the city of Barretos in the state of São Paulo, 430 km from the capital city, São Paulo. The prevention department of BCH runs prevention programs for cancers such as breast, prostate, cervical, oral, colon and skin cancers. The skin cancer prevention program comprises educational activities and medical assistance conducted at the hospital and at a mobile unit (MU). In the literature, there are few reports of the use of MUs in skin cancer prevention programs: there is one MU in Switzerland, which only performs medical evaluations [7], and one in Australia, where primary prevention activities are conducted [8]. The BCH MU has visited more than 700 cities in nine states in Brazil in the past 10 years. It includes a medical outpatient module and a small surgical center. In addition to skin cancer screening, the MU also offers care for the prevention of cervical and prostate cancers, and it can provide medical assistance and minor skin surgeries. The medical assistance is provided by a general practitioner specialising in skin cancer and an oncological surgeon. The MU's methodology has previously been published [9]. The objective of the present study is to evaluate the use of the MU as part of a skin cancer prevention program, 10 years after the implementation of this program in remote areas of Brazil.

The database of BCH was used, comprising data collected by the BCH prevention MU on patients seen at the MU from 2004 to 2013. The data for 2004–2007 have already been published and were complemented with data from 2008 to 2013 [9]. The collected data consist of the total number of appointments, the number of procedures performed (biopsy, excision, cryotherapy), and the number of referrals made (in some cases, due to the size of the lesion or the clinical condition of the patient, patients were referred directly to the regular Barretos unit, and the procedure was not performed at the MU). Age, sex and staging of the malignant lesions were other variables for which data were collected. The data are described as absolute and relative frequencies. Informed consent forms were signed by each patient and are currently archived in the prevention department under supervision of the local ethics committee (approval number 377/2010).

A total of 45 872 patients with suspected skin cancer were evaluated at the MU from 2004 to 2013, an average of 4587 patients per year. Of these, 8954 surgical procedures (excision and/or biopsy) were performed.
These patients had a mean age of 64 years; 50.3% were men and 49.7% were women, from nine Brazilian states (São Paulo, Minas Gerais, Goiás, Mato Grosso, Mato Grosso do Sul, Tocantins, Pará, Rondônia, Santa Catarina) (Table 1). The youngest patient was a 21-year-old man from the state of Rondônia with a diagnosis of squamous cell carcinoma. The oldest patient was a 99-year-old woman from the state of Minas Gerais with a diagnosis of basal cell carcinoma. Among the patients undergoing surgical procedures, 7098 (15.5% of all patients evaluated) received pathological confirmation of malignancy. The remaining 38 774 patients (84.5%), in whom no malignant skin lesion was confirmed, were instructed to continue follow-up with their general practitioner. Of the malignant lesions, 81.5% were of the basal cell subtype, 14.5% squamous cell, 1.7% melanoma, 0.7% metatypical, 1.2% Bowen disease, and 0.4% of two other subtypes (dermatofibrosarcoma and Merkel cell) (Table 2). Among the nonmelanoma cancers, 85.7% were diagnosed at stage 0 or 1, and 85.9% of the melanomas were diagnosed at stage 0 or 1 (Table 3). Two cases of Merkel cell carcinoma were diagnosed, one at stage 2; the other patient chose to continue treatment at a different hospital unit, and hence BCH does not have follow-up data. Two cases of dermatofibrosarcoma were diagnosed, one at stage 3; the other patient also preferred to continue treatment at another hospital unit.

Brazil, a developing country with a large territory and significant social inequality, has an overstretched public health system, the Unified Health System (Sistema Único de Saúde, SUS). The SUS, whose pillars are equity, universality, comprehensiveness, decentralization and social monitoring, cannot provide quality assistance to the entire Brazilian population, for several reasons. One of these may be the difficulty of retaining physicians in regions far from large urban centers. For example, in the state of Amazonas, there is one dermatologist for every 90 000 residents [10]. Physicians are unequally distributed in Brazil, with the majority preferring to live in large cities [11]. Several policies have attempted to retain physicians in peripheral regions; the most recent was the More Doctors Program adopted in 2014, a program highly criticized by clinicians and for which effectiveness data are still awaited [12]. In the 1970s, the Brazilian federal government ran incentive programs, through tax breaks and facilitated credit, for the colonization of the country's northernmost regions. This led to the migration of European descendants, who had lived in southern Brazil, to colonize the northern region, where the main labor market was, and still is, agriculture [13].

In addition to the scarcity of physicians in remote areas, which can delay early diagnosis of skin cancer, the tropical climate is an important factor. A high incidence of UV radiation and an annual mean temperature of approximately 24°C [14] promote the development of skin cancer. In addition, 17% of the Brazilian population works and lives in rural regions [15]. The hottest regions of Brazil (north and central-east) were colonized mainly by European descendants with Fitzpatrick skin types I and II [16], and these people work in agriculture. These factors have accentuated the incidence of skin cancer in this region [17,18]. BCH receives patients from all over Brazil, and every appointment and treatment is free of charge.
Approximately 4000 appointments are held each day at the hospital, which has a medical team of 350 specialized health professionals. The hospital conducts work on prevention, diagnosis, treatment and research in oncology and is recognized at national and international levels [19,20]. The skin cancer prevention program implemented by BCH comprises educational and care components. In addition to several lectures targeting the general population, it also trains health professionals from several cities in Brazil. These health professionals receive 3 days of training on the prevention, diagnosis and treatment of skin, oral, breast, prostate and cervical cancers.

The patients seen at the MU are screened by the city's nurse, who received prior training at BCH. During the training conducted at BCH, these health professionals learn how to screen the population in their cities. When a patient with a complaint of a skin lesion comes to the city's health unit, the health professional, usually a nurse, initiates the screening; if the nurse suspects the skin lesion to be a malignancy, the patient is referred for assistance at the MU whenever it is in the patient's city. There was a significant improvement in the screening conducted by the nurses after the 3-day training was implemented at BCH: the percentage of patients with skin cancer detected by the nurses increased from 12% to 30%.

The MU visits the Brazilian cities once a year. The local nurse has the option to refer a patient with a suspected lesion to another oncology department, which is routine in places the MU does not visit. However, using these services in the places the MU does visit is difficult, owing either to the absence of specialists near these cities or to difficulties in transportation to the nearest referral center. Therefore, the majority of the patients identified by the nurse prefer to wait for the MU. The MU has been well received by the population in these cities, mainly because of its convenience – the MU can provide training and can diagnose and treat skin cancer in a patient's own city, without the need for travel and, consequently, without unnecessary patient costs. A total of 92% of the lesions diagnosed at the MU are treated at the MU itself. Only 8% of patients must be referred to BCH, mainly due to lesion size. These patients immediately receive a date for their appointment at BCH, and the Health Bureau of their own city is responsible for their transportation. At present, BCH has 17 MUs, but only one performs skin cancer surgeries. Over the 10 years of operation of this prevention program, a steady flow of referrals to the MU, and anticipation of the MU's arrival in the cities, was generated. All of the assistance provided at the MU is free of charge.

BCH prevention department staging data (Table 3) were compared with the general BCH data (Table 4) – cases referred to BCH by basic health units, not through the BCH skin cancer prevention program. In these referrals, the patient already has a positive biopsy result, with the diagnosis made by the health team of the municipality. This may have generated some delay, both in diagnosis (because there are not enough specialists in Brazil [10]) and because of the bureaucracy of the Brazilian health system.
Connecting a patient with a specialist physician begins with a request from the general practitioner to the municipal health department, which in turn asks the state health department to look for a vacancy for the patient in an oncological hospital. These requests are time consuming. These data show that among patients diagnosed through the prevention department – whose objective is to offer specialized treatment in the patient's own municipality, without referral to other centers – only 0.3% of NMSCs and 3.3% of melanomas were at stage 3 or 4. Among patients referred to BCH by a local doctor in their city, 1.4% of NMSCs and 27.3% of melanomas were at stage 3 or 4, a significant difference. In 505 of these NMSC patients, the skin lesions were larger than 5.0 cm – that is, they should have been easy to diagnose, rather than reaching a late-stage diagnosis with consequent increased mortality. For melanoma, the difference in late diagnoses was especially striking (27.3% vs 3.3%): late-stage melanoma was roughly eight times more frequent among externally referred patients.

Skin cancer prevention initiatives are highly cost effective and may also be cost-saving. Skin cancer imposes a significant cost burden on many countries, and health expenditure for this disease will grow as incidence increases. Public investment in skin cancer prevention and early detection programs shows strong potential for health and economic benefits [21]. A cost-effectiveness evaluation of the Australian SunSmart program demonstrated a reduced burden of disease and high cost-effectiveness [22]. There is insufficient evidence to support opportunistic skin cancer screening of the general population, since it has not been shown to reduce overall mortality in the screened population [6]. Screening is indicated for the high-risk population: men and women older than 65 years with Fitzpatrick type I or II skin, people with atypical moles or more than 50 moles, and those with a personal history of melanoma or a strong family history of melanoma [23,24].

The MU provides assistance to remote communities of patients previously selected by a local health team, performing both punctual actions, such as minor surgeries, and longitudinal actions, such as cancer education training for health professionals in these communities. With the aim of improving this prevention program, BCH initiated a skin cancer screening modality through teledermatology in 2014; its methodology and results will be published soon. This study demonstrated a significant number of skin cancer cases diagnosed and treated by the MU, showing that the MU positively contributes to the early diagnosis and treatment of skin cancer among populations residing in remote areas of Brazil.
The late Dr. Stephen R. Covey, famed author of the international bestseller The Seven Habits of Highly Effective People, once illustrated his approach to success in front of a packed convention hall. He called up a volunteer, who was asked to attempt to fit various rocks into a clear plastic bucket. The rocks differed in size, ranging from larger rocks down to pieces of gravel, and each was affixed with a different label, such as family, work, spirituality, exercise. The volunteer surveyed the group of rocks on the table and instinctively picked up the smallest stones first and placed them in the bucket, and then picked up increasingly larger rocks. As he advanced, the volunteer experienced increasing difficulty finding room in the bucket for the remaining rocks. Ultimately, despite all effort, there was simply not enough room in the bucket for all the rocks to fit.

After the volunteer gave up, Covey produced a new, empty bucket and suggested that the volunteer start again, this time placing the larger rocks in first. Following instructions, the volunteer began placing the larger rocks into the bucket. After all the big rocks had been placed, the volunteer then added the smaller rocks and finally the gravel. They all fit into the bucket. The crowd applauded.

Covey then commented on the lesson they had just learned. Life is complex, he explained, filled with a plethora of goals, needs and interests. One might think that success is limited to those who either limit their aspirations (don't have so many rocks) or simply find more time in the day (get a bigger bucket). There is, however, a third route to success, and that is to prioritize wisely. The big rocks represented one's most important goals and values, and the lesson was simple: Those need to go in first – they need to be prioritized. Once they are in, there will be room around them for everything else.

Prioritizing as Integral to Success

The key to a successful life is not merely identifying and pursuing goals, but also learning how to prioritize appropriately. Goals, time allocations and focus must each be addressed in accordance with this prioritization. The first step, of course, is to identify the goals and values one seeks to incorporate into one's life – which rocks need to go into the bucket. The crucial next step is to determine the relative "sizes of the rocks," identifying which are the "big ones" – in other words, one's primary goals and values – and make sure they go in the bucket first. Unfortunately, it is all too common that we instinctively try to take care of the "small rocks" first, eventually discovering that the big ones got left out.

Goals, desires, interests and expectations are all part of a normal, healthy life. It's great to be ambitious and strive to "have it all." But while American culture and opportunity make great achievements attainable, the spectrum of daily pressures confronting each individual is truly massive, especially against the backdrop of an incessant bombardment of messages, texts, emails and calls. It is no wonder that we feel as if we are jamming rocks into an already filled bucket. It is almost inevitable that we will become extremely frustrated and, in many instances, unfulfilled. So how is one to do it all? How does one achieve success while simultaneously retaining a semblance of peace of mind? How are we to fit so many big rocks into our limited-capacity buckets?
Setting Priorities by Refining Goals

It is an all-too-common malady that people tend to adopt de facto goals that do not reflect their true, personal aspirations. Such a disconnect will almost inevitably result in frustration, even if the goals are spectacularly achieved. In addition, even when someone sets healthy goals that honestly reflect his values, he may very well evaluate his success against yardsticks that do not reflect those underlying values. It is, therefore, critical that each person review the connection between their deeper aspirations and the goals they have set for themselves, as well as the manner by which they measure success. The first step is to reexamine one's current goals in life and review how they were selected in the first place. For example, did these goals originate within a context created by parents, teachers, community or friends, and if so, do they reflect one's own personal values and aspirations? It is both healthy and appropriate for a child, as well as a developing adult, to be eager to meet the approval of those he loves and respects. But as an adult, one must consider whether the activities guided by seeking such approval remain consistent with one's own values and priorities. For the mature individual, a failure to integrate personal values and priorities as the primary driver of his goals will almost inevitably lead to feelings of inadequacy and frustration. Although there is no magic formula for selecting appropriate goals, below is a suggested four-step process.

Life choices must be guided by goals. The first and perhaps most challenging exercise is identifying the goals one has already chosen – consciously or not – that are already guiding one's life. Many people make life choices instinctively or without thorough consideration of what influenced their choices. This attempt to articulate their goals may be the first time that some explore how these goals came to be. But it goes one step further. For a thought to be actionable, it needs to be articulated. A person averages between 12,000 and 60,000 thoughts a day, most of which are automatic or irrelevant. For a thought to justify an allocation of time and effort, it must first be articulated. It needs to be considered, spoken and written down. Only then can it be properly evaluated and deemed worthy of an action.

I once had the opportunity to spend an afternoon with a great sage, a man who accomplished so much in his life and yet always seemed to be at peace. Among my many questions, I asked him how he was able to seemingly "do it all." He looked at me with his loving but piercing eyes and with a booming voice said one word… Cheshbon. Cheshbon is a one-word reference to an ancient Jewish practice called cheshbon hanefesh (spiritual accounting). It means taking an ongoing accounting of your life. This practice recognizes the importance of articulating one's goals and priorities before taking action. It celebrates the practice of thinking before (and after) action, in order to ensure that the action is both meaningful and productive. Creating a habit of daily accounting is critical to the application of Covey's "rocks in the bucket" exercise. It creates a structure within which goals can be prioritized. It shows a person that by taking the time necessary to articulate and account for deliberate thought and ensuing action, he can accomplish, grow and ultimately succeed. Most people, at least deep down, have a good sense of what objectives they should and should not pursue.
By failing to articulate their goals, however, they lack the perspective necessary to allocate their time and effort properly. Moreover, if the goals are not clearly and deliberately articulated in advance, neither objectives nor actions will be deeply grounded, possibly resulting in ill-advised changes in direction or focus as a response to some new inspiration or idea.

Once goals are determined, the next step is to question one's own motivations in selecting these specific goals. One must ask, "What made me choose these particular goals and leave out others?" The exercise of connecting goals with their true motivations is not intended as a judgmental process, nor does it necessarily reflect how proper any actual motivations may be. The exercise is necessary because, curiously, success in achieving goals depends on the underlying motivations for pursuing those goals in the first place. There is nothing wrong with being ambitious, for example, in a particular sphere of pursuit. But one's measurement of success, and thus the resulting self-image and degree of satisfaction that will ensue, is influenced by the nature of the ambition and the true values that inform that goal. For one thing, goals frequently are set in response to one's environment. Occasionally, we simply want what others have, just to be like everyone else. At other times, as noted earlier, we seek to impress, or earn the approval of, others. Or perhaps we take certain values for granted, when they might not be as simple as we think. There is no shortage of subconscious factors that profoundly influence the goals we choose. Why is it so critical to know why we want to be rich (or learned, or have many friends, or be kind, or play an instrument well)?

Mihaly Csikszentmihalyi, in his groundbreaking work Flow, introduces an interesting concept regarding life satisfaction. He suggests a distinction between two different types of life experiences: autotelic and exotelic. Autotelic comes from the Greek words auto, meaning self, and telos, meaning goal. The experience itself is the goal. An autotelic personality derives satisfaction from himself, and from the very activities in which he or she is involved. The activities are goals in and of themselves, not merely a path to a different goal. An autotelic experience is intrinsically rewarding; life and time spent are justified in the present, rather than being held hostage to a future gain. The voyage is for the scenery and companionship, not to reach a destination. An exotelic experience, by contrast, is an activity undertaken not for its own sake but exclusively to achieve a separate result. The ultimate objective may be mundane or profound, such as to afford a sports car or feed the starving, but in any event, an exotelic experience has no inherent value in and of itself. Csikszentmihalyi observes that when engaged in an exotelic experience, one almost necessarily has the feeling that the time being spent is hollow. After all, there is no meaning in the activity itself, only in its effects. Much of people's frustration and emptiness results from their engaging in their daily activities as exotelic experiences. Jobs are viewed as mere conduits to an income; errands or carpools are undertaken in satisfaction of duties. It is thus no wonder that these activities fail to generate joy, or even some peace of mind.
Without a natural flow between one’s goals and the motives that drive them, the pursuit of those goals necessarily becomes an exotelic experience. Such a person is constantly running, never satisfied, never present in the moment and never able to fully embrace his experiences, since the activity is merely for the resultant benefit. In truth, the person doesn’t really want to engage in the activity at all, but only wants the activity to have been done. When one’s goals naturally reflect the motivations behind them, the true, intrinsic value of each activity can be explored, allowing the activity itself to become meaningful. This level of introspection can convert actions from exotelic to autotelic, increasing the level of engagement and ultimately the degree of success likely to be realized. Even more importantly, identifying one’s motivation provides the opportunity to reject it as unnecessary and even unwanted. For example, one may have chosen financial success as a goal, without ever really knowing why. Taking the time to reconsider this question may reveal that his motivation was really to satisfy the expectations of others, and that his own preference would be to give up on some of his financial goals and replace them with other goals, which, for example, he may have rejected as a young man but that have become meaningful to him over time. In other words, synthesis between goals and motivations enables one to identify the “big rocks” in one’s life, and thereby choose wisely which to place first in the bucket, which to place later and which to reject outright. One of the surefire ways to distinguish between an autotelic and an exotelic goal is to determine whether the interest is solely in the outcome, or if it is also in the activity itself. Upon recognizing the distinction between the value of an activity itself and its outcome, one can begin to appreciate that success can also be viewed through the prism of the activity or its outcome. True and meaningful success is achieved when accomplishment is realized through the activity itself. Success does not happen at the destination, it happens along the way. Often, people desire to achieve the success realized by others. Alas, they view the other person’s results as the success, rather than the choices they made to get there. They don’t want to engage in the activities that made the difference – they just want the end result. By failing to appreciate the investment that was necessary to achieve success, they cripple their own ability to duplicate the results. If success is evaluated on results alone, a misleading picture is painted. One fails to develop the appreciation that the central focus of life must be the efforts to get to the result, and thus, that for life to have meaning, those efforts themselves must have meaning. Therefore, when prioritizing goals, we must not focus on the outcomes. After all, the outcome is merely a result of the important part – the effort. Moreover, we have no real way of predicting any outcome (or even assuming a necessary connection between an activity and its apparent result). Greatness is in the journey, not the destination. By so viewing life, joy and meaning are attainable. When we start to appreciate and enjoy the journey, we will find the inner strength to be more successful. We will look at the myriad responsibilities we juggle each day with a sense of pride, rather than frustration, appreciating that it is during the struggle itself that greatness is born. 
The final piece of the puzzle is learning to measure your success against your own aspirations and capacity, rather than against the success of others. Judging oneself in contrast to others is one of the leading causes of dissatisfaction in life. As children, we are socialized to view success in a comparative fashion – in sports, spelling bees, and even grades. We gauge our success by whether we are doing better than our peer group. Such a mentality is intrinsically destructive. Success must be understood as being internal and personal. It is based on one's ability to grow within one's own life and conditions, to meet one's potential.

The final commandment of the Ten Commandments is "Thou Shalt Not Covet." Its placement there seems odd, since it appears to follow a list of far more egregious sins, such as murder, adultery, kidnapping and false testimony. These transgressions can literally destroy someone's life. Merely looking over the fence at your neighbor's new car seems quite innocuous in comparison. The lesson is actually quite profound. Focusing on another's possessions can also have a tragic effect. It can destroy a life – but this time it's your own. When desiring the possessions or accomplishments of others, one's own goals and priorities become obscured. Families can be ruined and lives and relationships destroyed when one's own life is viewed through the prism of someone else's. Great people don't look out the window to see what their neighbor has acquired. Great people look at the world to see what is needed, and then undertake to make a difference. Great people try to create a better version of themselves every day.

So stop looking. Stop comparing. Stop driving yourself and your family crazy that you haven't been able to win the race, even when the race is wrapped in the image of achieving personal greatness. Take the time to identify your goals and understand where they come from. Then ensure that you engage in your daily activities as autotelic experiences. Allocate the appropriate amount of time to each activity and demand excellence of yourself. With this formula, excitement will be felt for the journey, rather than pressure and anxiety in pursuing the destination.
Dominique has had end-stage renal disease since the age of 16 and has been on dialysis for years. She has received three kidney transplants.

Diet and nutrition are a vital part of living well with kidney disease and/or failure. A dialysis (or renal) diet is one that is prescribed for people who have chronic renal failure and are on dialysis. It may also be prescribed to someone who has been given a diagnosis of imminent kidney failure, meaning that their kidneys will eventually fail. It is designed to control how much potassium, sodium, phosphorus, calcium, and fluid a person ingests. There is no one diet suited to everyone, but there are general guidelines. Your doctor and dietician will help tailor a regimen that suits your specific needs.

Potassium (K) is a mineral that protects blood vessels from damage and keeps vessel walls from thickening. Once inside the body, potassium becomes an ion and functions as an electrolyte. Electrolytes contribute to the regulation of many life-sustaining processes, such as maintaining normal blood pressure, nerve function and muscle contraction. Potassium also assists in carbohydrate and protein metabolism. According to the National Kidney Foundation, a potassium-restricted diet typically allows about 2,000mg per day. Ultimately, your doctor should determine your ideal potassium intake based on your nutritional needs and current health status.

Early symptoms of high potassium levels, a condition called hyperkalemia, include muscle weakness and numbness and tingling in the fingers and toes. If potassium levels continue to increase, hyperkalemia can lead to heart palpitations and an irregular heartbeat; without immediate medical attention, the heart can stop beating without any warning.

Most fruits and vegetables are high in potassium, so make choices like apples, grapes, pineapple, green beans, summer squash, bell peppers and onions. These choices tend to have lower potassium counts and still provide many of the other vitamins and minerals that the body needs to stay healthy. Be sure to check with your dietician for daily serving sizes. Avoid foods that are very high in potassium, such as bananas, oranges, cantaloupe, mangoes, potatoes, legumes, peas, spinach, dried fruits and tomatoes. Milk products and chocolate products also tend to be high in potassium. Each month, when you receive your lab results from the dialysis center, a "safe" potassium level will fall between 3.5 and 5.0 mEq/L.

When most people hear sodium (Na), they think of table salt. Salt is the mineral compound sodium chloride (NaCl). Foods may contain sodium chloride (salt) or sodium in other forms. It can be tricky to identify the different forms of sodium, so talk with your dietician for help identifying them. Following a low-sodium diet means limiting salt and other ingredients that contain high amounts of sodium. Sodium is one of the body's major electrolytes and helps control the fluids going in and out of the body's tissues and cells. Sodium also contributes to the regulation of blood pressure and blood volume, helps transmit impulses for nerve function and muscle contraction, and helps regulate the acid-base balance of blood and other body fluids. As a guideline, if you're doing in-center hemodialysis, it is recommended that you keep your sodium intake between 1,200mg and 2,000mg per day.
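As a rough illustration of how these daily limits work in practice, here is a minimal Python sketch that tallies the potassium and sodium in a day's food diary against the figures quoted above (about 2,000mg of potassium and 1,200–2,000mg of sodium for in-center hemodialysis). The diary entries and their mineral contents are made-up placeholder values, not a nutrition database, and this is not medical software; real targets should come from your doctor and dietician.

    # Illustrative sketch only: placeholder numbers, not a nutrition database.
    # Limits are the ones quoted in this article; yours come from your care team.
    POTASSIUM_LIMIT_MG = 2000          # potassium-restricted diet, per the NKF figure above
    SODIUM_RANGE_MG = (1200, 2000)     # in-center hemodialysis guideline quoted above

    # Hypothetical one-day food diary: (item, potassium mg, sodium mg)
    diary = [
        ("apple", 150, 2),
        ("green beans, 1/2 cup", 90, 5),
        ("chicken breast", 330, 75),
        ("white rice, 1 cup", 55, 5),
    ]

    total_k = sum(k for _, k, _ in diary)
    total_na = sum(na for _, _, na in diary)

    print(f"Potassium: {total_k} mg of a {POTASSIUM_LIMIT_MG} mg daily limit")
    if total_k > POTASSIUM_LIMIT_MG:
        print("  -> over the daily potassium limit")

    low, high = SODIUM_RANGE_MG
    print(f"Sodium: {total_na} mg (guideline {low}-{high} mg per day)")
    if total_na > high:
        print("  -> over the daily sodium guideline")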
Too much sodium can be harmful for people with kidney disease because the kidneys can no longer eliminate excess sodium and fluid from the body. As sodium and fluid build up in the bloodstream and body tissues, blood pressure increases. There are several other sodium-related complications: edema, which is swelling in the legs, hands, face and other extremities; heart failure, because excess fluid in the bloodstream can overwork the heart, weakening and/or enlarging it; and shortness of breath, caused by fluid building up in the lungs, which makes it difficult to breathe.

Once you reach the point where you require dialysis, you will be asked to follow a low-sodium diet. The diet will help control blood pressure and fluid retention in the body and its tissues, and controlling sodium intake will help you avoid cramping and blood pressure drops during dialysis. It is imperative that you work closely with your dietitian, who will determine how much sodium you can eat each day and counsel you on regulating it in your diet. There are numerous salt substitutes on the market today. Ask your dietitian before you start using any of them, because some may contain potassium, which needs to be limited on a renal diet. The list of foods that are high in sodium is lengthy. Some of the things that should be avoided include table salt, processed foods and cured meats, cheese, pickles, sauces and salad dressings, and snack foods (like potato chips). You can get a more comprehensive list from your dietician at your dialysis center.

Phosphorus (P) is a mineral that is necessary for building and maintaining strong bones and teeth as well as a healthy metabolism. The kidneys remove excess phosphorus from the body through the urine. Unfortunately, when your kidneys have failed and you require dialysis, they are no longer able to remove excess phosphorus from the body. Dialysis removes some of the phosphorus from your blood, but inefficiently. In order to prevent serious complications, you must balance your phosphorus level through diet, dialysis and medications.

When you get your monthly lab results, the normal phosphorus level in the blood is between 3.5 and 5.0 mg/dL. When the level of phosphorus is higher than this range, calcium is pulled from your bones and teeth to form calcium-phosphorus crystals, which are deposited in the blood vessels, skin, and organs. This can cause itching, skin sores, weakened bones and stiff blood vessels, and the risk of heart failure and death is significantly increased.

The National Kidney Foundation (NKF) recommends that chronic kidney disease patients limit their phosphorus intake to 800mg to 1,000mg per day. Foods high in phosphorus that should be limited or avoided are dairy products, beer, dark colas, organ meats (i.e., liver), processed meats, dried beans or peas, nuts, seeds, quick breads, bran and whole-grain products. In addition to the foods naturally high in phosphorus, many processed foods are high in the mineral due to phosphate additives. It's important to read food labels and avoid foods with phosphates listed on the ingredient list; you can consult your dietician for assistance with identifying hidden phosphorus. Because so many foods contain phosphorus, it is often extremely difficult to control phosphorus levels with diet and dialysis alone. Your doctor will more than likely prescribe a medication called a phosphate binder, such as Tums®, Renvela®, or PhosLo®.
These medications should be taken with all meals and snacks (taken before you eat) because they bind with phosphorus and remove it from your body in the stool. Your dietician will have valuable information on food alternatives that are lower in phosphorus, as well as tips on limiting your phosphorus intake.

Almost 99% of the calcium (Ca) in the body is in bones and teeth. The rest is found in blood and soft tissues. The body uses calcium to build and maintain strong bones and teeth, to help muscles contract and relax during normal body movement, to transmit nerve impulses, to make the blood clot normally, to regulate cell division and cell multiplication, and to assist with enzyme reactions within the body. Vitamin D and parathyroid hormone (PTH) help manage how much calcium is absorbed for use by the body and how much is eliminated by the kidneys. Healthy kidneys turn vitamin D into an active hormone called calcitriol, which helps increase calcium absorption from the intestines into the bloodstream. When your kidney function has been compromised (i.e., by chronic kidney disease or kidney failure), these processes have to be managed through diet, medications, and dialysis.

Chronic kidney disease (CKD) causes certain imbalances in bone metabolism and increases the risk of a particular bone disease called renal osteodystrophy, which causes such symptoms as bone deformities, joint and bone pain, bone fractures, and decreased mobility. These imbalances can also cause calcium deposits in the blood vessels and contribute to heart disease. Your doctor will measure your calcium, phosphorus and PTH levels to determine what course of treatment may be required to stabilize your calcium levels. If calcium levels are low, a calcium supplement may be prescribed, or calcium-based phosphorus binders may be used to treat both low calcium and high phosphorus levels. When a dialysis patient gets his or her monthly lab results, a normal calcium level will range between 8.5 and 10.5 mg/dL. This will vary from patient to patient but should generally fall within this range.

When a person is on hemodialysis, it is necessary to be on a fluid restriction. It's important to follow this restriction very closely because it can help you feel more comfortable before, during and after dialysis treatments. It's true that dialysis gets rid of excess fluid; however, it's not as effective as healthy kidneys that work all day, every day. Most people on hemodialysis get treatments at least three times a week, for approximately three or more hours per treatment. Therefore, on the days between treatments, the body holds on to excess fluid. If you exceed the fluid allowances recommended by your doctor and dietician, there will likely be swelling, and your blood pressure will rise, which makes your heart work harder. Too much fluid can build up in the lungs, causing difficulty breathing and even congestive heart failure or pneumonia. There is a limit to how much fluid can be safely removed during a dialysis treatment, so if the fluid allowance is exceeded, an extra dialysis treatment may become necessary in order to remove all the extra fluid safely. Generally speaking, a person on dialysis should restrict their fluid intake to about 1 liter a day. This is equivalent to approximately 34-36 oz., or 1,000cc (1cc = 1ml). Also, keep in mind that anything that is liquid at room temperature counts as fluid intake: ice, ice cream/popsicles, Jell-O, and gravy, just to name a few.
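To make these numbers concrete, below is a minimal, illustrative Python sketch (not medical software) that compares monthly lab values against the ranges quoted in this article (potassium 3.5-5.0 mEq/L, phosphorus 3.5-5.0 mg/dL, calcium 8.5-10.5 mg/dL) and tallies a day's fluid sources against the roughly 1,000 ml allowance described above. The function names, the sample entries, and the ounce-to-milliliter conversion are assumptions added for illustration; your own targets should come from your doctor and dietician.

    # Illustrative sketch only -- not medical advice. The ranges and the
    # 1-liter allowance are the figures quoted in this article.
    LAB_RANGES = {
        "potassium (mEq/L)": (3.5, 5.0),
        "phosphorus (mg/dL)": (3.5, 5.0),
        "calcium (mg/dL)": (8.5, 10.5),
    }
    DAILY_FLUID_LIMIT_ML = 1000   # about 1 liter, i.e. roughly 34-36 oz (1 cc = 1 ml)
    OZ_TO_ML = 29.57              # approximate conversion

    def check_labs(results):
        """Flag any monthly lab value outside the article's quoted range."""
        for name, value in results.items():
            low, high = LAB_RANGES[name]
            status = "ok" if low <= value <= high else "OUT OF RANGE"
            print(f"{name}: {value} ({status}; target {low}-{high})")

    def fluid_total_ml(entries):
        """Sum a day's fluid sources. Anything liquid at room temperature
        (ice, popsicles, Jell-O, gravy) counts toward the total."""
        return sum(ml for _, ml in entries)

    # Hypothetical sample day and sample lab panel:
    day = [("coffee, 6 oz", 6 * OZ_TO_ML), ("ice chips", 120),
           ("soup", 240), ("Jell-O", 120)]
    total = fluid_total_ml(day)
    print(f"Fluid so far: {total:.0f} ml of {DAILY_FLUID_LIMIT_ML} ml "
          f"({DAILY_FLUID_LIMIT_ML - total:.0f} ml remaining)")
    check_labs({"potassium (mEq/L)": 5.4,
                "phosphorus (mg/dL)": 4.2,
                "calcium (mg/dL)": 8.9})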
Some of the complications that can result from ingesting too much fluid between treatments are:
- High blood pressure, or hypertension
- Sudden drops in blood pressure during a hemodialysis treatment
- Shortness of breath
- Swelling, particularly in the feet, hands, and even the face
- Fluid buildup in the lungs
- Heart problems, such as a weakened heart muscle and/or an enlarged heart

The Need for a Restricted Diet

Doctors and dieticians strongly recommend that dialysis patients follow a restricted diet. It isn't meant as a form of punishment or torture. Rather, it's meant to improve patients' quality of life and keep them healthier on dialysis. The diet helps make dialysis treatments more effective, helps you feel your best, and helps you avoid other health complications. The dialysis diet includes a balance of nutrients that will keep your body healthy and strong while allowing the levels of potassium, phosphorus, calcium, sodium and fluids to remain safe. This improves the clearance achieved by each dialysis treatment, meaning that fluid and other waste products are removed efficiently.

Follow Your Diet

This content is accurate and true to the best of the author's knowledge and does not substitute for diagnosis, prognosis, treatment, prescription, and/or dietary advice from a licensed health professional. Drugs, supplements, and natural remedies may have dangerous side effects. If pregnant or nursing, consult with a qualified provider on an individual basis. Seek immediate help if you are experiencing a medical emergency.
We write in advance of your visit to Africa to highlight key human rights concerns relevant to the Great Lakes region, Liberia, and in your meetings with the African Union, with suggested action points.

Great Lakes region

Democratic Republic of Congo (DRC)

Despite the dramatic shift in alliances in eastern Congo in recent months and an improvement in relations between Congo and Rwanda, and between Congo and Uganda, the fundamental problems have not improved. Peace, safety, and stability for individual Congolese remain as elusive as ever. The humanitarian and human rights situation in eastern Congo remains desperate, with marginal improvements in some areas offset by sharp deteriorations in others. More than 1.4 million people are still displaced, of whom 250,000 in North Kivu have fled violence since January 2009.

Human Rights Watch has long documented brutal attacks in the Kivu region against civilians by two Rwandan militia groups, the Democratic Forces for the Liberation of Rwanda (FDLR) and the Rally for Unity and Democracy (RUD). The most recent attacks seem to be a deliberate response to military operations against these groups by the Congolese and Rwandan armies. UN peacekeepers are now supporting follow-up military operations, known as Operation Kimia II, but the reprisal attacks on civilians continue. Civilian protection is never easy to manage in the face of such tactics, but efforts to date by government forces and the UN have been insufficient.

The Congolese army (FARDC) is also a big part of the problem. Ill-disciplined, poorly led, and badly paid, FARDC soldiers in eastern DRC have killed dozens of civilians, raped women and girls, and burned hundreds of homes indiscriminately this year. Civilians tell us that they fear the FARDC as much as the Rwandan militias. Human Rights Watch research shows that the number of women and girls who are victims of rape has increased dramatically since January 2009. In Lubero territory (North Kivu), the FARDC is responsible for over half of these assaults.

The UN mission, MONUC, has a difficult job. It is the world's largest peacekeeping operation, but it is operating in one of the most difficult environments. At the same time, UN peacekeepers working alongside the FARDC in Operation Kimia II have been unable to stop Congolese soldiers from committing many serious abuses, nor have they demanded (as a pre-condition for their cooperation) the removal of known human rights abusers from Congolese ranks. Bosco Ntaganda, wanted under an International Criminal Court (ICC) arrest warrant for war crimes, even has a leadership role in these operations, as confirmed by Congolese army documents leaked last month to Reuters and the BBC. So does Jean-Pierre Biyoyo, who was found guilty by a Congolese military court in March 2006 of recruiting children into a militia. It is unacceptable for the UN and the Security Council to tolerate abusers in such positions: it entrenches a culture of impunity, undermines MONUC's role in promoting justice, and makes the Council complicit in putting civilians at risk.

MONUC has internal challenges. But it is also overstretched and under-resourced. The 3,000 additional peacekeepers authorized and promised by the Security Council in November 2008 are still not deployed. Desperately needed helicopters, rapid-reaction, and intelligence capabilities remain elusive.
On April 9 in New York, Alan Doss, the head of MONUC, warned the Council that without such assets MONUC's "capacity to respond quickly to emerging threats and protect civilians would be curtailed." Few Council members have shown leadership and committed meaningful military forces of their own to the UN.

The failed attempt by the Congolese, South Sudanese, and Ugandan People's Defence Forces (UPDF) to apprehend the Lord's Resistance Army (LRA) leaders in northern Congo in December 2008 had devastating consequences for the local population. A January 2009 Human Rights Watch fact-finding mission to the area documented the LRA's brutal "reprisal" attacks, including the killing of at least 865 civilians and the abduction of at least 160 children. We also documented the inadequate attention paid to civilian protection before, during, and after the military operation. MONUC has since deployed troops to the Dungu area and other locations, but these measures are still insufficient for the task required. Abuses by the LRA continue, including at least 11 civilians killed and 24 abducted in the first two weeks of April 2009. Over 200,000 civilians have been displaced in Orientale province since December. The Security Council has an important role in ensuring that LRA abuses remain on the international agenda, that civilian protection is addressed, and that justice for victims is guaranteed. If it does not, civilians in Congo and neighboring countries will continue to be at risk, especially in Sudan and the Central African Republic.

Human Rights Watch supports efforts to disarm Rwandan and other militias. But simply exchanging one set of abuses and displacement in Congo for another is not a long-term solution. High-level political fixes between the region's governments do not guarantee improvement on the ground. We urge the Security Council to ensure that:

- Those responsible for serious human rights abuses are held to account. Members of foreign armed groups, local militias, or the Congolese army who commit or order attacks on civilians should be prosecuted for war crimes. The Council should insist that all military operations in Congo respect international human rights and humanitarian law and that MONUC cease cooperation with the FARDC and foreign armies if serious violations recur.
- MONUC has the troops and resources it needs to protect civilians. Council members could show a lead by offering the desperately needed helicopters, rapid-reaction, and logistical and intelligence support. In the case of LRA-affected communities, this means giving priority to the protection of populations at greatest risk, namely Faradje, Duru, Banda, and especially Doruma.
- Known human rights abusers are immediately removed from Operation Kimia II, and future MONUC operational support is conditioned on their arrest. Ntaganda and Biyoyo should be arrested first, but arrests should not end with them. A credible vetting mechanism for the Congolese army and police should also be implemented.
- A detailed, properly resourced, and transparent strategy for civilian protection is developed and made central to all UN-supported military operations. MONUC should reach out to civil society and humanitarian actors to ensure that the strategy follows best practice and international humanitarian law. It might include safe zones for civilians, better communication with affected communities, providing escorts for civilians, and delivering humanitarian aid.
- MONUC puts in place a mobile monitoring unit with clear benchmarks to regularly report on and review Operation Kimia II's adherence to international humanitarian law. Reports from this unit should be reviewed regularly at the highest level, and timely actions taken.
- A clear and properly resourced international strategy is developed to apprehend and bring to justice LRA commanders wanted by the ICC, plus others implicated in war crimes and crimes against humanity.
- UN civilian teams with strong human and children's rights components are deployed to LRA-affected areas around Dungu, to monitor abuses by all parties to the conflict.

Rwanda

Fifteen years after the genocide, President Paul Kagame's government is tightening controls on political space, civil society, and the media. Opposition to government policies often leads to official accusations of "genocide ideology," based on a vaguely defined law that requires no link to any genocidal act or incitement to violence. Instead of minimizing ethnic divisions, the law merely forces the issue of ethnicity in Rwanda beneath the surface. The International Criminal Tribunal for Rwanda (ICTR) and extradition courts in France, Germany, and the United Kingdom have ruled that the law is one reason why an accused may not receive a fair trial in Rwanda.

Another key issue affecting long-term reconciliation and stability in Rwanda is accountability for crimes committed before, during, and after the 1994 genocide. Those responsible for the genocide need to be held to account. But justice also means holding to account those members of the now-governing Rwandan Patriotic Front (RPF) who committed crimes in violation of international law. To date, Rwanda's ruling party has refused to cooperate with the ICTR even on the most serious cases, making it impossible for victims of RPF crimes to receive justice. Most Rwandans are left with a sense of one-sided, or victor's, justice.

The UN estimates that 25,000 to 45,000 people were killed by RPF soldiers between April and August 1994. Yet, until 2008, only 32 RPF soldiers had been brought to trial for crimes committed against civilians in 1994, with 14 found guilty and given light sentences. In June 2008, Rwanda nominally charged four military officers with war crimes for the killing of 15 civilians, including 13 clergy. However, the RPF-appointed military prosecutor did not vigorously pursue the case on the basis of the evidence in the government's possession concerning those responsible for the crime. The June 2008 case did nothing to enhance Rwanda's reputation for judicial impartiality. In jurisdictions beyond its borders, Rwanda has also aggressively sought to avert prosecution of its soldiers, by impeding the travel of witnesses for genocide trials at the ICTR in 2002 (forcing the suspension of several trials for months) and later breaking diplomatic relations with countries whose prosecutors issued or implemented warrants for RPF officials.

RPF crimes are not equivalent to genocide. But the rights of the victims are equivalent: all have the right to justice and redress, regardless of the nature of the crime and regardless of their ethnic and political affiliation and that of the alleged perpetrator. Without some discussion of, and accountability for, the atrocities committed in 1994 by all sides, long-term reconciliation and peace in Rwanda will remain elusive, and stability in the Great Lakes region could be undermined.
The Security Council established the ICTR to ensure that those responsible for heinous crimes be prosecuted. It has demanded in several resolutions that all parties cooperate with the Tribunal and that the ICTR Prosecutor's office investigate crimes committed by all sides. Yet the Council has consistently allowed the Rwandan authorities to ignore their obligations. We urge the Security Council to:

- Encourage President Kagame to acknowledge that the RPF committed war crimes in 1994 and to allow victims of those crimes the right to obtain redress.
- Challenge President Kagame to allow the ICTR to do its work unimpeded.
- Press the ICTR's chief prosecutor, Hassan Jallow, to forge ahead with RPF indictments when he briefs the Council in New York on June 4.

Somalia

In your meetings with African Union officials in Addis Ababa, Somalia will be a key concern. Everyone wants an end to the humanitarian and human rights catastrophe in that country, but this means learning the lessons of the past. These show that misguided external interventions, shoring up unrepresentative or ineffective leaders in the Transitional Federal Government (TFG), and financing abusive security forces merely repeat previous cycles of violence.

For example, recent international interventions in Somalia's security sector have exacerbated problems rather than eased them. In 2007, donors began training and other assistance for TFG police forces through the United Nations Development Programme (UNDP). This included direct financial support for police salaries. However, during that period, the police were implicated in serious human rights abuses, including the indiscriminate killing of civilians during combat operations, arbitrary detention of civilians in Mogadishu to extort ransom payments from their families, looting, armed robbery, and murder.

At a donors' conference in April 2009 in Brussels, the United States, the European Union, and others pledged increased resources for TFG security forces and the African Union Mission in Somalia (AMISOM). While efforts to bolster security are a justifiable priority, governments promoting them need to recognize that deeply entrenched patterns of impunity for serious abuses have been a primary cause of violence in Somalia over the long term. The conference paid insufficient attention to these issues. Rather than blindly bolstering existing institutions and the TFG per se, the Security Council and others should focus on improving security for Somali civilians. We urge the Security Council to ensure that:

- Action is taken to make the security forces more accountable: the removal of police commissioner Abdi Qeybdid, who has a history of human rights abuses, would be a positive start.
- Effective vetting mechanisms are set up for anyone being considered for recruitment to any official security organization.
- Effective human rights components are part of all security forces' training.
- An independent mechanism is set up to investigate alleged abuses by TFG and AMISOM personnel.
- A UN Commission of Inquiry into human rights abuses in the country is established. Such an inquiry need not have the backing of the parties to the conflict and could begin its work outside the country by interviewing the hundreds of thousands of Somalis in external refugee camps.
- Somalia's terrible humanitarian crisis is addressed urgently - 1.2 million Somalis were displaced from their homes as of March 2009, and 3.25 million need humanitarian assistance.
Human Rights Watch has reported extensively on the failure to protect the rights of Somalis seeking refuge in Kenya.

Sudan

Also in Addis, the Security Council will discuss its approach to Sudan with its African Union counterparts, including the arrest warrant for Sudanese President Omar al-Bashir and the need for the Sudanese government to cooperate with the ICC. Human Rights Watch hopes that the Council will explain that the horrific abuses in Darfur committed by forces directed by the Khartoum government led the UN to establish a Commission of Inquiry in 2004. Council members could further explain that the gravity of the crimes and the inadequacy of domestic Sudanese accountability mechanisms - as documented by the Commission of Inquiry - became the basis for the Council's referral of the Darfur situation to the ICC in Resolution 1593 in 2005. The Council should add that Resolution 1593 obliges Sudan to cooperate with the ICC even though the country is not a party to the Rome Statute.

We also call on the Council to highlight that the ICC is an independent judicial institution and that Sudanese government obstruction of the Court's warrants for two other Sudanese officials, plus the recent expulsion of humanitarian aid agencies from Darfur, shows Khartoum's flagrant disregard for the victims in Darfur and international humanitarian law. The Council should also debunk the notion that peace efforts in the region are being undermined by these warrants: the fact is that peace has eluded Darfur for years, and the region is no closer now to a durable peace for the victims of the violence. Similarly, there is no evidence that the 2005 Comprehensive Peace Agreement between the National Congress Party and the Sudan People's Liberation Movement is at risk of unraveling due to the ICC warrants. Finally, the Council should make clear that - even though it has the authority - it will not consider any deferral of the arrest warrant for President al-Bashir. Given the record of the al-Bashir government, such a decision would clearly set an appalling precedent in international justice.

Liberia

The government of President Ellen Johnson Sirleaf has made tangible progress in addressing endemic corruption, creating the legislative framework for respect for human rights, and facilitating economic growth, but little headway in strengthening the rule of law. Frequent incidents of violent crime, mob and vigilante justice, and bloody land disputes continue to claim lives and expose the systemic and persistent weaknesses within the police, judiciary, and corrections sectors. The disappointing progress in these sectors, five years after the end of armed conflict, highlights the fragility of the security situation in Liberia.

Human Rights Watch is concerned about the Liberian government's failure to establish three key commissions which we believe could both reinforce respect for the rule of law and address issues which gave rise to Liberia's brutal armed conflicts: the Independent National Commission on Human Rights, the Law Reform Commission, and the Land Reform Commission. We are also concerned about the July 2008 passage of a law that allowed for the death penalty for certain offenses. The legislation, passed in response to high rates of violent crime, contravened Liberia's obligations under the Second Optional Protocol to the International Covenant on Civil and Political Rights.
Human Rights Watch is also concerned about the lack of a national strategy for holding to account perpetrators of war crimes and crimes against humanity during Liberia's brutal armed conflicts (1989-1996 and 1999-2003). Liberia's armed conflicts were characterized by the commission of widespread and systematic violations of international humanitarian law. The gravity of these abuses - massacres, mutilations, sexual violence, the recruitment and use of children as soldiers - has been tragically illuminated during the ongoing public hearings of the Liberian Truth and Reconciliation Commission (TRC). While the TRC - empowered to recommend for prosecution the most serious offenders - has made significant progress chronicling a record of abuses, there appears to be little discussion by Liberian or international actors about how to hold perpetrators of war crimes and crimes against humanity to account. Human Rights Watch believes that the many victims of these unspeakable crimes deserve justice for what they have suffered, and that prosecutions of the most serious crimes committed would go a long way toward consolidating and firmly anchoring respect for the rule of law in Liberia.

During its discussions with representatives of Liberia's government and civil society, we therefore request that the Security Council urge the Liberian government to:

- Establish without further delay the Independent National Human Rights Commission, the Law Reform Commission, and the Land Reform Commission.
- Repeal the July 2008 law that allows for the death penalty for certain offenses.
- Act to ensure accountability for past human rights violations, and also develop a strategy for prosecuting those allegedly responsible for the most egregious crimes. Given the persistent weaknesses in the Liberian justice system, international support will be necessary to ensure justice for these crimes.

We are available to answer any questions or requests you may have. We wish you a successful trip.

Executive Director, Africa Division
UN Advocacy Director
Adam Adler: Protecting your organization from Malware

Adam Adler (Miami, Florida): Malicious software (also known as 'malware') is software or web content that can harm your organization, such as the recent WannaCry outbreak. The most well-known form of malware is the virus, a self-copying program that infects legitimate software.

What is malware?

Malware is malicious software which, if able to run, can cause harm in many ways, including:

- causing a device to become locked or unusable
- stealing, deleting, or encrypting data
- taking control of your devices to attack other organizations
- obtaining credentials which allow access to your organization's systems or services that you use
- using services that may cost you money (e.g. premium rate phone calls)

Tip 1: Install (and turn on) antivirus software

Antivirus software - which is often included for free within popular operating systems - should be used on all computers and laptops. For your office equipment, you can pretty much click 'enable', and you're instantly safer. Smartphones and tablets might require a different approach and, if configured appropriately, separate antivirus software might not be necessary.

Tip 2: Prevent staff from downloading dodgy apps

You should only download apps for mobile phones and tablets from manufacturer-approved stores (like Google Play or the Apple App Store). These apps are checked to provide a certain level of protection from malware that might cause harm. You should prevent staff from downloading third-party apps from unknown vendors/sources, as these will not have been checked.

Staff accounts should only have enough access required to perform their role, with extra permissions (i.e. for administrators) only given to those who need them. When administrative accounts are created, they should only be used for that specific task, with standard user accounts used for general work.

Tip 3: Keep all your IT equipment up to date (patching)

For all your IT equipment (tablets, smartphones, laptops, and PCs), make sure that the software and firmware are always kept up to date with the latest versions from software developers, hardware suppliers, and vendors. Applying these updates (a process known as patching) is one of the most important things you can do to improve security - the IT version of eating your fruit and veg. Operating systems, programs, phones, and apps should all be set to 'automatically update' wherever this is an option. At some point, these updates will no longer be available (as the product reaches the end of its supported life), at which point you should consider replacing it with a modern alternative.

Tip 4: Control how USB drives (and memory cards) can be used

We all know how tempting it is to use USB drives or memory cards to transfer files between organizations and people. However, it only takes a single cavalier user to inadvertently plug in an infected stick (such as a USB drive containing malware) to devastate the whole organization. When drives and cards are openly shared, it becomes hard to track what they contain, where they've been, and who has used them. You can reduce the likelihood of infection by:

- blocking access to physical ports for most users
- using antivirus tools
- only allowing approved drives and cards to be used within your organization - and nowhere else

Make these directives part of your company policy to prevent your organization from being exposed to unnecessary risks. You can also ask staff to transfer files using alternate means (such as by email or cloud storage), rather than via USB.
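To make the 'approved drives only' rule concrete, here is a minimal Python sketch of the kind of check an endpoint agent could perform before mounting removable media. The serial numbers and the approved list are hypothetical; in practice this is enforced with OS-level policy or endpoint-management tooling rather than a standalone script.

```python
# Conceptual sketch: allow-listing removable media by device serial number.
# The serial numbers below are made up for illustration.
APPROVED_DRIVES = {
    "4C530001230987": "IT-issued transfer stick #1",
    "4C530001230988": "IT-issued transfer stick #2",
}

def should_mount(device_serial: str) -> bool:
    """Return True only for drives the organization has approved."""
    return device_serial in APPROVED_DRIVES

def on_usb_inserted(device_serial: str) -> None:
    if should_mount(device_serial):
        print(f"Mounting approved drive: {APPROVED_DRIVES[device_serial]}")
    else:
        # Block-by-default: unknown drives are rejected and logged for review.
        print(f"BLOCKED unapproved drive {device_serial!r}; event logged")

on_usb_inserted("4C530001230987")  # approved
on_usb_inserted("DEADBEEF000001")  # blocked
```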
Tip 5: Switch on your firewall

A firewall creates a 'buffer zone' between your own network and external networks (such as the Internet). Most popular operating systems now include a firewall, so it may simply be a case of switching this on.

Prevent malware from being delivered and spreading to devices

You can reduce the likelihood of malicious content reaching your devices through a combination of:

- filtering to only allow file types you would expect to receive
- blocking websites that are known to be malicious
- actively inspecting content
- using signatures to block known malicious code

These are typically done by network services rather than users' devices. Examples include:

- mail filtering (in combination with spam filtering), which can block malicious emails and remove executable attachments; NCSC's Mail Check platform can also help with this
- intercepting proxies, which block known-malicious websites
- internet security gateways, which can inspect content in certain protocols (including some encrypted protocols) for known malware
- safe browsing lists within your web browsers, which can prevent access to sites known to be hosting malicious content

Public sector organizations are encouraged to subscribe to the NCSC Protective DNS service. This will prevent users from reaching known malicious sites.

Ransomware is increasingly being deployed by attackers who have gained access remotely via exposed services such as Remote Desktop Protocol (RDP) or unpatched remote access devices. To prevent this, organizations should:

- enable MFA at all remote access points into the network, and enforce IP allow listing using hardware firewalls
- use a VPN that meets NCSC recommendations for remote access to services; Software as a Service or other services exposed to the internet should use Single Sign-On (SSO) where access policies can be defined (for more information, read our blogpost on protecting management interfaces)
- use the least privilege model for providing remote access - use low privilege accounts to authenticate, and provide an audited process to allow a user to escalate their privileges within the remote session where necessary
- patch known vulnerabilities in all remote access and external-facing devices immediately (referring to our guidance on how to manage vulnerabilities within your organization if necessary), and follow vendor remediation guidance, including the installation of new patches as soon as they become available

Prevent malware from running on devices

A 'defense in depth' approach assumes that malware will reach your devices. You should therefore take steps to prevent malware from running. The measures required will vary for each device type, OS, and version, but in general, you should look to use device-level security features.
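Before turning to device-level controls, here is a conceptual Python sketch of the network-level filtering described above: a mail gateway that allows only expected file types, strips executables, and checks attachments against a signature list. The extension sets and the hash entry are illustrative assumptions, not a real signature feed.

```python
# Sketch of mail-gateway attachment filtering: allow only expected file types,
# drop executable attachments, and check content hashes against a signature
# list. All extensions and hashes below are illustrative.
import hashlib

ALLOWED_EXTENSIONS = {".pdf", ".docx", ".xlsx", ".png", ".jpg", ".txt"}
BLOCKED_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".ps1"}

# Hypothetical list of known-malicious file hashes (SHA-256).
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verdict(filename: str, content: bytes) -> str:
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in BLOCKED_EXTENSIONS:
        return "strip attachment (executable)"
    if ext not in ALLOWED_EXTENSIONS:
        return "quarantine (unexpected file type)"
    if hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES:
        return "block (matches known-malware signature)"
    return "deliver"

print(verdict("invoice.pdf", b"%PDF-1.7 ..."))  # deliver
print(verdict("update.exe", b"MZ..."))          # strip attachment
```

With filtering at the network layer in place, the next layer of defense is the device itself.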
Organizations should:

- centrally manage devices in order to only permit applications trusted by the enterprise to run on devices, using technologies such as AppLocker, or from trusted app stores (or other trusted locations)
- consider whether enterprise antivirus or anti-malware products are necessary, and keep the software (and its definition files) up to date
- provide security education and awareness training to your people, for example, NCSC's Top Tips for Staff
- disable or constrain scripting environments and macros, by:
  - enforcing PowerShell Constrained Language mode via a User Mode Code Integrity (UMCI) policy - you can use AppLocker as an interface to UMCI to automatically apply Constrained Language mode
  - protecting your systems from malicious Microsoft Office macros
- disable autorun for mounted media (and prevent the use of removable media if it is not needed)

In addition, attackers can force their code to execute by exploiting vulnerabilities in the device. Prevent this by keeping devices well-configured and up to date. We recommend that you:

- install security updates as soon as they become available in order to fix exploitable bugs in your products
- enable automatic updates for OSs, applications, and firmware if you can
- use the latest versions of OSs and applications to take advantage of the latest security features
- configure host-based and network firewalls, disallowing inbound connections by default

Prepare for an incident

Malware attacks, in particular ransomware attacks, can be devastating for organizations because computer systems are no longer available to use, and in some cases, data may never be recovered. If recovery is possible, it can take several weeks, but your corporate reputation and brand value could take a lot longer to recover. The following will help to ensure your organization can recover quickly:

- Identify your critical assets and determine the impact to these if they were affected by a malware attack.
- Plan for an attack, even if you think it is unlikely. There are many examples of organizations that have been impacted by collateral malware, even though they were not the intended target.
- Develop an internal and external communication strategy. It is important that the right information reaches the right stakeholders in a timely fashion.
- Determine how you will respond to the ransom demand and the threat of your organization's data being published.
- Ensure that incident management playbooks and supporting resources such as checklists and contact details are available if you do not have access to your computer systems.
- Identify your legal obligations regarding the reporting of incidents to regulators, and understand how to approach this.

Finally, exercise your incident management plan. This helps clarify the roles and responsibilities of staff and third parties, and helps prioritize system recovery.
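A tabletop exercise is easier to run when the playbook itself is written down in a reviewable form. The sketch below models a minimal ransomware playbook as data; the steps, owners, and the offline-copy rule are hypothetical placeholders rather than prescribed guidance.

```python
# Hypothetical incident playbook modeled as data, so it can be versioned,
# reviewed, and walked through in tabletop exercises.
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    owner: str          # role responsible, not a named individual
    offline_copy: bool  # must remain usable if IT systems are down

PLAYBOOK = [
    Step("Disconnect infected hosts from all networks", "IT operations", True),
    Step("Notify incident lead and start comms plan", "Incident manager", True),
    Step("Decide ransom-demand response per policy", "Executive team", True),
    Step("Assess legal/regulatory reporting duties", "Legal/compliance", True),
    Step("Restore critical services from clean backups", "IT operations", False),
]

# A quick audit used during exercises: every step that must survive an outage
# should also exist as a printed/offline checklist.
for step in PLAYBOOK:
    flag = "OFFLINE COPY REQUIRED" if step.offline_copy else ""
    print(f"[{step.owner}] {step.action} {flag}")
```

Walking through a playbook like this quickly surfaces the practical recovery questions that follow.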
For example, if a widespread ransomware attack meant a complete shutdown of the network was necessary, you would have to consider:

- how long it would take to restore the minimum required number of devices from images and re-configure them for use
- how you would rebuild any virtual environments and physical servers
- what processes need to be followed to restore servers and files from your backup solution
- what processes need to be followed if onsite systems and cloud backup servers are unusable, and you need to rebuild from offline backups
- how you would continue to operate critical business services

After an incident, revise your incident management plan to include lessons learned, so that the same event cannot occur in the same way again.

Steps to take if your organization is already infected

If your organization has already been infected with malware, these steps may help limit the impact:

1. Immediately disconnect the infected computers, laptops, or tablets from all network connections, whether wired, wireless, or mobile phone-based. In a very serious case, consider whether turning off your Wi-Fi, disabling any core network connections (including switches), and disconnecting from the internet might be necessary.
2. Reset credentials, including passwords (especially for administrator and other system accounts) - but verify that you are not locking yourself out of systems that are needed for recovery.
3. Safely wipe the infected devices and reinstall the OS.
4. Before you restore from a backup, verify that it is free from any malware. You should only restore from a backup if you are very confident that the backup and the device you're connecting it to are clean.
5. Connect devices to a clean network in order to download, install, and update the OS and all other software.
6. Install, update, and run antivirus software.
7. Reconnect to your network.
8. Monitor network traffic and run antivirus scans to identify if any infection remains.
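The advice to verify a backup before restoring can be partly automated with an integrity manifest: record a hash of every file when the backup is taken, and confirm the hashes still match before restoring. A minimal sketch, assuming the manifest itself is kept somewhere the attacker cannot modify (for example, offline):

```python
# Sketch: verify backup integrity against a hash manifest before restoring.
# Assumes the manifest was written at backup time and kept offline/immutable.
import hashlib
import json
import os

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(backup_dir: str) -> dict:
    """Run at backup time; store the result offline."""
    return {
        os.path.relpath(os.path.join(root, name), backup_dir):
            sha256_of(os.path.join(root, name))
        for root, _, files in os.walk(backup_dir)
        for name in files
    }

def verify(backup_dir: str, manifest: dict) -> list:
    """Run before restore; returns files that changed or disappeared."""
    problems = []
    for rel_path, expected in manifest.items():
        full = os.path.join(backup_dir, rel_path)
        if not os.path.exists(full) or sha256_of(full) != expected:
            problems.append(rel_path)
    return problems

# Usage at restore time (paths are examples):
# manifest = json.load(open("manifest.json"))  # fetched from offline storage
# bad = verify("/mnt/backup", manifest)
# if bad: abort the restore and investigate.
```

Note that a matching manifest only shows the backup has not changed since it was taken; it does not prove the backup was clean in the first place, so an antivirus scan is still needed before restoring.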
Early learning is key to Kitzhaber's education strategy

Walking into Mrs. Maria Rodriguez's classroom at the Bethel Head Start in east Salem is a lot like walking into your best friend's cozy living room. Natural light pours in from the north wall windows that look out over sprawling farmland. The room itself is filled with children at small tables playing with different games that, at a glance, seem like they would be played just for fun: blocks, glass containers with colored sand, coloring stations and Play-Doh.

But look a little closer and it becomes apparent that the kids, ages 3-5, are not just playing around. They are stacking colored blocks into recognizable patterns, forming the Play-Doh into shapes and writing letters in the sand with their fingers. They're engaged in "purposeful choice time," said Stephanie Whetzel, student services coordinator for Early Childhood Programs in the Salem-Keizer School District. This time includes activities meant to enhance fine motor skills, communication and sharing.

Student projects also mark the walls and shelves throughout the classroom — and one hanging above the carpeted area opposite the windows seems to display the effectiveness of these activities. The project tracks the students' progress in drawing self-portraits. White pieces of paper mounted on black cardstock show portraits drawn by the students in September and January. In September, a student named Melissa scribbled messy lines and egg-shaped circles across the entire sheet of paper. Four months later, she drew what was clearly a person with stick legs, eyes, a nose and hair.

"With our self-portraits, we are definitely working on fine motor skills. Students are also learning about what people look like, how people are different and how they are alike. They are developing basic math skills such as shape recognition and understanding the part-to-whole relationship. They are naming facial features and are learning to follow steps in a sequence," Whetzel said.

This classroom and its pre-elementary school instruction are at the heart of Gov. John Kitzhaber's focus on early childhood education. Invest more now; reap the benefits later. In other words, his strategy is to bolster early childhood learning with the intent of increasing the state's academic performance, preparing more students for college and careers and decreasing future spending on prisons and social services.

Proposing the investment of $135 million into Oregon's early learning programs, Kitzhaber has positioned early childhood education as a hot topic for this legislative session. With years of research supporting early learning's effectiveness, some wonder why this issue is only now gaining traction. But the governor's education policy adviser and leaders in the state's early learning initiatives said the conversations about early childhood education now swirling in the Legislature and communities are a result of his emphasis on the subject.

Kitzhaber's education goals have been clearly detailed. Ten years from now: 95 percent of third-graders reading at grade level, 100 percent high school graduation rate, and 40-40-20 — which says 40 percent of adult Oregonians will have at least a baccalaureate degree, 40 percent will have an associate degree or career certificate, and the remaining 20 percent will have a high school diploma or equivalent. One of the governor's strategies to achieve this goal is investing in an education that begins at birth.
The American Educational Research Association wrote in a 2005 article that it was becoming widely accepted that quality early childhood education helps prepare students for school and helps decrease racial and ethnic achievement gaps. Studies found that at-risk children who participated in high-quality programs had better language and cognitive skills in their first few years of school than their counterparts who did not participate. They tended to score higher on math and reading tests and were less likely to repeat a grade, drop out and get in trouble with the law. Economic analyses indicated every $1 invested in early childhood learning generates a return to society of $3 to $17 because of reduced costs in special education and crime rates, and increased adult earnings and tax revenues, according to the article.

Such research appears to fuel Kitzhaber's strategy of increased emphasis on early childhood education. In December, he told the Statesman Journal editorial board that the early years are the "one investment we ought to make." Now is the time, he said, to make the investments that will accomplish this vision. He called on not only education organizations but also taxpayers, nonprofits, schools and civic leaders to put early childhood and early elementary education ahead of their own interests. As a result, Oregon would spend less down the road on prisons, social services, educational remediation and other programs.

The governor's budget proposed that $135 million go to early learning: $25 million for early years to kindergarten, specifically to support students and families through Early Learning Hubs; $25 million for birth to age 3 to invest in early screening for children's health and wellness; and $85 million for quality child care and preschool, namely to make it more accessible and improve quality. Education policy adviser Dani Ledezma said these investments are up significantly from the 2013-15 budget proposal.

So why now? If the benefits of early childhood education have been known since the 1960s, why does it now seem to be a top priority? Ledezma said that while there has been a growing recognition in the state of the importance of early learning, Kitzhaber's emphasis on it is not new, but rather a scaling-up of work that began when he stepped in as governor.

"I would say the proposed investments really build upon work that has been happening since he's been in office. The second time around, he worked to expand the education system to be beyond K-12. He really did a lot of work by establishing the Early Learning Council and the Early Learning Hubs," Ledezma said. "This investment is really sort of building upon work that's been done, and it's the governor's attempt to scale up these practices he's been committed to since he's been in office."

Nancy Golden, chief education officer, and Megan Irwin, Early Learning Division director, said it is part of their jobs to go out into communities and share the state's vision for early learning collaboration and development. "We've been really building momentum toward where we're at now," Irwin said. She said early childhood education was disorganized before the governor established it as a priority. "We spent four years working to coordinate a system," she said.

Whetzel said it took time to get influential voices behind the topic. This is the first time, after nine years in her position, that she has seen Head Start begin to build an even stronger bridge between kindergarten and pre-K programs.
Head Start is a pre-kindergarten program that provides services to low-income families. "Part of it is, everyone can get behind 3- and 4-year-olds. ... You need someone behind them who can control the dollars," Whetzel said. "The difference is we need someone at a high-enough level who can help control the funding pieces." She said the benefits and availability of early learning resources have become more common knowledge. "Part of it is communication out to families about what resources there are," Whetzel said.

Mark Girod, dean of the College of Education at Western Oregon University, said that it's time the public recognizes the impact of early education and that the reason it seems to be a legislative priority is likely because of the Oregon Education Investment Board's task of creating a seamless system. (Girod is a distant relative of Sen. Fred Girod, R-Stayton.) "There's finally clear recognition about the powerful research that has come out about the efficacy of early education," Girod said. "Clearly its time is overdue." The research, he agreed, is not new. "If you care well for your youngest citizens, good things happen. Crime rates go down; education completion rates go up."

Western has seen growth in its new early childhood education program. The 3-year-old program houses about 50 students — although the university offered courses in early learning before the comprehensive program was established, Girod said. A couple of years ago, federal guidelines about the level of education required to be a teacher in federally supported child care programs shifted from a two-year degree to a four-year degree. By September 2013, 50 percent of Head Start teachers throughout the country were required to have a bachelor's degree in early childhood education or a related degree with early childhood education courses, according to Head Start's Early Childhood Learning and Knowledge Center. As a result, Western has seen the return of early learning teachers seeking to complete degree requirements, Girod said.

It's unclear whether Kitzhaber's vision for early childhood education will come to fruition as a result of the legislative session. "There is a lot of need in Oregon," Irwin said. "There are a lot of priorities that legislators have to balance." Any time you ask people to do their work differently, there will be challenges, Irwin and Golden said. "It's a big ask," Irwin said.

"We see a lot of support — certainly around the need to expand affordable day care options," Ledezma said. "I see a lot of support, and I see that what communities have been talking about is really resonating not only in our office, but the Legislature. There are certainly lots of champions in the Legislature for early learning."

Sue Hildick, president of the Chalkboard Project, an independent education group aiming to make Oregon's K-12 public schools the best in the country, said it's a great vision, but there are questions about how the vision will be implemented and developed. "We're supportive, and we think there needs to be an implementation strategy," Hildick said.

Chuck Bennett, director of government relations at the Confederation of Oregon School Administrators, forecasts a good deal of discourse about early learning at the Legislature, but he doesn't anticipate major opposition. "The governor has outlined a plan I think is going to move along," he said.
Particular details of the governor's vision that the Confederation of Oregon School Administrators supports include making sure that school districts can participate as providers of pre-kindergarten services. In terms of the budget, Bennett said, it's likely there will be a discussion because of how much has been proposed. "There is always going to be a discussion to any large amount of funding."

Education bills to watch

SB 213: Requires the Early Learning Council to develop metrics for funding Early Learning Hubs and allows the council to require matching funds from hubs that receive support.

SB 214: Makes calculation of total days membership for kindergarteners contingent upon an early reading program; establishes a Kindergarten Through Grade Three Reading Initiative program to help organizations working with school districts to implement early reading programs.

SB 215: Removes the sunset on the Oregon Education Investment Board.

HB 2015: Directs the Department of Human Services to adopt rules for subsidy programs for child care that allow at least one year of eligibility regardless of change of employment.

HB 2650: Directs the State Library to give grants to schools offering early reading programs during the summer.

HB 2801: Directs the Department of Education to give money under the Oregon Early Reading Program to nonprofits that provide literacy programs to school districts to increase the delivery of reading assistance to students in kindergarten through third grade.
How do Transformers work?

In this section, we will take a high-level look at the architecture of Transformer models.

A bit of Transformer history

Here are some reference points in the (short) history of Transformer models:

The Transformer architecture was introduced in June 2017. The focus of the original research was on translation tasks. This was followed by the introduction of several influential models, including:

- June 2018: GPT, the first pretrained Transformer model, used for fine-tuning on various NLP tasks and obtained state-of-the-art results
- October 2018: BERT, another large pretrained model, this one designed to produce better summaries of sentences (more on this in the next chapter!)
- February 2019: GPT-2, an improved (and bigger) version of GPT that was not immediately publicly released due to ethical concerns
- October 2019: DistilBERT, a distilled version of BERT that is 60% faster, 40% lighter in memory, and still retains 97% of BERT's performance
- May 2020: GPT-3, an even bigger version of GPT-2 that is able to perform well on a variety of tasks without the need for fine-tuning (called zero-shot learning)

This list is far from comprehensive, and is just meant to highlight a few of the different kinds of Transformer models. Broadly, they can be grouped into three categories:

- GPT-like (also called auto-regressive Transformer models)
- BERT-like (also called auto-encoding Transformer models)
- BART/T5-like (also called sequence-to-sequence Transformer models)

We will dive into these families in more depth later on.

Transformers are language models

All the Transformer models mentioned above (GPT, BERT, BART, T5, etc.) have been trained as language models. This means they have been trained on large amounts of raw text in a self-supervised fashion. Self-supervised learning is a type of training in which the objective is automatically computed from the inputs of the model. That means that humans are not needed to label the data!

This type of model develops a statistical understanding of the language it has been trained on, but it's not very useful for specific practical tasks. Because of this, the general pretrained model then goes through a process called transfer learning. During this process, the model is fine-tuned in a supervised way — that is, using human-annotated labels — on a given task.

An example of a task is predicting the next word in a sentence having read the n previous words. This is called causal language modeling because the output depends on the past and present inputs, but not the future ones. Another example is masked language modeling, in which the model predicts a masked word in the sentence.

Transformers are big models

Apart from a few outliers (like DistilBERT), the general strategy to achieve better performance is to increase the models' sizes as well as the amount of data they are pretrained on. Unfortunately, training a model, especially a large one, requires a large amount of data. This becomes very costly in terms of time and compute resources. It even translates to environmental impact, as can be seen in the following graph. And this is showing a project for a (very big) model led by a team consciously trying to reduce the environmental impact of pretraining. The footprint of running lots of trials to get the best hyperparameters would be even higher.

Imagine if each time a research team, a student organization, or a company wanted to train a model, it did so from scratch. This would lead to huge, unnecessary global costs!
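Both pretraining objectives can be tried directly with the Transformers library's pipeline API. A short sketch using two small public checkpoints (the first run downloads the weights):

```python
from transformers import pipeline

# Causal language modeling: continue a prompt using only past context.
generator = pipeline("text-generation", model="gpt2")
print(generator("In this course, we will teach you how to", max_length=20))

# Masked language modeling: fill in a hidden word using context on both sides.
unmasker = pipeline("fill-mask", model="distilroberta-base")
print(unmasker("This course will teach you all about <mask> models."))
```

Even these small checkpoints represent many GPU-hours of pretraining that you get for free by downloading the weights instead of training them yourself.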
This is why sharing language models is paramount: sharing the trained weights and building on top of already trained weights reduces the overall compute cost and carbon footprint of the community.

Pretraining is the act of training a model from scratch: the weights are randomly initialized, and the training starts without any prior knowledge. This pretraining is usually done on very large amounts of data. Therefore, it requires a very large corpus of data, and training can take up to several weeks.

Fine-tuning, on the other hand, is the training done after a model has been pretrained. To perform fine-tuning, you first acquire a pretrained language model, then perform additional training with a dataset specific to your task. Wait — why not simply train directly for the final task? There are a couple of reasons:

- The pretrained model was already trained on a dataset that has some similarities with the fine-tuning dataset. The fine-tuning process is thus able to take advantage of knowledge acquired by the initial model during pretraining (for instance, with NLP problems, the pretrained model will have some kind of statistical understanding of the language you are using for your task).
- Since the pretrained model was already trained on lots of data, the fine-tuning requires way less data to get decent results.
- For the same reason, the amount of time and resources needed to get good results is much lower.

For example, one could leverage a pretrained model trained on the English language and then fine-tune it on an arXiv corpus, resulting in a science/research-based model. The fine-tuning will only require a limited amount of data: the knowledge the pretrained model has acquired is "transferred," hence the term transfer learning. Fine-tuning a model therefore has lower time, data, financial, and environmental costs. It is also quicker and easier to iterate over different fine-tuning schemes, as the training is less constraining than a full pretraining.

This process will also achieve better results than training from scratch (unless you have lots of data), which is why you should always try to leverage a pretrained model — one as close as possible to the task you have at hand — and fine-tune it.

General architecture

In this section, we'll go over the general architecture of the Transformer model. Don't worry if you don't understand some of the concepts; there are detailed sections later covering each of the components.

The model is primarily composed of two blocks:

- Encoder (left): The encoder receives an input and builds a representation of it (its features). This means that the model is optimized to acquire understanding from the input.
- Decoder (right): The decoder uses the encoder's representation (features) along with other inputs to generate a target sequence. This means that the model is optimized for generating outputs.

Each of these parts can be used independently, depending on the task:

- Encoder-only models: Good for tasks that require understanding of the input, such as sentence classification and named entity recognition.
- Decoder-only models: Good for generative tasks such as text generation.
- Encoder-decoder models or sequence-to-sequence models: Good for generative tasks that require an input, such as translation or summarization.

We will dive into those architectures independently in later sections.

A key feature of Transformer models is that they are built with special layers called attention layers.
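In code, the pretraining/fine-tuning split looks like the following sketch: load a pretrained checkpoint, add a task-specific head, and train only on your (much smaller) labeled dataset. The checkpoint name matches the course's examples; the two-label setup is an arbitrary illustration.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The body of the model loads pretrained weights; the classification head on
# top is newly (randomly) initialized, and is what fine-tuning will train.
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2
)

inputs = tokenizer("This course is great!", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]) - meaningless until fine-tuned
```

What gives the pretrained body its usefulness across so many tasks is the attention mechanism inside every layer.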
In fact, the title of the paper introducing the Transformer architecture was "Attention Is All You Need"! We will explore the details of attention layers later in the course; for now, all you need to know is that this layer will tell the model to pay specific attention to certain words in the sentence you passed it (and more or less ignore the others) when dealing with the representation of each word.

To put this into context, consider the task of translating text from English to French. Given the input "You like this course", a translation model will need to also attend to the adjacent word "You" to get the proper translation for the word "like", because in French the verb "like" is conjugated differently depending on the subject. The rest of the sentence, however, is not useful for the translation of that word. In the same vein, when translating "this" the model will also need to pay attention to the word "course", because "this" translates differently depending on whether the associated noun is masculine or feminine. Again, the other words in the sentence will not matter for the translation of "this". With more complex sentences (and more complex grammar rules), the model would need to pay special attention to words that might appear farther away in the sentence to properly translate each word.

The same concept applies to any task associated with natural language: a word by itself has a meaning, but that meaning is deeply affected by the context, which can be any other word (or words) before or after the word being studied. Now that you have an idea of what attention layers are all about, let's take a closer look at the Transformer architecture.

The original architecture

The Transformer architecture was originally designed for translation. During training, the encoder receives inputs (sentences) in a certain language, while the decoder receives the same sentences in the desired target language. In the encoder, the attention layers can use all the words in a sentence (since, as we just saw, the translation of a given word can be dependent on what is after as well as before it in the sentence). The decoder, however, works sequentially and can only pay attention to the words in the sentence that it has already translated (so, only the words before the word currently being generated). For example, when we have predicted the first three words of the translated target, we give them to the decoder, which then uses all the inputs of the encoder to try to predict the fourth word.

To speed things up during training (when the model has access to target sentences), the decoder is fed the whole target, but it is not allowed to use future words (if it had access to the word at position 2 when trying to predict the word at position 2, the problem would not be very hard!). For instance, when trying to predict the fourth word, the attention layer will only have access to the words in positions 1 to 3.

In the original Transformer architecture, the encoder sits on the left and the decoder on the right. Note that the first attention layer in a decoder block pays attention to all (past) inputs to the decoder, but the second attention layer uses the output of the encoder. It can thus access the whole input sentence to best predict the current word. This is very useful, as different languages can have grammatical rules that put the words in different orders, or some context provided later in the sentence may be helpful to determine the best translation of a given word.
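The "no peeking at future words" rule is implemented as a mask inside the attention computation itself. Here is a minimal PyTorch sketch of scaled dot-product attention with a causal (lower-triangular) mask; the sizes are toy values, and a single tensor stands in for the separate query/key/value projections of a real model:

```python
import math
import torch

def attention(q, k, v, mask=None):
    # Scaled dot-product attention, as in "Attention Is All You Need".
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        # Masked positions get -inf, so softmax assigns them zero weight.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

seq_len, d_model = 4, 8
x = torch.randn(1, seq_len, d_model)  # toy "embeddings" standing in for q, k, v

# Causal mask: position i may attend to positions 0..i only.
causal_mask = torch.tril(torch.ones(seq_len, seq_len))
out = attention(x, x, x, mask=causal_mask)
print(out.shape)  # torch.Size([1, 4, 8])
```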
The attention mask can also be used in the encoder/decoder to prevent the model from paying attention to some special words — for instance, the special padding word used to make all the inputs the same length when batching together sentences.

Architectures vs. checkpoints

As we dive into Transformer models in this course, you'll see mentions of architectures and checkpoints as well as models. These terms all have slightly different meanings:

- Architecture: This is the skeleton of the model — the definition of each layer and each operation that happens within the model.
- Checkpoints: These are the weights that will be loaded in a given architecture.
- Model: This is an umbrella term that isn't as precise as "architecture" or "checkpoint": it can mean both. This course will specify architecture or checkpoint when it matters to reduce ambiguity.

For example, BERT is an architecture while bert-base-cased, a set of weights trained by the Google team for the first release of BERT, is a checkpoint. However, one can say "the BERT model" and "the bert-base-cased model."
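The architecture/checkpoint distinction is visible in code. A short sketch with the Transformers library, using the same names as above:

```python
from transformers import BertConfig, BertModel

# Architecture only: build BERT from its configuration. The weights are
# randomly initialized, so this model has learned nothing yet.
config = BertConfig()
untrained_model = BertModel(config)

# Architecture + checkpoint: the same skeleton, now loaded with the
# bert-base-cased weights, ready for feature extraction or fine-tuning.
pretrained_model = BertModel.from_pretrained("bert-base-cased")
```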
Treating your feet like crown jewels may sound a bit wacky, but they're worth it. Diabetes affects both the feeling in and the blood flow to your feet, which makes it easy for problems to sneak up. And foot problems, annoying enough by themselves, are just a step away from bigger problems.

"People who have uncontrolled diabetes can develop neuropathy," says Bonnie W. Greenwald, MD, Chief of Endocrinology at White Plains Hospital in White Plains, New York, "which causes a lack of sensation, including pressure and temperature sensations." Neuropathy can also affect the functioning of your foot muscles; your feet may lose proper alignment, or you may put abnormal pressure on certain areas of your feet when you walk. David Kerr, MD, Director of Research and Innovation at William Sansum Diabetes Center in Santa Barbara, California, says neuropathy, or nerve damage, affects at least 30% to 40% of people with diabetes. Unfortunately, neuropathy can be prevented or stopped only with good control of blood sugar.

What is type 1 diabetes?

Type 1 diabetes is an autoimmune disorder in which the immune system attacks and destroys the insulin-producing beta cells in the pancreas. As a result, the pancreas produces little or no insulin. Type 1 diabetes is also characterized by the presence of certain autoantibodies against insulin or other components of the insulin-producing system such as glutamic acid decarboxylase (GAD), tyrosine phosphatase, and/or islet cells. When the body does not have enough insulin to use the glucose that is in the bloodstream for fuel, it begins breaking down fat reserves for energy. However, the breakdown of fat creates acidic by-products called ketones, which accumulate in the blood. If enough ketones accumulate in the blood, they can cause a potentially life-threatening chemical imbalance known as ketoacidosis.

Type 1 diabetes often develops in children, although it can occur at any age. Symptoms include unusual thirst, a need to urinate frequently, unexplained weight loss, blurry vision, and a feeling of being tired constantly. Such symptoms tend to be acute. Diabetes is diagnosed in one of three ways – a fasting plasma glucose test, an oral glucose tolerance test, or a random plasma glucose test – all of which involve drawing blood to measure the amount of glucose in it.

"People with diabetes can also develop peripheral artery disease (PAD), or poor blood flow, which additionally puts them at risk for foot ulcers," says Greenwald. Poor blood flow makes it harder for the body to heal, which increases the risk for skin ulcers and gangrene, or tissue death. PAD affects about 20% of people age 55 and older, and according to the University of California-San Francisco Medical Center, people with diabetes have two to four times the risk of developing the condition compared to those without diabetes. PAD has no direct treatment except preventive measures such as controlling blood sugar, cholesterol and blood pressure, and quitting smoking. In some cases, a doctor can perform an angioplasty, a surgery that widens a narrowed artery, or an arterial bypass, in which a blood vessel is taken from one part of the body and used to bypass a blocked artery.

Those most at risk for neuropathy and PAD are people who smoke, drink to excess and have high cholesterol and poor glucose control, especially over a long period, says Kerr, who adds, "and those with bad luck." Either or both of these conditions can make your feet vulnerable to infections and deformity.
If you have neuropathy, says Kerr, you may feel tingling or burning, or shock-like sensations in your feet. "The symptoms of severe pain are predominantly in the evening," he adds, "so they may also interfere with sleep." Over time, your feet can get dry and cracked, which puts you at risk for infections and ulcerations. Sometimes people lose sensation altogether, Kerr says, which is particularly dangerous: "You may be unaware if you're standing on a stone, a nail or a piece of glass." In addition, feet can be scraped or abraded by ill-fitting shoes, and these wounds can become infected.

"In late-stage neuropathy, you may get Charcot arthropathy, which can lead to a collapse of the arch in the middle of your foot," says Greenwald. The early signs of Charcot arthropathy include redness and swelling, followed by bone fractures and dislocations when bones shift out of their usual positions. The foot may also lose muscle tissue. PAD can cause leg pain such as cramping when you walk. Your feet may feel cold, and you may also have little or no pulse in your legs.

The key to keeping your feet healthy is vigilance. As Kerr says, the saying "an ounce of prevention is worth a pound of cure" has never been more true. Take these steps to avoid problems.

• Control your blood sugar. Work with your doctor and healthcare team to keep your blood sugar within the limits your team has set for you. "Keep your A1C level" — the average of your blood glucose levels over three months — "to below 7% and your daily blood sugar normal," says Greenwald. Ask your doctor what to do if your numbers are too high or too low.

• Check blood pressure and fat levels. Have your blood pressure checked at every doctor's visit. The target for most people with diabetes is less than 140/90. Your cholesterol and triglyceride levels — both types of blood fats — should be checked at least once a year. For people with diabetes, the target is below 100 for LDL, or bad cholesterol; HDL, or good cholesterol, levels should be above 50 for women and 40 for men. Triglyceride levels should be below 150. (These targets are summarized in the sketch after this list.)

• Quit smoking. Smoking raises the risk of foot complications by narrowing and hardening your blood vessels so that fewer nutrients and less oxygen reach your feet. Smoking also keeps your cholesterol and blood pressure levels up and puts you at greater risk for heart attack, stroke, kidney disease and amputation. Ask your doctor for help in quitting. A government program offering free counseling is available at 1-800-QUITNOW.

• Get a podiatrist. Before you have a problem, Greenwald suggests you ask your doctor to refer you to a podiatrist, or foot specialist, who is experienced with diabetes. "Most people with diabetes see a podiatrist at least once or twice a year," he says. The podiatrist should check for ulcers between your toes, calluses, bone abnormalities, and the pulse in your feet (a lack of pulse indicates PAD). He or she should also test the sensation in your feet, probably using a 10-gram monofilament test, which assesses touch and pressure sensation: the doctor presses a single strand of fiber against various spots on your foot until you register sensation, noting the point at which the strand bends; this shows how much pressure you can feel.

• Inspect, and re-inspect. Do your own foot inspections daily, says Greenwald: "Check between your toes for redness and skin breaks.
For places you can't see easily, use a mirror or ask someone else to look." If you find a break in the skin or anything suspicious, see your doctor or podiatrist right away.

• Get shoe savvy. Before you put on your shoes, feel inside to make sure there are no stones or other debris. Avoid shoes that are too tight, pointy or high-heeled, or that have stitching inside that might abrade your foot. Buy shoes at the end of the day, when your feet will likely be a bit swollen. A good choice is athletic or walking shoes, which allow air to circulate inside the shoe (unlike vinyl or plastic shoes) and offer flexibility and support. Break new shoes in slowly, wearing them only an hour or two a day for the first couple of weeks. Don't go sockless or wear open-toed shoes; buy lightly padded seamless socks. And don't even think about going barefoot. "If you're not sure whether your shoes fit well, see a podiatrist who can check or custom-make your shoes," says Greenwald. Specially made shoes or inserts can protect your feet, which may have changed shape over time. Your insurance may help pay for such shoes.

• Wash your feet daily. Before you immerse your foot, check the water temperature with your hand, because your feet may not feel heat. Use warm water, not hot, and don't soak — it can dry out your skin and lead to cracking. Dry your feet carefully and apply moisturizing cream or baby oil to prevent dryness, says Greenwald. Don't put cream between the toes; the moist conditions can encourage infection.

• Trim weekly. After you've washed your feet, trim your toenails straight across and file off any snags with an emery board. While your feet are still wet, you can carefully smooth corns and calluses with a pumice stone. If you can't see well, can't reach your feet, or have tough or ingrown nails, let your podiatrist do the trimming. He or she can address problem calluses and corns, too, says Kerr, who cautions: "Do not use home devices like razors or anything that may pierce the skin."

• Sanitize your pedicure. If you get a pedicure, bring along your own utensils, boiling them for a few minutes before and after to avoid germs and bacteria. Kerr suggests you make sure your pedicurist knows you have diabetes; confirm his or her experience in dealing with people with diabetes, or at least that he or she knows the risks.

• Weatherproof. Always wear shoes on the beach, and put sunscreen on your feet to avoid sunburn. Wear warm boots in winter, and check your feet often so that they don't get frostbitten. If your feet are cold, don't put hot-water bottles or heating pads on them; put on socks instead.

• Keep blood flowing. The Department of Health and Human Services recommends 150 minutes of exercise per week. Provided your doctor gives you the thumbs-up, try exercise that's easy on the feet, such as walking, swimming or biking, and avoid higher-impact choices such as running and jumping. Not only will exercise increase the blood flow to your feet and elsewhere, but it also helps to lower your blood sugar and reduce your risk of heart disease. "Check your feet before and after exercise," says Greenwald. "There's almost never a downside to exercise, ever."
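The numeric goals in the list above lend themselves to a quick self-check. The following minimal Python sketch simply encodes the targets quoted in this article (A1C below 7%, blood pressure below 140/90, LDL below 100, HDL above 50 for women and 40 for men, triglycerides below 150); the function name and sample values are illustrative only, and the targets that matter are the ones your own healthcare team sets for you.

```python
# Illustrative only: thresholds are the general targets quoted in the
# article; your healthcare team may set different goals for you.

def check_targets(a1c, systolic, diastolic, ldl, hdl, triglycerides, sex):
    """Return a dict mapping each measure to True if it meets the
    commonly cited target for people with diabetes."""
    hdl_floor = 50 if sex == "female" else 40  # HDL goal differs by sex
    return {
        "A1C < 7%": a1c < 7.0,
        "BP < 140/90": systolic < 140 and diastolic < 90,
        "LDL < 100": ldl < 100,
        f"HDL > {hdl_floor}": hdl > hdl_floor,
        "Triglycerides < 150": triglycerides < 150,
    }

if __name__ == "__main__":
    results = check_targets(a1c=6.8, systolic=132, diastolic=84,
                            ldl=96, hdl=52, triglycerides=140, sex="female")
    for goal, met in results.items():
        print(f"{goal}: {'met' if met else 'not met -- discuss with your doctor'}")
```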
Despite your best intentions, foot problems can arise whether you have diabetes or not. However, the mix of foot problems and diabetes can easily lead to infections. Here are common foot troubles and how to deal with them.

• Foot infections and ulcers. Most infections are treated with antibiotic therapy, beginning with oral antibiotics. More serious infections may require intravenous antibiotic therapy, administered at an in-patient facility or at home by a nurse. Your podiatrist may need to remove any dead skin, a process called debridement. If your doctor determines you have PAD or a foot ulcer, says Kerr, he or she may suggest that you see a vascular surgeon. The surgeon will likely use x-rays or magnetic resonance imaging (MRI) to see whether the infection has gotten into the bone; if it has, you may need an antibiotic given intravenously, bone removal and/or a procedure to clear blocked arteries that are impeding your blood flow. Among Medicare recipients, about 8% of people with diabetes develop foot ulcers, and around 1.8% need amputation.

• Fungal infection of the nails. Fungal infections turn nails tough and colorful, usually unflattering shades of yellow, green, brown or black. Your doctor may prescribe an oral antifungal medication, remove the nail surgically or chemically, or perform a laser treatment that kills the fungus.

• Athlete's foot. Athlete's foot, another fungal infection, can cause itchy sores. Your doctor may recommend over-the-counter antifungal medication such as clotrimazole (brand name Lotrimin) or miconazole (Micatin), prescribe oral medications such as fluconazole (Diflucan) or itraconazole (Sporanox), or recommend topical medications such as butenafine (Mentax) or naftifine (Naftin).

• Heel infections. Infections on the heel are especially prevalent in older people who spend a lot of time in bed. Treat an infection with an antibiotic as soon as possible so that it does not get into the heel bone.

• Corns and calluses. If a corn or callus leads to infection, your podiatrist may prescribe antibiotics and remove the hardened skin, which otherwise can delay healing.

• Blisters. If you get a blister, cover it with a bandage. If it becomes infected, your doctor will drain it and give you antibiotics.

• Ingrown toenail. This is a toenail that has grown into your skin. It's typically treated by cutting away the part entering the skin. If the spot is infected, your doctor will likely prescribe antibiotics.

• Bunions. A bunion is a foot deformity: the big toe leans toward the second toe, making the joint where the big toe joins the foot thrust out, which leads to soreness and callused skin. Foot padding available at drugstores can help prevent shoe friction on the bunion, or your doctor may suggest surgery to correct the deformity.

• Dry skin. Dry skin can lead to cracked, infected skin. If infection sets in, the remedy is usually a prescription topical or oral antibiotic.

• Hammertoes. These are toes that curl under the feet because of weakened muscles. They can cause blisters and calluses, which can lead to infection. Your podiatrist may suggest corrective footwear or surgery to straighten the toe.

• Plantar warts. These warts resemble calluses and can become infected or bleed. Your podiatrist may suggest wearing a pad over a wart to prevent irritation, or decide to remove it.

Taking the time to treat your feet royally — inspecting them daily and protecting them from abrasions — can keep both them and you healthier. Start now, and keep at it: Prevention is a whole lot easier than treatment.
One of the leading essential oil educators, Jimm Harrison, has been my mentor and instructor in aromatherapy. Jimm created and teaches the aromatherapy program for Bastyr University, one of the most well-known naturopathic universities. He teaches across the globe, is the author of Aromatherapy: Therapeutic Use of Essential Oils for Esthetics, and has 30 years of experience in the essential oil world.

In Essential Oil Myths Part 1, I discussed the following widely-held beliefs:

- "X Brand" is the best brand of oil
- The label "Therapeutic Grade" is important
- X is an essential oil
- "X Brand" doesn't use solvents in extracting their oils
- Essential oils are destroyed by heat
- Choose raw essential oils
- GC/MS is important to determine quality
- An essential oil is high quality if it has a Nutrition Facts section on its label
- Standardization is not adulteration
- Adulterated oils don't work

This post continues what I've learned in Jimm's classes.

11. Reductionism can explain how and why an essential oil works

Many of the studies done on essential oils examine specific chemical compounds in the oil, and the biological effects of that compound. This provides valuable insight into the potential applications of essential oils, but it doesn't fully explain why and how essential oils work. In his classes, Jimm would delve deeply into studies on essential oil components, and how the scientific perspective provides insight into using the oil. But he told the class countless times, "The oil always goes beyond the chemistry."

Reductionism is a belief foundational to the scientific method, and it attempts to understand a whole by breaking it down and analyzing the tiniest particles. Author and biochemist Rupert Sheldrake points out the primary problem with reductionism: it's like trying to find out how a computer works by grinding it up and studying the molecules of nickel and copper. There is a place for reductionism in studying the human body, but we also need to complement it with the bigger-picture perspective of synergy.

Synergy arises when the whole surpasses the sum of its parts. The whole of the essential oil surpasses the sum of all its chemical compounds. Another term for synergy is emergent properties – characteristics that are not present in the components. A heartbeat is an emergent property, because it occurs in heart tissue but is not found when you analyze heart tissue cells in a petri dish.

Essential oils, just like the human body, are a Mozart symphony. But Western science, through the reductionistic approach, tries to study the symphony by playing each note on each instrument, one by one. We can learn a lot about essential oils by studying their individual notes – the chemical compounds – but it is also a sure way to lose the music. To expand into the untapped potential of using essential oils for wellness, we need to consider BOTH the individual notes and the symphony as a whole.

12. X oil is anti-carcinogenic

You can put a tumor cell in a petri dish, under sanitary laboratory conditions, dab just about any essential oil on it, and the cancerous cell will die. Jimm cautioned the class against taking anti-cancer claims about essential oils at face value. Once again, this reductionistic approach of putting an oil on a cancer cell can offer insight into essential oils, but a plant is not a laboratory, and a body is not a petri dish. A cancer cell dying in a dish does not mean frankincense oil will have anti-cancer properties when taken into the body.
13. X oil is scientifically shown to have these properties

A common fallacy involves recommending a specific application of an essential oil that has been studied for another application. For example, someone may recommend drinking grapefruit oil to support weight loss, based on a study involving the inhalation of grapefruit oil for weight loss. In the same way, the researched effects of an herb are attributed to its associated essential oil. I see this done in countless blog posts with titles like "Benefits of X Essential Oil."

It is not appropriate to equate the effects of an ingested herb with the effects of an essential oil used aromatically or topically. While an herb and an oil share similar botanical compounds, they often create different physiological responses. An herbal product (tincture, tea, CO2 extract, powder) shares some energetic and chemical commonalities with its associated essential oil. The herbal product, however, has properties not in the essential oil. Why? An essential oil contains only small molecules called volatile compounds, while the herb contains active plant compounds, such as alkaloids, which are too large to be present in an essential oil.

To apply study results to an essential oil, the method of the study must be examined. Go back to the original source and, at minimum, read the abstract to understand the study.

14. "X Brand" oils are "pure enough to take internally"

Did you know that a certain MLM essential oil company put nutrition fact labels on their oils as a marketing ploy? This company recommended internal use of their oils to position themselves as higher quality. Now, other essential oil companies are following this example. "Pure enough to take internally" is an MLM-created marketing term. Any oil company can make this claim about its oils. It signifies neither quality nor purity. It's that simple: it is a meaningless term. Whether essential oils should be consumed internally, however, is another question entirely.

15. Essential oils are dangerous to take internally

Because essential oils are used widely in the flavoring industry, people ingest essential oils all the time without knowing it. For example, gum is flavored with essential oils. So the FDA approves essential oils for ingestion, but this doesn't signify safety. The FDA fails miserably in policing the safety of food and cosmetic additives.

Voracious marketing by the MLMs has led to individuals ingesting essential oils, because "it can only help, and it can't hurt." In 2012, 180 moderate-to-major outcomes due to essential oil ingestion/exposure were reported to the American Association of Poison Control Centers. This is a low number considering the rising popularity of essential oils, and no deaths were reported that year. But that is still 180 individuals significantly harmed.

The real question here is whether essential oils should be taken internally. If the oils could work in an energetic, diluted, or olfactory application, then ingesting drops of oil in water does not honor them as a precious natural resource. Essential oils do offer potent results when used internally, but in the right application for specific situations. Anal suppositories are more effective than oral ingestion to deliver essential oils into the body, because suppositories allow the oil to bypass the breakdown processes of the liver.

Remember that widely cited study that showed grapefruit oil supported weight loss? MLM representatives recommended individuals take grapefruit oil in glasses of water.
This is one example of marketing that rather blindly focuses on selling product... because the original study had subjects just smelling the oil, not ingesting it.

16. Essential oils were widely used in "ancient times"

Widely circulating misinformation, used as a marketing approach, presents essential oils in sacred and medicinal applications during Biblical times. While distilling technology for essential oil production existed in this timeframe, essential oils were not a common ancient remedy. The high cost of producing the oils prevented access to anyone but the elite. Plant resins and herbal extracts, not essential oils, are the substances to which ancient texts refer.

Essential oil production arose in the 16th century, and the oils were used primarily in the perfumery industry and, due to cost, relegated mainly to royalty. Aromatherapy – the practice of using essential oils – began in the early 1900s and expanded with the broader availability of oils. The first book on aromatherapy, Aromathérapie, was not published until 1937.

17. Essential oils may be "relaxing" but don't create significant emotional effects

One essential oil myth claims that any emotional effect attributed to an oil arises due to placebo. Looking at essential oils through a scientific lens, however, we see that oils directly alter the emotional centers of the brain. When inhaled, the volatile compounds of essential oils travel into the nasal passages, where they directly trigger the brain's limbic system (the emotion and memory center). This instantaneous reaction bypasses the brain's intellectual, logical centers.

The sense of smell holds primal, survival-based roots in the human body, and it was the first sense to develop in humans. As a result, we create emotional connections to fragrances in our environment. Every smell is attached to a memory or emotion, but this varies from person to person. An essential oil will trigger the emotion you have connected to a certain smell.

18. Essential oils cause detox reactions, not allergic reactions

Another myth purports that any reaction to an essential oil is a detox reaction (a healing reaction), not an allergic reaction. It's true that essential oils can support detoxification by encouraging the body's detox processes. Specific essential oils promote lymphatic flow, which causes stored toxins to flush through the system. Other oils may support liver or gallbladder health, which may also circulate stored toxins so they can be released from your body.

But essential oils may cause true allergic reactions. If you experience allergy-like symptoms including hives, rashes, bumps, red skin, shortness of breath or nasal congestion, it is very likely that the oil is not agreeing with your body. Sensitization to an oil can cause these allergy-like symptoms. Sensitization occurs when the body becomes allergic to an essential oil due to excessive and prolonged use. Using an oil topically without proper dilution is a common path to an oil sensitivity.

Our consultant who sources our oils (Jimm Harrison) has a background in the skincare industry and has seen sensitization occur countless times. Lavender oil, surprisingly, is the common culprit because it is used so heavily. Fortunately, in many cases, a break from the oil will reverse the sensitivity. We work with Jimm to ensure that our oils are properly combined and blended with jojoba oil to minimize any risk of sensitization.
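The article stresses proper dilution but gives no figures, so as a rough illustration only: a widely cited aromatherapy rule of thumb is a 1-2% dilution for everyday topical use, and with the common but approximate conversion of about 20 drops of essential oil per millilitre, that target can be turned into a tiny calculator. Neither the percentages nor the conversion comes from Jimm Harrison; treat both as assumed placeholder guidance, not a recommendation.

```python
# Rough dilution helper. Assumes ~20 drops of essential oil per millilitre,
# a common but approximate conversion; follow a qualified practitioner's
# guidance for actual use.

DROPS_PER_ML = 20

def drops_for_dilution(carrier_ml, dilution_percent):
    """Approximate number of essential oil drops for a target dilution."""
    oil_ml = carrier_ml * dilution_percent / 100
    return round(oil_ml * DROPS_PER_ML)

# Example: a 2% blend in 30 ml of jojoba oil -> about 12 drops
print(drops_for_dilution(carrier_ml=30, dilution_percent=2))
```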
Additionally, an essential oil may trigger allergic symptoms due to an emotional response. An individual may have a traumatic association with a certain fragrance, such as pine. As discussed, essential oils directly stimulate the emotional center of the brain, which could trigger a PTSD-type response to a particular scent.

19. Use essential oils for ALL the things

Use essential oils where they are effective, enjoyable, and suitable. Do not waste them with unrestrained enthusiasm. Keep the following in mind when using essential oils:

- Sensitization – As discussed above, sensitization occurs when an individual becomes sensitive (or allergic) to a specific oil, frequently due to excessive use. If you use lavender oil daily and undiluted in your baths, in your dryer, in your diffuser, and in concentrated doses on your skin, a sensitization to the oil would not be unusual.
- Sustainability – It takes an enormous amount of plant material to produce a 5 ml bottle.
- Alternative plant medicine – Tinctures, teas, CO2 extracts, and other forms of plant medicine also offer healing capacities to the human body. Essential oils are a powerful form of plant medicine, but do not contain many of the active compounds found in whole-plant forms.

To use essential oils with integrity for the environment and the body, I believe these three foundations must not be compromised.

20. Anyone can become an essential oil expert

A marketing technique I see among MLM companies is self-promotion as an essential oil expert. But it requires more than a few classes to even scratch the surface of aromatherapy. Through Jimm's classes and consulting, I've learned the challenges of finding pure, sustainable, and dynamic essential oils. Most important, I've learned that essential oils are like wine. An essential oil aficionado has both market savvy and a highly refined nose, which requires years of experience to attain. He or she approaches the sourcing process with a passion for the people, the land, and the intention behind each oil.

21. Science is always better than a nose

Sourcing essential oils is more than finding pure, unadulterated oils (and that is challenge enough). When compared to a mass-produced oil, a "fine wine" oil has an unparalleled energetic, chemical, and fragrance complexity. It is the difference between Yellowtail wine and a fine vintage wine. Sourcing essential oils with this complex profile requires a connoisseur who has developed a highly discerning sense of smell, just like a fine wine expert has a refined palate. In some cases, an experienced and discerning nose is more effective than a GC/MS test at finding an adulterated oil. And a GC/MS test is useless when it comes to sourcing oils of this "fine wine" quality. Essential oils, as with every aspect of nature, will always work beyond the explanation of chemistry.

If you use essential oils in your healing journey, what are the questions you would have for Jimm and myself? We want to hear your experiences, successes, and challenges in navigating the essential oil world.
PORTENT OF DISASTER: THE SOCIALIST WOMAN AND SPECIAL PROTECTIVE LEGISLATION

The Socialist Women's debate over special protective legislation for women demonstrates how the combined legal apparatus of patriarchy and capitalism placed nearly insurmountable barriers in the path of unity of women with women or workers with workers, thus compounding the difficulties of the socialist-feminist enterprise. Some Socialist women demanded special legislative protection for working women. However, in a controversy foreshadowing later disagreements about the Equal Rights Amendment, others decried such protections as assuming, and reinforcing, women's social and alleged biological inferiority. Special treatment of women not only divided male from female workers, but middle-class from working women.

The first sustained discussion of this question occurred in October 1908, after the Supreme Court's Muller v. Oregon validated a ten-hour law for women. The Court had consistently voided protective labor legislation for men on the grounds that it infringed the right of contract. The Court, however, said that women were physically weaker than men, and additionally handicapped in the struggle for survival by their maternal functions. Furthermore, healthy mothers were a social asset, thus making women's health a legitimate object of "public interest and care in order to preserve the strength and vigor of the race." Finally, women were traditionally dependent on men for their sustenance; their freedom of contract could therefore be limited for their own protection.

Muller v. Oregon emphatically did not rest upon either women's rights or any putative rights of workers against industrial mass murder and torture. Rather, it assumed women's permanent biological, social, and economic inferiority to men. It reinforced women as a distinct legal category, divided from their male co-workers by the law of the capitalist state. But it did confer some benefits upon overburdened female workers.

Mary S. Oppenheimer attacked the decision as, on balance, a defeat for women. It would depress women's wages, confine women to low-paying jobs, render them uncompetitive with men, and prevent them from working longer hours even if survival dictated. Oppenheimer believed that the government could legitimately protect pregnant and nursing women, but not all women. The canard about the physical weakness of women, she said, "partakes of sentiment rather than justice." Women "are, in spite of their physical disabilities, the drudges of the race." Oppenheimer admitted that some Socialist party state platforms demanded special legislative protection for women, but speculated that "these clauses may well have been inserted by the men alone." Parce similarly opposed any special protections for women on the grounds that such protections were based upon male tutelage of women.

However, Mary Garbutt championed such laws on the grounds that "there should be no freedom of contract on the part of employer or employee, when human life and human virtue and happiness are in the balance. An interference is justifiable by the state at all times, when such conditions exist as wreck human beings." Although this rationale would also endorse protective legislation for men, Garbutt--perhaps acknowledging political reality--demanded only the protection "of that ever swelling army, of children and young women, utterly defenseless, except as society defends them." Conger-Kaneko also vigorously supported special protective legislation for women.
She criticized the International Socialist Women's Congress of 1910 for essentially adopting Oppenheimer's and Parce's position that Socialists should demand safe workplaces for both sexes rather than accepting special protections for women. Conger-Kaneko's position resembled Parce's own denunciations of male Socialists who opposed partial suffrage for women where that was the only feasible form; in that case, Parce had demanded that women take whatever they could get. Similarly, Conger-Kaneko now protested that Socialists who opposed protective legislation for women "would grant women nothing, until men had it first, or got it at the same time. It is much easier to secure legislation protecting women of this sort, than it is to secure it for men. Night work is much harder on women and infinitely more dangerous than it is on men.... There is not a question of class interests here. It is a question of sex interests in the working class."

Garbutt similarly asserted that "all enlightened states are awakened to the fact that wage earning women need special legislation for their protection..... There should be no freedom of contract on the part of employer and employee, when human life and human virtue and happiness are in the balance." Young women and children were "utterly defenseless, except as society defends them."

This justification of special legislation, however, raised precisely the issues which bothered its opponents, who demanded that women be placed in a position where they could defend themselves by their own efforts. Existing limits on women's ability to sign legal contracts, they said, were a badge of their social and legal inferiority, meriting repeal rather than applause. Men as well as women and children suffered in the charnel houses of capitalism; Socialists must safeguard the life and happiness of all workers.

Most Socialist women--or at least those whom SP members elected as their representatives--endorsed this line of reasoning in relation to the Socialist party itself: they opposed an amendment to the party's constitution reducing women's dues to one-third of those charged men. The Woman's National Committee charged that "this proposed amendment to the national constitution provides for a special privilege, with its implied inferiority and subservience, and smacks of that old chivalry which has ever granted to women these petty privileges and withheld from them equal responsibility with men, in civic and political affairs.... This proposed amendment is foreign to the ideal of equality and comradeship and not in harmony with the spirit of the Socialist party."

In confronting these issues, Socialist women faced a dilemma which often afflicts and divides victims of oppression: to what extent should revolutionaries compromise with an oppressive reality in order to modify or overthrow it? In this case, sexist cultural patterns, male fear of women's economic independence and competition on the job, and capitalist exploitation of women created a segmented job market that severely disadvantaged women. In the name of protecting women, the Supreme Court merely put its own imprimatur on this gendered system of class oppression. In declaring labor legislation protecting men unconstitutional, while allowing it for women, the capitalist legal system reinforced patriarchy even as it divided the working class, and women, against themselves. In so doing, it placed a nearly insurmountable barrier in the way of both feminism and workers' rights.
After World War I, disputes over the ERA helped stalemate the feminist movement for a generation even as the near impossibility of winning legal protection for male workers demoralized and defeated labor insurgencies. The Court's solicitude for women was itself limited; in 1923 the Court upheld what Florence Kelley termed "the inalienable right of women to starve" by voiding a minimum wage law for women on the grounds that the nineteenth amendment had accorded women equality. This further defused women's activism.

The intractability of this division became evident in discussions about mothers' pensions. The proposal for such pensions took diverse forms. The most common proposal guaranteed sustenance for a woman and her dependent children in the event of the death, desertion, or disablement of her husband. In this form, it was a relatively minor reform (however vital to individual beneficiaries) that left the causes of capitalist mass murder intact and (because it was available only for women) assumed the privatized home with the wife as the full-time caregiver.

Some women advocated more sweeping proposals: because mothers were the foundation of the state, society should reward every mother with a pension, thus paying her for her contribution to society. This would elevate and dignify motherhood by rewarding its services in monetary terms (the only ones respected in capitalist society), protect children, and afford women some independence from their husbands. Conger-Kaneko may have favored this form of pension for mothers. In a somewhat ambiguous statement, she pointed out that "wifehood and motherhood subject more women to invalidism and death each year than soldiers have been subject to during the combined wars of 130 years." Why not, she asked, pension mothers and make soldiers provide for themselves?

Parce similarly believed that, in an age when children were increasingly an economic burden rather than an asset, families should be recompensed for the expense and trouble of raising them. In a blistering attack on the natalist policies of some European governments, she asked "if the state needs the children, why doesn't the state pay for the service of producing them. The state pays for every other service it receives; is maternity the only thing on earth that isn't worth anything?" Such payments, she pointed out, would help fathers as well as mothers.

The incendiary nature of such advocacy was demonstrated when a male Socialist wrote what he doubtless felt was a strongly feminist defense of mothers' pensions. Walter Lenfersiek, who rarely contributed to The Socialist Woman, complained that most Socialists offered a variety of unpalatable choices to women. Some Socialist men would leave the condition of women virtually unchanged; others would give a wife one-half of her husband's earnings, leaving the husband with the choice of how much to earn; others would give the children an allowance, but leave the wife dependent on the whims of her husband. Lenfersiek, however, said that women "must demand economic independence even from their husbands, or they cannot be really free." He advocated that "women with children should receive an income from society purely as child-bearers, or as mothers," who make an indispensable contribution to society. The condition of mothers, he continued (echoing the Supreme Court), is of concern to everyone, "and not the business of the particular insignificant individual who happens to be the father of her children....
If her well-being is a social need, why should she not receive social pay? Is dressing the children and washing them and soothing their little pains any less useful than the work of a nurse in a hospital?" Other writers made this latter point: work done for pay outside the home was considered valueless when performed lovingly for family members inside the home.

Lenfersiek must have been astonished when his article evoked a spirited rebuttal from Belle Oury, also a very infrequent contributor to The Socialist Woman. Oury attacked Lenfersiek's article as a male inanity typical of those "who are unable to consider women apart from the home, and consider her as part of social processes." Socialism would not compensate mothers at all because "Socialism implies a free race--given a free race, motherhood will be voluntary, and I see no reason why it should interfere with any occupation which a woman may have." The idea that women would work until they marry, and then become mere breeders, "is a most astonishing proposition, and one which will not bear analysis." Women would not be reduced to, or paid for, their sex or maternal functions. Lenfersiek "associates motherhood with care of children. It is not [for] motherhood that he desires recompense, but the duties which have hitherto attended motherhood. Specialization will solve this problem"--kindergartens, day nurseries, and the general professionalization of childcare would ensure that children "receive the best care that society can furnish," which "cannot always be supplied by their mothers." Motherhood would become a merely sexual function, and "we will not have to consider recompensing women in any other capacity than that of worker."

Muller v. Oregon and the state protective laws it sanctioned evoked a contentious debate which foreshadowed the devastating post-war divisions between the "women-first" feminists who condemned special protective legislation, and the "social feminists" who supported it--a controversy which would absorb much of the energy of Crystal Eastman, a preeminent socialist-feminist of the 1920s.

Notes

Mary S. Oppenheimer, "Is It a Handicap?," SW October 1908.

Oppenheimer, "Is It a Handicap?," SW October 1908; Parce, "The Examiner's Glass," SW December 1908.

Mary Garbutt, "For Socialist Locals, Program for October, Labor Legislation in the United States," PW, October 1911. Garbutt surveyed existing laws protecting women and children; as scant legislation protected men, she focused on the two categories of persons for which the Supreme Court allowed protections.

JCK, "Action of Women's Congress," PW October 1910; Garbutt, "For Socialist Locals: Program for October," PW October 1911; "Resolution Adopted by the Woman's National Committee," PW June 1909.

William Chafe, The American Woman: Her Changing Social, Economic, and Political Roles, 1920-1970 (New York: Oxford University Press, 1972), p. 80.

For a moving account of a woman who needed such a pension, see "Pensions for Mothers," PW November 1910 (reprinted from The Social Democratic Herald).

JCK, "The Woman," [unsigned editorial statements], PW March 1909; Parce, "The Examiner's Glass," September 1910 and November 1910.

Lenfersiek, "How Shall Mothers Be Compensated Under Socialism?," PW March 1910.

Oury, "Woman's Relation to Society," PW June 1910.
A look at later seeding for winter wheat

By Carolyn King

Many winter wheat growers in Western Canada are wondering if the seeding window can be extended. A multi-year, multi-site Prairie study is working towards a tool that will help growers answer that question for their own conditions.

"Recently it has become increasingly challenging to seed winter wheat in the fall. Probably the biggest factor is crop rotations. In Manitoba and even Alberta and Saskatchewan, producers are growing longer-season crops, like corn and soybeans, or longer-season varieties of crops like canola, and so they are harvesting later. Also, with canola, with the shift towards direct harvest, the crops are staying longer in the field. So, farmers who are planting winter wheat are planting it later," notes Yvonne Lawley, a professor of agronomy and cropping systems at the University of Manitoba who is leading the study. "There is a lot of interest on the part of growers to know what is the impact of later planting and can they push the seeding window later than they think."

She adds, "Farmers are also thinking about soil health and wanting to increase the time where plants are covering the soil. Winter wheat is one of our best starting places for cover crops in Western Canada. If late planting allows growers to achieve goals related to yield and profits as well as soil management, then that is important to consider."

The recommended seeding window for winter wheat is from about mid-August to mid-September across most of the Prairies. Seeding after that window may mean the young plants won't have enough time to become properly established before winter sets in. For winter survival, the optimal growth stage is for the plants to have well-developed crowns going into the winter.

Lawley's study involves 13 sites across the Prairies, encompassing a very wide region with diverse growing conditions. The Manitoba sites include Carman, Kelburn Farm (just south of Winnipeg), Arborg, Portage la Prairie, Brandon and Melita. In Saskatchewan, the sites are at Melfort, Scott and Swift Current. The Alberta sites include Lethbridge-dryland, Lethbridge-irrigated, Medicine Hat and Falher. Most sites started the trials in the fall of 2013, though a few started in the fall of 2014. The winter wheat variety grown at all the sites is Flourish, which has fair winter survival.

At each site, the study compares six planting dates: Aug. 15, Sept. 1, Sept. 15, Oct. 1, Oct. 15 and Nov. 1. The study team had a four-day window on either side of those target planting dates to account for weather and other logistical concerns. For each of the six seeding dates, a fungicidal seed treatment (tebuconazole and prothioconazole) is compared with an untreated check. Seed treatments can help young plants deal with stresses like seedling diseases and cool growing conditions. The study is looking at the effects of seeding date and seed treatment on spring plant stand populations, timing of maturity and crop yield.

Some key trends

"One of the questions at the beginning of the study was just how many of these seeding dates could we actually plant in the experiment, across these wide areas?" Lawley says. The biggest challenge turned out to be the Aug. 15 treatment. She says, "Much like farmers, researchers have a hard time having a piece of land open to plant in August."

Lawley notes, "We were quite surprised that each of the sites were able to seed most of the treatments in October. The lowest percentage of sites that we had seeding on Oct.
15 was 92 per cent, or 12 out of the 13 sites, in the fall of 2015. Even for our Nov. 1 date, we were able to seed at 80 per cent of sites in 2013, 90 per cent of sites in 2014, and 70 per cent of sites in 2015." The Nov. 1 date is a dormant-seeding timing, when the soil is cold enough that the seed won't germinate immediately. She explains, "To be able to seed at this date, you just have to be able to get your drill through the ground – for instance, the soil could be frozen at night and trafficable during the day – and to have no snow cover."

In general, and as expected, later seeding reduced spring plant stands. For example, Lawley says: "There were significant planting date treatment effects at 10 out of 13 site-years in 2016. The amount of stand reduction at these 10 sites ranged from six to 80 per cent when comparing the Sept. 1 and Oct. 15 planting date treatments. When averaged over all 13 sites, there was a stand reduction of 30 per cent between the Sept. 1 and Oct. 15 planting dates and a 60 per cent stand reduction between the Sept. 1 and Nov. 1 treatments."

For most site-years, winter wheat yields declined with later planting, as expected. "In Manitoba and Saskatchewan, definitely our highest yielding planting window is between Sept. 1 and 15. This finding agrees with the research conducted by Brian Fowler in the late 1970s that identified early September as the optimal highest-yielding planting window for winter wheat," Lawley says. Fowler's study identified the optimum seeding window to be from mid-August to early September. Results from Lawley's study across a wider growing region suggest the optimum window has shifted and now ranges from late August/early September to early/mid-September.

The effect of planting date on yield differed by province. "In Manitoba, most sites tended to follow the pattern of decreasing yield with later planting. However, in years where most Manitoba sites were able to get their Aug. 15 treatments seeded, like 2015/16, the Aug. 15 planting had a lower yield than the Sept. 1 planting. So you can plant too early. That result is very consistent with research that Brian Fowler did many years ago," Lawley notes. "In 2016, yield trends were most consistent over the six Manitoba sites. On average in Manitoba in 2016, there was a 23 per cent yield reduction between the Sept. 1 and Oct. 1 planting dates and a 37 per cent reduction for the Sept. 1 versus Oct. 15 dates. At the three Saskatchewan sites, yield reductions between the Sept. 1 and Oct. 1 treatments ranged from zero to 35 per cent. In Alberta, yields in 2016 actually increased with later planting at two of three sites when comparing the Sept. 1 to the Oct. 1, 15, and Nov. 1 planting dates."

For the Manitoba sites, the growing conditions at the different sites are similar, which may be why the patterns in the relationships between planting date and yield were somewhat similar from site to site. The Saskatchewan and Alberta sites encompass more contrasting environments and the yield patterns were more complicated, so Lawley needs to do more analysis to draw further conclusions about the relationship between yield and planting date for those two provinces.

The seed treatment affected spring plant stands and yields, but not at every site in every year. "When we pooled the data over all of the site-years, we found significantly higher spring plant stands with a seed treatment in the 2013/14 year and in the 2014/15 year as well, but not in the 2015/16 year.
Most of the seed treatment effects were at sites in Manitoba," she explains. "We didn't always see a yield increase where we had a significant increase in plant stand. But again, averaging our yields over all sites and years, we saw a higher yield in 2013/14 and 2014/15 for the plots with the seed treatment compared to those without, but we didn't see that effect in 2015/16. And most of the sites that had a yield response to the seed treatment were in Manitoba."

Of course, winter and early spring weather conditions at each site are also a crucial factor in overwinter survival and yield. As well, Lawley thinks Fusarium head blight might be influencing the study's yield results. She explains that the study protocol didn't include spraying for Fusarium: "When we were setting the protocol in the summer of 2013, we were less concerned about Fusarium in winter wheat. Both with having chosen Flourish, which has been more susceptible to Fusarium than expected, and with having more Fusarium in winter wheat in general during the time period of the study, it was definitely a limitation of this study."

Lawley is currently having the grain samples from the study analyzed for Fusarium head blight. The disease favours warm, moist conditions before and during flowering, so she suspects the trends will be very site-specific, depending on which sites had conditions favouring the disease and when in the growing season those conditions occurred.

Regarding the effects of planting date on the days to maturity, she says, "Planting date had a significant influence on maturity date at 10 out of 13 sites in 2016. At these sites, a later planting date resulted in later maturity – on average, by eight days between the Sept. 1 and Nov. 1 planting dates."

A tool for late planting decisions

"One of the interesting things about the study's results is that the yields for the October and November seeding dates are not worse than they are. With planting in October, although we definitely have declining yields, the yields don't drop to zero," Lawley says. She adds, "Winter wheat needs to vernalize to be able to go from vegetative growth into reproductive mode, but even in our dormant-seeded winter wheat treatments, those plants that survived the winter were able to produce seed.

"So the answer to the question of 'Can you plant into October?' is more promising than we thought at the beginning of the study. But producers will be the ones to decide whether there is value to them in planting in those later windows. I think that will depend on their location and the weather, but they don't know what the coming year will be like when they're going out to plant."

As the study finishes up in 2017, Lawley will be using the data sets from the sites to develop a decision-making tool for farmers. The tool will allow users to compare later winter wheat planting with their other cropping options, including the likely yields and profits for the various crop options. She notes, "In one growing environment, you might be happy with a given winter wheat yield, and in another growing environment where you have other crop options, you might not be happy with it."

Down the road, the tool could include consideration of some of the risk factors in the late September to October seeding window, like the probability of getting precipitation within a certain period after seeding or the probability of having a certain number of days above 0 C after seeding. That would help users to better weigh the risks of later seeding.
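The article doesn't say how Lawley's decision tool will be built, but the comparison it describes (likely yield and profit for late-planted winter wheat versus another crop option) can be sketched in a few lines. In this hypothetical Python example, the yield factors echo the kind of averages reported above (for instance, a roughly 23 per cent yield drop from Sept. 1 to Oct. 1 in Manitoba in 2016), while the base yield, prices, costs, and alternative-crop profit are invented placeholders, not figures from the study:

```python
# Hypothetical sketch of the kind of comparison such a tool might make.
# Yield penalties echo the study's reported averages, but every number
# below is illustrative, not a recommendation.

# Fraction of the Sept. 1 yield expected for each later planting date
YIELD_FACTOR = {"Sept. 1": 1.00, "Oct. 1": 0.77, "Oct. 15": 0.63}

def expected_profit(base_yield_bu_ac, price_per_bu, cost_per_ac, planting_date):
    """Expected profit per acre for winter wheat at a given planting date."""
    yield_bu = base_yield_bu_ac * YIELD_FACTOR[planting_date]
    return yield_bu * price_per_bu - cost_per_ac

if __name__ == "__main__":
    spring_crop_profit = 120.0  # $/ac, assumed alternative crop option
    for date in YIELD_FACTOR:
        ww = expected_profit(base_yield_bu_ac=70, price_per_bu=6.50,
                             cost_per_ac=250, planting_date=date)
        better = "winter wheat" if ww > spring_crop_profit else "alternative crop"
        print(f"{date}: winter wheat ${ww:.2f}/ac vs ${spring_crop_profit:.2f}/ac -> {better}")
```

A real tool would replace these fixed factors with site- and year-specific data and, as Lawley suggests, layer in weather-risk probabilities for the late seeding window.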
Funding for the study is provided by Winter Cereals Manitoba, Saskatchewan Winter Cereals Development Commission, Alberta Wheat Commission, Ducks Unlimited Canada and Agriculture and Agri-Food Canada’s AgriInnovation Program. The collaborating sites hosting the trials are also key to the study’s success. They include: Agriculture and Agri-Food Canada research locations at Brandon, Portage la Prairie and Lethbridge; Westman Agricultural Diversification Organization; Prairies East Sustainable Agriculture Initiative; Northeast Agriculture Research Foundation; Wheatland Conservation Area; Western Applied Research Corporation; Smoky Applied Research and Demonstration Association; Farming Smarter; and University of Manitoba.
You are what you eat is one of the truest sayings in nutrition. Every morsel of food you consume enters your body and becomes part of you, affecting you at a cellular level. If you eat healthily, you can expect to be healthy. Conversely, if you eat unhealthily, your short and long-term health may suffer. Eat too much, and invariably you'll gain weight.

One way that people eat for better health and to manage their weight is the pescatarian diet (1). In simple terms, this involves following a vegetarian diet with added fish and seafood. Some pescatarians eat dairy and eggs too, but that's a matter of personal choice.

For some people, following the pescatarian diet is a healthy choice, while others do it for weight loss (2, 3). Others do it because they like the taste of the foods that are allowed. There may be ethical reasons for going pescatarian, too; the raising and slaughter of cattle for food is something many people object to on moral grounds (4). Fish and seafood also have a lower carbon footprint, so it's better for the environment (5, 6).

Whatever the reason, it's essential to understand all the pros and cons of any diet before you start it, and while the pescatarian diet can be beneficial, there are a few drawbacks too.

Pescatarian Diet – All You Need to Know

What Can and Can't You Eat on a Pescatarian Diet

The standard pescatarian diet is mainly vegetarian with added fish and seafood (7).

Benefits of the Pescatarian Diet

Adding fish to a vegetarian diet has several benefits. Avoiding all animal flesh can create nutritional deficiencies, and adding fish can help plug these gaps. The benefits of the pescatarian diet include:

More protein – While you can get enough protein from plant-based sources, it's not easy. Just a few portions of seafood per day will ensure you get all the protein you need. Protein is especially important for exercisers, as it's critical for muscle repair and growth. Fish contains just as much protein as meat, but it's generally lower in fat and calories.

More omega-3s – Omega-3 fatty acids are very healthy fats. Some plant foods contain them, such as flax seeds and walnuts, but seafood is one of the most readily available sources of these vital nutrients (8). Oily fish, such as salmon, mackerel, herrings, and sardines, are some of the best sources of high-quality omega-3s.

More vitamins and minerals – Even a very well-balanced vegetarian diet can be low in certain vital nutrients. In addition to omega-3 fatty acids, seafood is high in vitamins B6, B12, D, and E, zinc, calcium, and selenium, all of which are in short supply on a purely vegetarian diet.

Lower saturated fat intake – Most land animals are high in saturated fat. Saturated fat is high in calories and is linked to weight gain and cardiovascular disease. Replacing meat with fish will automatically lower your saturated fat and caloric intake, often without resorting to smaller meals.

More meal options – Adding fish to a vegetarian diet makes meal planning a whole lot easier. Instead of trying to balance plant sources of amino acids to fulfill your protein quotient, you can just have a piece of fish with your usual selection of vegetables. You'll also find eating out easier, as veggie options are often very limited and even unappetizing. In contrast, most eateries offer several fish and seafood options. You can usually ask for your fish to be grilled instead of fried, which is useful if you are trying to eat extra healthy and avoid unwanted calories.
In short, adding fish to your menu opens up a lot of extra menu options.

Drawbacks of the Pescatarian Diet

On the whole, there aren't too many drawbacks to the pescatarian diet. It's mostly healthy, providing you go easy on the fish sticks and stick to unprocessed fish and plenty of fresh vegetables, fruits, and whole grains. However, there are some concerns over mercury toxicity. Mercury is a heavy metal that is poisonous when consumed in large quantities or for sustained periods. Mercury is mainly found in bigger fish species, such as tuna, swordfish, king mackerel, and shark. The FDA (Food and Drug Administration) recommends that young children and pregnant and nursing mothers avoid these foods (10). Adults should limit their intake to 2-3 times per week to prevent toxicity.

Other drawbacks of going pescatarian include cost and food prep. Fish can be expensive, often even more so than meat. This is especially true for popular seafood, such as salmon, cod, lobster, and king prawns. Also, prepping and cooking fish can be time-consuming, and not everyone knows how, or wants, to gut and fillet a fish. However, most fishmongers will do this for you if asked. These drawbacks are far from insurmountable, and the pescatarian diet offers more pros than it does cons.

Pescatarian Diet Plan Ideas

The easiest way to make the move to the pescatarian diet is to replace the meat in your meals with fish or seafood. For example, have prawns on your salad instead of chicken, or try a fish fillet burger instead of a regular beef burger. You can also experiment with different fish and seafood cooking methods. Options include:

- Shallow frying
- Deep frying (generally not recommended for regular use)
- Sous vide (sealed in a waterproof bag and simmered in water)

Serve your seafood with your choice of vegetables and whole grains, e.g., broccoli florets, carrots, sweet potato, and/or wild rice. You can liven up any fish dish by using various herbs and spices and adding sauces.

What to Expect

Adding fish to a vegetarian diet, especially one that was previously low in protein, could help you lose weight faster, build muscle more easily, and recover better between workouts. You may also notice that the condition of your hair, skin, and nails improves. Replacing meat with seafood may also lead to weight loss, and your blood pressure may drop if it's currently elevated (9). You may also experience less bloating, as red meat can be hard to digest, while fish is usually lighter and easier to break down.

Increasing your fish intake, and your omega-3 intake by default, may also help reduce joint pain and other sources of inflammation. Omega-3s are potent anti-inflammatories. Finally, while eating more seafood won't automatically make you smarter, you may find that it helps with things like memory and concentration. Fish oil is the original smart drug!

Is It the Right Choice for You?

The pescatarian diet is one of the least controversial diets around. It's ideal for vegetarians who want to eat more protein and for carnivores who want to stop eating meat. Providing you eat healthily (i.e., not fish sticks, deep-fried prawns, etc.), going pescatarian should do you nothing but good. However, like any diet, you should avoid eating the same foods over and over again, so don't just eat tuna in the name of going pescatarian. Instead, eat lots of different types of fish and seafood. This will supply you with a broader range of nutrients and reduce your exposure to the toxins that could otherwise make this diet less healthy.
Also, consider the source of your fish. Seek out seafood from renewable stocks and avoid farmed fish where you can. This is especially relevant if you have given up meat because of ethical concerns.

A lot of diets are far too wacky or extreme for long-term use. They ban the foods you love or lead to severe hunger or cravings. You might be able to stick with it for a few days, or even a week or two, but eventually, your willpower will fail, and you'll give in.

In contrast, the pescatarian diet is more of a lifestyle choice than a weight loss diet. In fact, unless you watch your calorie intake, there is no guarantee that eating seafood in place of meat will lead to weight loss. You could even gain weight if you eat more than usual.

That said, in terms of health, going pescatarian makes a lot of sense. Seafood is generally lower in saturated fat and higher in heart-friendly omega-3s and other nutrients essential for your long-term health. Plus, there are very few drawbacks to following a pescatarian diet, providing you actually like fish, of course!

Whether you are a vegetarian who wants to eat more protein or a meat-eater looking for a healthier, more ethical alternative to beef, lamb, and pork, the pescatarian diet could be just what you are looking for.

Visit the Fitness Equipment Reviews homepage for more expert information & advice.

References:
1. Rosell M, Appleby P, Spencer E, Key T. Weight gain over 5 years in 21,966 meat-eating, fish-eating, vegetarian, and vegan men and women in EPIC-Oxford. Int J Obes (Lond). 2006;30(9):1389-96. doi: 10.1038/sj.ijo.0803305
2. Tonstad S, Butler T, Yan R, Fraser GE. Type of vegetarian diet, body weight, and prevalence of type 2 diabetes. Diabetes Care. 2009;32(5):791-796. doi: 10.2337/dc08-1886
3. Key TJ, Appleby PN, Rosell MS. Health effects of vegetarian and vegan diets. Proc Nutr Soc. 2006;65(1):35-41. doi: 10.1079/pns2005481
4. Fox N, Ward K. Health, ethics and environment: a qualitative study of vegetarian motivations. Appetite. 2008;50(2-3):422-429. doi: 10.1016/j.appet.2007.09.007
5. McDermott A. Eating seafood can reduce your carbon footprint, but some fish are better than others. 2018.
6. Scarborough P, Appleby PN, Mizdrak A, Briggs AD, Travis RC, Bradbury KE, Key TJ. Dietary greenhouse gas emissions of meat-eaters, fish-eaters, vegetarians and vegans in the UK. Climatic Change. 2014;125(2):179-192. doi: 10.1007/s10584-014-1169-1
7. Rizzo NS, Jaceldo-Siegl K, Sabate J, Fraser GE. Nutrient profiles of vegetarian and nonvegetarian dietary patterns. J Acad Nutr Diet. 2013;113(12):1610-1619. doi: 10.1016/j.jand.2013.06.349
8. Nichols PD, Petrie J, Singh S. Long-chain omega-3 oils: an update on sustainable sources. Nutrients. 2010;2(6):572-585. doi: 10.3390/nu2060572
9. Mozaffarian D, Lemaitre RN, Kuller LH, Burke GL, Tracy RP, Siscovick DS; Cardiovascular Health Study. Cardiac benefits of fish consumption may depend on the type of fish meal consumed: the Cardiovascular Health Study. Circulation. 2003;107(10):1372-7. doi: 10.1161/01.cir.0000055315.79177.16
10. FDA. Advice about Eating Fish. 2020.
Provide leaders with information to develop a well-planned crowd-sourced data application that improves communication and speeds up response times in disasters. Recognize the potential benefits of crowd-sourced and employee-sourced real-time geospatial data during emergency response. Present the steps taken by the United States Department of Agriculture – Natural Resources Conservation Service (NRCS) Texas State Office for the deployment of the "dead cow tool" and share lessons learned. Provide a framework of questions and items that need to be addressed for the successful deployment of a real-time data collection tool.

What Did We Do?

In response to Hurricane Harvey in the fall of 2017, the Texas NRCS GIS staff developed online reporting tools to collect real-time data related to damages and animal mortalities that could be used by employees and the public. ESRI's ArcGIS Collector application was selected for its ease of use, its ability to be used offline, and staff familiarity with the tool's programming language. In this case, NRCS already had the necessary licensing for ArcGIS Online accounts.

The "dead cow tool" is a near real-time reporting tool for the public to identify the locations, types, and magnitude of agricultural losses. This provides NRCS and other agencies with data to request funding for emergency response and recovery funds to assist local agricultural producers. However, significant concerns were raised about releasing the application for public use, so the data collection applications were then limited to a handful of NRCS employees within the disaster areas.

The Dead Cow Tool (displayed as the Hurricane Harvey Data Collector Map) was designed to collect the following parameters: Damage Type; Livestock Type; Number of Livestock Lost; Number of Livestock in Need; Accessibility; and Comments. There was also an option to add or take images and to add the location from a map previously downloaded onto a user's mobile device (if network connectivity was lacking). [Figures 1-6, example screenshots of the tool on an iPhone, are not reproduced here.]

Texas NRCS developed and deployed another tool for employees to complete Damage Survey Reports while in the field. "In Hurricanes Ike and Rita, staff went out in the field, took handwritten notes about the damage, wrote down the location, took pictures and then had to return to the office, to download and enter the information on their computer. They had to look up the latitude and longitude points from their notes to document the exact location and then save all that information in several different locations. It was a long process for our staff," says NRCS State Soil Scientist Alan Stahnke. "I knew there had to be a way to make it more efficient for them."

Stahnke had been working with Steven Diehl, GIS technician, and others on his staff for several months on an ArcGIS application, ArcCollector, based on ESRI map data. They had the basics down, and when Hurricane Harvey showed up on the radar, they knew they had to work fast to get the application ready for staff in the wake of Harvey's wrath. The resulting smartphone field tool – the Hurricane Harvey Damage Reporter – is a method to record the damage and collect information on all the points into a central database. (Littlefield, 2017)

The Damage Survey Report tool reduced the time needed by field engineers by approximately 50% compared with the previous method. The data was available to others with access as it was entered – thereby providing timely data to managers and leadership.
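To make the shape of a single report concrete, here is a minimal sketch in Python of the kind of record the "dead cow tool" collects. The field names mirror the parameters listed above; the types, the GPS and photo fields, and the class itself are illustrative assumptions, as the actual Collector feature schema is not published in this paper.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class MortalityReport:
    """One crowd-sourced agricultural-loss report (hypothetical schema)."""
    damage_type: str                   # e.g., "Flooding"
    livestock_type: str                # e.g., "Cattle"
    livestock_lost: int                # number of animals lost
    livestock_in_need: int             # number of animals needing assistance
    accessible: bool                   # can responders reach the location?
    comments: str = ""
    latitude: Optional[float] = None   # from the device GPS or a map pick
    longitude: Optional[float] = None
    photos: List[str] = field(default_factory=list)  # attachment file names
    reported_at: datetime = field(default_factory=datetime.utcnow)

# Example record; all values are invented for illustration.
report = MortalityReport(
    damage_type="Flooding",
    livestock_type="Cattle",
    livestock_lost=12,
    livestock_in_need=30,
    accessible=False,
    comments="Herd stranded; county road under water",
    latitude=29.31,
    longitude=-96.10,
)
```

Keeping every submission in one structured record like this is what lets reports flow straight into a central database as they are entered, instead of being re-keyed from handwritten notes.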
The new workflow also allowed the final reports to be developed by state office personnel, further reducing the time required of field staff and allowing them to take care of other pressing matters.

What Have We Learned?

Several lessons were learned. First, approve policies on data collection prior to the disaster – these need buy-in and flexibility. Second, decide how data will be released and identify typical reports. Third, develop data collection applications in advance, allowing the testing, training, familiarity, and formatting needed for user-friendliness. Fourth, select the correct data collection tool. Fifth, identify data collection alternatives if the application cannot realistically be utilized – power outages, lack of network connection, closed roads, flooded areas, etc.

It is important to prepare, plan, and train prior to a disaster to allow time to adjust and/or develop policies and reduce knee-jerk reactions. Data collection can have negative impacts if not properly administered and protected. Concerns identified during Hurricane Harvey included protecting the data collected, preventing the submittal of inappropriate language and/or photos, the potential for someone submitting data to believe that they had applied for or requested assistance, and data distribution. Our NRCS GIS specialists (in Texas and across the US) worked with ESRI developers to overcome some of the data protection issues and to prevent inappropriate material. However, clearance from leadership for public use of the application could not be obtained in a timely manner.

Real-time data collection is a useful tool for both internal customers and the public when faced with a disaster and allows the timely coordination of resources for rapid response and recovery. Disasters such as Hurricane Harvey require significant resources for response and recovery, and real-time data collection can aid in allocating those resources. With animal mortalities, it is important that animals in sensitive environmental areas are properly disposed of in a timely manner. NRCS has provided technical and financial assistance for proper carcass disposal following natural disasters to reduce the associated environmental risk.

Generating Reports and Maps with Information Collected — Recognize the market impacts of sharing reported losses. NRCS must follow applicable federal rules and regulations related to personally identifiable information. Generally, a report could be published with information grouped by county, provided there is more than one producer in the county with that type of livestock or commodity. For example, if there is only one farm in a county with emus, and the producer reported losses of 50% of their emus, USDA agencies could not share that data, as the producer could then be identified.

Policies are needed to address data collection with public interfaces. Consider modifying existing policies or creating new policies to allow the use of crowd-sourced data. The intent of the app and the intended use of the data must be clearly conveyed to users. USDA agencies raised concerns that the public would believe the tool indicated that they were applying for assistance, not simply reporting. (Stahnke, Jannise, & Northcut, 2018) Prior to collecting data, appropriate policies should be written that address how, when, where, and why the information is needed and how it will be used, in accordance with federal data collection requirements.
The policies should be reviewed internally by a variety of users to ensure that the policy is clear and provides adequate accountability. Buy-in from all levels is needed prior to launching a data collection system to the public. Depending upon the type of organization that is collecting the data, a variety of controls may need to be established to protect the data. Questions to work through include:

- How will the data be collected and shared? This area should allow flexibility. Allowing the public to enter data using their own devices may be necessary to obtain the data in a timely manner. What will public users gain by sharing their information?
- Will the data be shared with other agencies, non-governmental organizations (NGOs), etc.?
- If employees are allowed to use their own device, is there a possibility of a litigation hold on the personal device? (USDA – Forest Service, Mobile Geospatial Advisory Group, 2015)
- When does the data need to be collected? This may vary – for example, the number and type of livestock lost in a sensitive area may need to be reported as soon as the livestock are found, while flooded fields and associated losses, road closures, or areas with downed power lines may have some lag time in reporting over a period of several weeks as roads and properties become available for inspection.
- Where does the data that has been submitted get collected?
- Who is going to oversee the data collection and create the needed reports?
- Will the data be adequately protected? As mentioned, some of the data collected, particularly with potential images and audio embedded with file attributes, will likely include personal or sensitive data that must be protected.
- Why is the data being collected? Will it serve a purpose and be used? Can the data be potentially abused?
- What level of data integrity is required? Will certification or training be required for various users? Will additional weight be placed on data from "authenticated" or "certified" users?
- Who will be required to review the proposed data collection system? Should these reviews vary based on the scope of the project?

Setting the review levels and identifying who is authorized to deploy the tool in advance is helpful for knowing what rules need to be followed. Some flexibility should be provided to allow modifications and adaptations as needed during an emergency. Selecting the appropriate data collection tool and platform is critical to success.

The organization Principles for Digital Development has created a guidance document for mobile data collection (MDC). Additionally, they host the "Digital Principles Forum — an online meeting place for peer learning, connection building, and debate on the Principles for Digital Development. Together with you, we aim to build a community that connects ICT4D, information technology, and international aid and humanitarian development practitioners with thoughtful curated content, relevant conversation and quality opportunities to improve their work."

"How to Choose a Mobile Data Collection Platform" is a guidance document prepared by the Principles for Digital Development group. Below are some of the considerations that they have identified:

- Consider data and security needs, including personal or sensitive data.
- Consider the ecosystem – following a disaster, internet and wireless connections may be intermittent or non-existent.
- Identify and prioritize selection criteria, such as:
  - Short-term and long-term costs
  - Number of users, surveys, and items
  - Devices and data requirements for enumerators
  - Security and privacy compliance
  - Integration with other technology
  - Offline collection
  - Short Message Service (SMS) integration
  - Unstructured Supplementary Service Data (USSD) integration
  - Authentication and user roles
  - Skip logic and data parameters
  - Data analysis
  - GIS and mapping
  - Photos, audio, and video
  - Ease of setup and use
- Research MDC platform options.
- Rank options.
- Consider whether to customize an MDC platform.
- Select and test your platform. TIP: Be sure to test several devices in your context before making a final selection. (Principles for Digital Development, 2018)
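The "rank options" step can be made concrete with a simple weighted-scoring pass over candidate platforms. The sketch below is purely illustrative; the platform names, criteria, weights, and scores are invented, not taken from the guidance document.

```python
# Hypothetical weighted scoring for ranking mobile data collection platforms.
# Criteria weights should sum to 1.0; scores run from 1 (poor) to 5 (excellent).
weights = {
    "offline_collection": 0.30,  # critical when networks are down after a disaster
    "security_privacy":   0.25,
    "ease_of_use":        0.20,
    "gis_mapping":        0.15,
    "long_term_cost":     0.10,
}

candidates = {
    "Platform A": {"offline_collection": 5, "security_privacy": 4,
                   "ease_of_use": 3, "gis_mapping": 5, "long_term_cost": 2},
    "Platform B": {"offline_collection": 3, "security_privacy": 5,
                   "ease_of_use": 4, "gis_mapping": 2, "long_term_cost": 4},
}

def score(ratings: dict) -> float:
    """Weighted sum of a platform's criterion scores."""
    return sum(weights[criterion] * ratings[criterion] for criterion in weights)

# Print platforms from best to worst weighted score.
for name, ratings in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.2f}")
```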
When developing and testing applications, consider the following:

- Users' accessibility to AGOL, i.e., whether they need a login in their organization's Enterprise ESRI platform or whether the application is in the public domain on AGOL
- Amount of training required for users to input data
- Ease of navigation and the number of clicks required to complete the form
- Varying sizes of screens on user devices (small screens vs. tablets)
- Testing with a variety of different levels of users
- Types of reports and the training required for the administrator
- Duration of the application and its availability

Creating sample reports and identifying who can see specific data in advance will aid when a disaster does occur. In Texas, this type of data might be useful to the Texas Animal Health Commission and other agencies involved in Emergency Support Function #11 – Agriculture and Natural Resources Annex (ESF-11).

The following items should be further investigated for disaster-related activities:

- FEMA's National Incident Management System
- How to share some information with users who have input data – this lets them know that their data is being utilized for a worthy cause
- Identifying other agencies that are working on recovery efforts with the same groups
- Setting up mechanisms to share data automatically rather than relying on an individual to send out reports
- Methodologies for ground-truthing and screening data quickly

Explore the possibilities of utilizing ESRI's Workforce application to track the locations of employees for safety and workflow coordination.

It is important to consider that even with the best tools developed and ready for deployment, they might not be usable in the field if there is no power to charge the mobile data collection device or no ability to transmit the data back to the database. Solar chargers for employees' mobile devices might be helpful. Alternate methods of communication, such as, but not limited to, land lines, postal mail, drop-off locations, leaving surveys at the gates, and 800 phone numbers, should be established.

There are infinite possibilities for the collection and use of real-time data in a disaster. It is the opinion of the authors that the potential benefits greatly outweigh the risks of not obtaining and utilizing the data. We will continue to share the lessons learned to help others implement solid data collection tools.

Cherie LaFleur, P.E., Environmental Engineer, USDA – Natural Resources Conservation Service, Central National Technical Service Center, Fort Worth, Texas. [email protected]

Catherine Stanley, E.I.T., Water Quality Specialist, USDA – Natural Resources Conservation Service, Weatherford, Texas. [email protected]

References:
- Collins, C. (2017, October 27). Retrieved from Texas Observer: https://www.texasobserver.org/agriculture-losses-estimated-200-million-harvey/
- Fannin, B. (2017, October 27). Texas agricultural losses from Hurricane Harvey estimated at more than $200 million. Retrieved from AgriLife Today — Texas AgriLife Extension: https://today.agrilife.org/2017/10/27/texas-agricultural-losses-hurricane-harvey-estimated-200-million/
- Littlefield, D. A. (2017, September). NRCS Develops New Web App to Expedite Agency Response to Harvey. Retrieved from USDA-NRCS: https://www.nrcs.usda.gov/wps/portal/nrcs/detail/tx/newsroom/stories/?cid=nrcseprd1351676
- Principles for Digital Development. (2018, May 8). How to Choose a Mobile Data Collection Platform. Retrieved from Digital Principles: https://digitalprinciples.org/wp-content/uploads/PDD_HowTo_ChooseMDC-v3.pdf
- Stahnke, A., Jannise, P., & Northcut, M. (2018, May 15). USDA NRCS Texas Personnel. (C. Stanley, Interviewer)
- The Weather Company. (2017, September 2). Historic Hurricane Harvey's Recap. Retrieved from The Weather Company: https://weather.com/storms/hurricane/news/tropical-storm-harvey-forecast-texas-louisiana-arkansas
- USDA – Forest Service, Mobile Geospatial Advisory Group. (2015, August). Internal Document: Collector for ArcGIS Field Data Collection Pilot for Enterprise GIS Using ArcGIS Online.

Acknowledgments: Alan Stahnke, State Soil Scientist, NRCS, Temple, TX. Pam Jannise, State GIS Specialist, NRCS, Temple, TX. Steven Diehl, Cartographic Technician, NRCS, Temple, TX. Mark Northcut, Landscape and Planning Staff Leader, NRCS, Temple, TX.

The authors are solely responsible for the content of these proceedings. The technical information does not necessarily reflect the official position of the sponsoring agencies or institutions represented by planning committee members, and inclusion and distribution herein does not constitute an endorsement of views expressed by the same. Printed materials included herein are not refereed publications. Citations should appear as follows. EXAMPLE: Authors. 2019. Title of presentation. Waste to Worth. Minneapolis, MN. April 22-26, 2019. URL of this page. Accessed on: today's date.
If you've ever had the privilege of seeing a lacrosse game, you know that it's a physical sport. However, since many people are unfamiliar with the rules regarding contact in lacrosse, they often wonder how this element of physicality stacks up to other contact-heavy sports, like football.

Although lacrosse is a contact sport, it's not as physical as football. Lacrosse is centered on speed and finesse, whereas football is centered on strength and brute physicality. Nonetheless, lacrosse players do need an element of physicality to perform well during games.

Both lacrosse and football place emphasis on physical contact, but to varying degrees. To show that football is more physical than lacrosse, it is necessary to look at the different kinds of contact in each sport and perform an objective side-by-side comparison. Read until the end to see how the contact-related injury rates of football measure up to those in lacrosse.

Why Football Is More Physical than Lacrosse

With lacrosse and football both being contact sports, it's only natural for athletes to be curious as to which sport boasts the larger element of contact. Having played both sports myself, I can say with certainty that football is the clear winner when it comes down to pure physicality.

Physical Contact Happens at a Higher Frequency in Football

The major reason for this is that contact is a given on each and every play in football. Players clash and collide every time the ball is snapped, regardless of their position. There's no such thing as a football play where players aren't smashing into each other.

The battle between the offensive linemen and the defensive linemen is a fixed constant throughout the entire game. As soon as the play goes live, defensive linemen explode out of their stance straight into the offensive linemen. The offensive linemen, in turn, do everything within their power to hold their ground. This is one form of contact that never wavers.

In lacrosse, contact isn't nearly as much of a guarantee. Defenders are not allowed to physically contact players off-ball to any large degree; otherwise, they risk drawing a penalty. For this reason, contact is typically limited to the ball carrier and any on-ball defenders. The only time physical contact is allowed to any considerable extent off-ball is when there's a loose ball on the field and players are trying to outmaneuver each other for possession.

Lacrosse Contact Has More to Do with Technicality than Brute Force

In addition, the underlying purpose of contact differs between the two sports. Although lacrosse defenses do rely on physical contact to pressure ball carriers, these collisions are much more technical. Lacrosse defenders contact opposing ball carriers to dislodge the ball and force the opponent off their intended path. Football defenders contact opposing ball carriers for the sole purpose of delivering a blow forceful enough to knock them to the ground.

Since lacrosse players aren't actively trying to slam players onto the ground, they don't have to direct all of their physical strength into colliding with the other player. It's far more crucial for them to concern themselves with the accuracy and timing of their checks than with brute force.

The Play Structure of Football Favors Physicality More So Than Lacrosse

Plus, it's important to note that football is a game that revolves around brief, explosive bursts of strength.
Each play typically lasts only a couple of seconds. In this short span of time, football players are expected to physically clash with one another at maximum effort. They can rest once the whistle has blown and play has stopped.

In lacrosse, the play is not nearly as condensed. Play continues with few interruptions as players run up and down the field and possession moves back and forth between teams. During these transition periods, players are rarely contacting one another. They're merely focused on properly positioning themselves for the upcoming play. Since possession bounces between teams so frequently in lacrosse, there are plenty of times when contact is not a major concern for players.

For these reasons, physical contact is encouraged far more in football than in lacrosse.

Types of Physical Contact in Football

The superior physicality of football over lacrosse is also evident in the types of contact present within these sports. The main types of physical contact in football all involve body-to-body collisions with opponents and brute acts of strength, as described below.

When people think of physicality in sports, the first thing that usually pops into mind is the art of tackling. Tackling is arguably the greatest display of physicality on the athletic stage. When football players set their sights on tackling, they have only one goal in mind—to hit with a force hard enough to knock the other player down to the ground.

Although coaches do their very best to teach fundamental tackling technique, defensive players will bring down opposing ball carriers by any means necessary if they have to. They'll grab at the ankles, lock an opponent down with their arms, and come careening in at a full sprint straight into the ball carrier. This is the essence of what player physicality is all about. It's one player versus another to see who has the strength to come out on top.

Another prominent physical skill in football is blocking. With blocking, football players try to force defensive players back and prevent them from tackling the ball carrier. To be a solid blocker, offensive players need to get physical. There's no way a player can skirt around the issue of physicality when it comes to blocking. Without physicality, the opposing defensive player will easily slip past their guard and tackle their teammate. In sports, it doesn't get much more physical than purposefully trying to push another player away with as much force as humanly possible.

Blitzing is the antithesis of blocking. While offensive players attempt to block opponents from having a free shot at a tackle, defensive players attempt to drive these blockers out of their way to get to the ball carrier. The act of blitzing also involves a large degree of physicality in that defensive players must use their weight and strength to bull into opposing blockers to afford themselves an opportunity at the tackle. There aren't very many rules governing this sort of physical contact, so players are given a lot of leeway to use whatever physical skills they have at their disposal. Speed and agility do play prominent roles in blitzing, but these athletic skills pale in comparison to the role of physicality.

Types of Physical Contact in Lacrosse

Practically every form of physical contact in football involves forcing players to the ground or forcing players out of the way.
Lacrosse shares certain similarities with the physical contact in football, but in lacrosse the physical contact is more of a means to achieve an overarching goal than the overarching goal itself.

One particular area of lacrosse that makes the physical element so unique is defensive stick checking. At first glance, it appears as though defensive players are allowed to hack ball carriers with no regard for where their stick checks land or how much harm is being done to the opponent. If you look closely, however, you will realize that defensemen throw their stick checks in a calculated manner, specifically at the opponent's stick and gloves.

Defensemen utilize stick checks to disrupt the opponent's stick handling ability. This disrupts the offense's ability to create scoring opportunities and can generate turnovers. It is not the intent of defensemen to bring their opponent to the ground with hard, forceful stick checks. That level of physicality is reserved for football tackling. Instead, defensive players simply want to put pressure on opposing ball carriers to hasten their decision making and force them into making mistakes.

The other major form of physical contact initiated by lacrosse defensemen is body checking. With this type of physical contact, defensive players hold their hands together on their lacrosse stick and aggressively push ball carriers off their intended course. Body checks are an effective means of disrupting dodges, passes, and shots, especially when the opposing ball carrier is in close proximity to the goal.

It's important to mention that body checks are used sparingly. Defensemen rarely use body checks when the opposing ball carrier doesn't present a threat. Only when the opposing ball carrier actively tries to dodge and get to a certain area on the field is the body check used.

A lacrosse body check is not the same thing as a football tackle. Lacrosse defenders are not allowed to hold or grab an opponent and throw them to the ground. They're only permitted to push opponents on the front or side of the body with their gloves firmly together on the lacrosse stick. Any semblance of tackling in lacrosse will immediately warrant a penalty from the officials.

Defensemen aren't the only ones to initiate contact in lacrosse. Occasionally, ball carriers bring the fight to the defensemen by driving them backward and imposing their will to reach a certain area on the field. When an offensive player uses their body as leverage to drive a defender out of their way, it's called a bull dodge.

The bull dodge has been one of the more controversial areas of lacrosse because the rules committees are still trying to define the rules of engagement that govern the maneuver. As a result, the way the bull dodge is called hasn't been a model of consistency in recent years. Typically, bull dodging involves lowering your shoulder. Some lacrosse officials consider this move legal, whereas others consider it illegal. At the youth level, bull dodging is largely prohibited. Professional lacrosse leagues, however, usually let this physical contact slide.

Pro lacrosse player Myles Jones is the king of getting away with physical contact. If you want evidence, just take a look at the first play from his highlight tape. [Embedded highlight video not reproduced here.] Although physical contact isn't the primary reason why people watch lacrosse, there are flashes of superior physicality from time to time, as that clip shows.
Why Do Lacrosse Players Wear More Protective Equipment than Football Players?

Now that we've established that football emphasizes physicality more than lacrosse, you're probably curious as to why lacrosse players wear more protective equipment than football players. First off, let's take a look at the exact equipment requirements of each sport. The table below outlines the protective lacrosse equipment versus the protective football equipment.

|Protective Lacrosse Equipment|Protective Football Equipment|
|Helmet|Helmet|
|Shoulder Pads|Shoulder Pads|
|Gloves|(not worn)|
|Arm Pads|(not worn)|
|Protective Cup|Protective Cup|

Upon detailed examination, you will find that the only additional protective equipment lacrosse players wear is the arm pads and gloves. Other than that, the equipment standards are the same.

Lacrosse players wear extra equipment along the arms and hands to safeguard against defensive stick checks. As mentioned earlier, defensemen apply hard stick checks to an opponent's stick in an attempt to dislodge the ball. Sometimes, these defensive stick checks incidentally smack the hands and arms more than the stick. A metal shaft colliding with completely unprotected hands and arms is a recipe for disaster. For this reason, lacrosse players are required to wear gloves and arm pads to minimize the risk of injury. Since football players are not dealing with the prospect of a metal shaft impacting their hands and arms, there's no need for them to wear this equipment. It would only impede their ability to perform on the field.

Football Injury Rates versus Lacrosse Injury Rates

Unfortunately, an emphasis on physicality in sports is usually accompanied by an increase in injury rates. With players constantly colliding and delivering body blows to one another, there are bound to be instances where things don't go as planned, resulting in incidental harm. Thus, when people compare the physical elements of football and lacrosse, the topic of injury is always a source of intrigue. People want to know how the injury rates compare across these two different contact sports.

Fortunately, the NCAA also carries a strong interest in this area, as they want to do everything within their power to promote player safety above all else. For this reason, the NCAA has devoted a considerable amount of time and resources to investigating injury rates in both football and lacrosse.

To accurately compare the likelihood of injury in football versus lacrosse, it's necessary to look at injury rates rather than the total number of injuries per season. Since football has more participants than lacrosse, comparing the total number of injuries per season offers skewed results. The most notable statistic that compares the injury rates across these two contact sports is the number of injuries per 1,000 athlete exposures. In the following studies, an athlete exposure is defined as any time a player was involved in either a practice or a game.

From this information, we can reasonably conclude that there are generally more injuries in football compared to lacrosse. This is due in large part to the greater emphasis on physical contact in football relative to lacrosse.

Even though football has more injuries on average, the nature of these injuries is very similar to that of lacrosse. According to the NCAA football injury study, lower limb injuries accounted for 50.4% of all player injuries (source). In the NCAA lacrosse injury study, injuries to the lower extremities accounted for 58.3% of all injuries (source).
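As an aside for the numerically inclined, the rate metric used in these studies is easy to compute yourself. Here is a minimal sketch of the arithmetic; the injury and exposure counts are invented for illustration and are not the NCAA's figures.

```python
def injury_rate_per_1000(injuries: int, athlete_exposures: int) -> float:
    """Injuries per 1,000 athlete exposures.

    One athlete exposure = one player participating in one practice or game.
    """
    return injuries / athlete_exposures * 1000

# Hypothetical counts, chosen only to show the calculation.
football_rate = injury_rate_per_1000(injuries=900, athlete_exposures=100_000)
lacrosse_rate = injury_rate_per_1000(injuries=450, athlete_exposures=100_000)
print(f"Football: {football_rate:.1f} per 1,000 AEs")  # 9.0
print(f"Lacrosse: {lacrosse_rate:.1f} per 1,000 AEs")  # 4.5
```

Normalizing by exposures is what makes the comparison fair: football's larger rosters and longer seasons would otherwise inflate its raw injury totals.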
Those lower-limb statistics are eerily similar, suggesting that the effects of sports physicality may correlate directly with lower leg injuries.

Fortunately, concussions accounted for only a fairly small percentage of injuries in both sports. Only 7.4% of football player injuries were concussions (source). The NCAA lacrosse study had similar numbers, in that concussions accounted for 7.4% of injuries in competition and 4.2% of injuries in practice (source). These relatively low concussion percentages are a very good sign. Hopefully, these positive trends will continue in the future.

As a side note, keep in mind that while these are the studies currently available, more injury studies will likely come out in the near future. The NCAA football injury study documented injuries from the 2004/2005 season to the 2008/2009 season. The NCAA lacrosse injury study recorded more recent data, documenting injuries from the 2009/2010 season to the 2014/2015 season. Although these numbers are about as reliable as it gets, current injury rates may not exactly match what these studies present.

The Bottom Line

Put simply, football is more physical than lacrosse. Although lacrosse does involve a considerable amount of contact relative to other sports, it still does not compare to the physicality of football.
Class Size Matters

With the imminent start of the school year and class rosters being finalized as we speak, I want to talk about class size and the City of Chicago Municipal Code. Small class size has been proven to significantly improve student outcomes, especially for low-income and minority students. Yet in CPS we regularly see class sizes of 30+ students, numbers that can be particularly devastating for younger learners.

Education researcher William J. Mathis of the University of Colorado and the National Education Policy Center, in Research-Based Options for Education Policymaking, gave the following recommendations:

"• Class size is an important determinant of student outcomes, and one that can be directly determined by policy. All else being equal, lowering class sizes will improve student outcomes.
• The payoff from class-size reduction is greater for low-income and minority children. Conversely, increases in class size are likely to be especially harmful to these populations — who are already more likely to be subjected to large classes.
• While lowering class size has a demonstrable cost, it may prove the more cost-effective policy overall particularly for disadvantaged students. Money saved today by increasing class sizes will likely result in additional substantial social and educational costs in the future.
• Generally, class sizes of between 15 and 18 are recommended but variations are indicated. For example, band and physical education may require large classes while special education and some laboratory classes may require less." Mathis, W. J. (2016, June). Research-Based Options for Education Policymaking. Retrieved from https://nepc.colorado.edu/sites/default/files/publications/Mathis RBOPM-9 Class Size.pdf

The Municipal Code Was Revamped after the Our Lady of the Angels School Fire to Keep Students Safe in Times of Need for Emergency Egress

The point of small class size is so kids can quickly and safely leave a classroom, traverse the hallways, and exit the building in times of emergency, such as a fire, chemical spill, or gas leak. Many of these fire codes pertaining to schools were written in the aftermath of the tragic Our Lady of the Angels school fire of December 1, 1958, right here in Chicago, which killed 92 children and 3 Catholic nuns.

The Municipal Code was written with student safety in mind, yet CPS blatantly ignores the City Fire Codes (and CFD and the Mayor's Office allow it) as it shoves more and more students into classrooms designed for a predetermined and limited number of people based on classroom size. Overcrowding of any school is considered dangerous and hazardous to life safety, and an overcrowded classroom is not conducive to learning either.

As Parents and Community Members, We Can Help Hold CPS, the Chicago Fire Department, and the City of Chicago Accountable for Following the City Fire Codes

Below is a step-by-step process that anyone can use to alert the Chicago Fire Department about dangerous and hazardous classroom overcrowding.

Step 1. Know the Law! Title 13-56 of the Municipal Code of Chicago (see attached info at the end of this post) governs the occupancy of all Type I and II schools in the City of Chicago. Complaints for overcrowding must be filed with the Chicago Fire Department because the CFD is the Authority Having Jurisdiction ("AHJ").
If you witness overcrowding or what appears to be overcrowding in a classroom, or you have documented proof of overcrowding in a classroom, please take action by reporting it to the Chicago Fire Department via 311.

Step 2. Call 311. After wading through numerous prompts designed to direct your complaint to a department or agency that can't legally address complaints for overcrowding, you will be greeted by a 311 data entry specialist. The data entry specialist will say their name, which may be inaudible when first spoken. Kindly ask them to repeat their name and provide its spelling before going further. Write down the name they give you and repeat the spelling back to verify you have it correct. It's important to establish this information up front, and it will become clear later in this process why you will need their last name.

Step 3. Provide a description of your complaint. I'm providing a sample complaint that worked for me, so you can model a successful overcrowding complaint on it:

"Hello, I'm calling to file a complaint of overcrowding pursuant to Title 13-56 of the Municipal Code of Chicago for the overcrowding of my child's [ ] Grade classroom in Room number [____] at [____________________] located at [_________________]. The classroom in question had over [ ] persons in it on [____________], and the room is only big enough to have no more than [____________] persons in it. Please route my complaint to the Bureau of Fire Prevention in the Chicago Fire Department. They need to send a fire inspector to determine if this classroom is in fact overcrowded."

To fill in the blanks, you will need:
- Your child's grade level
- Your child's room number
- The school name
- The school address
- The total number of people estimated to be in the classroom during the school day. You can estimate this in a number of ways: (1) at open house, note the number of chairs in the classroom and the square footage of the classroom; (2) ask school staff for the total number of students in your child's individual classroom and the number of aides, as well as the square footage of the classroom.
- The date you observed the classroom, or a date when school is in session.
- The number of people allowed in the classroom at any given time, per the Municipal Code. How to calculate that number: note the classroom length and width in linear feet (e.g., a classroom 20 feet in length and 30 feet in width gives 20 x 30 = 600 square feet; divide by 20 square feet, which is the number of square feet per person set out in the Municipal Code: 600 ÷ 20 = 30 total people allowed in a 20 x 30 classroom). A small script at the end of this post automates this arithmetic.

Step 3a. After you finish communicating your complaint verbally, request that the 311 call taker read the documented text of your complaint back to you, so you are sure it says what you said.

Step 3b. If the 311 call taker replies saying she/he will have to route this complaint back to CPS or to an organization, agency, or department other than the Chicago Fire Department, please inform the call taker that the Chicago Fire Department is the Authority Having Jurisdiction regarding illegal overcrowding of Type I and Type II schools, and that if she/he does not route the complaint to the fire department you will need to speak to their supervisor (on the day I phoned in this very complaint, the call taker refused to route my complaint to the Fire Department).
I informed the call taker that the Bureau of Fire Prevention in the Chicago Fire Department receives referrals for overcrowding for schools, nightclubs, exhibitions, and places of public assembly. The situation at my child's school is dangerous and hazardous, should there be a need for emergency evacuation of the classroom.

Step 4. Don't Take No for an Answer. If the 311 call taker refuses to route your call to the Chicago Fire Department, you'll need to document the time of your call and ask for an SR# (Service Request number). You will need this number so you can bring it directly to the Chicago Fire Department, Bureau of Fire Prevention, for additional follow-up. My best guess is that the Fire Prevention Bureau will have to request that the 311 complaint be routed directly to them so it can get closed out by the appropriate governmental agency.

Step 5. Wait 3 to 7 days and call 311 back with your SR#. Request the disposition of your complaint and say something like: "Hello, I'm calling to follow up on a complaint I made recently for overcrowding in a school. My SR# is 16-01234. Can you please look it up and provide me with the disposition of the complaint?" The 311 operator may only be able to share limited information regarding the complaint, but at the very least they should be able to tell you whether or not the agency or department they routed the complaint to closed it out.

Step 6. Check back with the school to determine if the classroom was inspected for overcrowding. Call the school principal and ask if the Fire Department came to the school to investigate a complaint for overcrowding in the classroom you complained about. Give the particulars of the complaint to the principal, including the SR#. If the principal tells you that the inspector was there but didn't say one way or the other whether the complaint was valid or not valid (founded or unfounded), you will then have to submit a Freedom of Information Act (FOIA) request with the city via their website.

Step 7. File a FOIA Request with the Fire Department. Go online to www.CityofChicago.org to submit a Freedom of Information Act request to follow up on your initial complaint. Include the SR# and request any and all documentation generated by the Chicago Fire Department, from the Office of the Fire Commissioner as well as the Bureau of Fire Prevention. Request the fire inspector's name and badge number, as well as any written correspondence sent to the inspector regarding this complaint (include the SR#). Request the Notice of Violation ("NOV"), if one was issued to the Responsible Party for the Chicago Public Schools, for any and all violations written while on inspection at the school (insert name of school here).

If it is determined, based on the information you receive from your FOIA request, that your complaint for overcrowding a school classroom is valid, and you do not see a Notice of Violation written by the Fire Department to Chicago Public Schools CEO Janice Jackson or another Responsible Party, please post your information online for all to see. Don't stand by while the Fire Dept., CPS, and the City ignore the laws designed to protect the life safety of your child. Do something about it! Take your complaint to the Authority Having Jurisdiction (the Chicago Fire Dept.) to compel CPS to follow the law!

I hope you find this useful. Thanks for reading!
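As promised in Step 3, here is a minimal script that automates the occupancy arithmetic. The 20 square feet per person figure is the one this post attributes to Title 13-56; treat it as an assumption and verify the current Municipal Code text before filing a complaint.

```python
# Classroom occupancy check based on the Step 3 arithmetic.
SQ_FT_PER_PERSON = 20  # per this post's reading of Title 13-56; verify the code

def max_occupancy(length_ft: float, width_ft: float) -> int:
    """Maximum number of people allowed in a classroom of the given size."""
    return int(length_ft * width_ft // SQ_FT_PER_PERSON)

# The example from Step 3: a 20 ft x 30 ft classroom.
allowed = max_occupancy(20, 30)
print(f"Allowed occupancy: {allowed}")  # 30

# Compare against the headcount you observed (students + teacher + aides).
observed = 34  # hypothetical headcount, for illustration only
if observed > allowed:
    print("Classroom appears overcrowded -- consider filing a 311 complaint.")
```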
FACT SHEET: Obama Administration Engages Students, Educators, and Leaders on Climate Education and Literacy

Washington, DC – Today, the White House Office of Science and Technology Policy will host a Back-to-School Climate Education Event, bringing together over 150 outstanding students, educators, and education policy leaders from across the country. As part of this event, the Administration is announcing new commitments from Federal agencies and external collaborators to enhance climate literacy. These actions build on the call to action issued through the Climate Education and Literacy Initiative, launched in December 2014 to connect students and citizens with the best-available, science-based information about climate change. Addressing the climate-change challenge and the bold goals articulated in President Obama's Climate Action Plan will require a next-generation workforce that is equipped with the knowledge and skills to develop and implement solutions. The commitments being announced today by the Administration and independent entities support progress towards these goals.

Expanding Science On a Sphere® to Include Renewable Energy Data and ClimateBits Videos. Science On a Sphere® (SOS) is a global display system, developed by National Oceanic and Atmospheric Administration (NOAA) researchers, that uses computers and video projectors to display planetary data onto a six-foot-diameter sphere, analogous to a giant, animated globe. The Department of Energy (DOE) is announcing the release of new energy-related SOS datasets—representing wind, solar, and geothermal energy—to help people learn about renewable energy resources around the world via visualizations. Today, the following members of the Association of Science Technology Centers will begin presenting these new resources to their visitors:

- Boonshoft Museum of Discovery, Dayton, OH
- Children's City, Dubai, United Arab Emirates
- Danville Science Center, Danville, VA
- Denver Museum of Nature and Science, Denver, CO
- Imagination Station Science and History Museum, Wilson, NC
- 'Imiloa, the Astronomy Center of Hawai'i, Hilo, HI
- International Museum of Art and Science, McAllen, TX
- Maryland Science Center, Baltimore, MD
- McWane Science Center, Birmingham, AL
- Nurture Nature Center, Easton, PA
- Oregon Museum of Science and Industry, Portland, OR
- Orlando Science Center, Orlando, FL
- Science Central, Fort Wayne, IN
- South Florida Science Center and Aquarium, West Palm Beach, FL
- Techmania Science Center, Pilsen, Czech Republic
- The Wild Center: Natural History Museum of the Adirondacks, Tupper Lake, NY

Additionally, NOAA, the University of Maryland, and NASA's Goddard Space Flight Center have collaborated to produce a series entitled ClimateBits, minute-long videos that explain and visualize key concepts in climate science. These resources are available around the globe through NOAA's SOS network, which has more than 120 member institutions worldwide, including many of the world's largest science museums, visitor centers, zoos, aquariums, laboratories, and schools.

Announcing a National Climate Game Jam to Support Climate Literacy. In October 2015, game developers, climate scientists, and educators will gather at sites around the country to create new game prototypes that allow players to learn about climate change through science-based, interactive experiences. This event follows on an initial commitment from NOAA through the Climate Education and Literacy Initiative.
Today, NOAA is announcing that the collaboration has grown to include a number of partners who will host game jams: the Smithsonian Institution, the California Academy of Sciences, the Wilson Center, the Polar Learning And Responding (PoLAR) Climate Change Education Partnership, the Paleontological Research Institute, and STEMhero. Promising prototypes will be provided with development support to complete games for educators and students to use in the classroom. In November 2015, visitors to the Smithsonian National Museum of Natural History (NMNH) will be able to play selected games created during the jam. NMNH will also host an after-hours event for college students and young adults to provide an opportunity for game testing and conversations with experts and game designers.

Conducting Climate Workshops for Educators. As part of NOAA's portfolio of activities to strengthen ocean, climate, and atmospheric science education, the Climate Stewards Education Project (CSEP) will support six regional workshops in 2016, focused on topics such as climate impacts on natural resources, actions to mitigate and adapt to climate change, and learning through modeling and simulations. CSEP will bring hundreds of K-12 and college and university educators into active communities of climate learning, providing them with sustained professional development, collaborative tools, and support to build a climate-literate public that is actively engaged in climate stewardship.

Launching a New Partnership for Digital Climate and Ocean Education Resources. NOAA is announcing a partnership with the Public Broadcasting Service (PBS) to incorporate NOAA's trusted scientific content into PBS LearningMedia, making resources available to the 1.6 million registered users of this online educational platform. NOAA will collaborate with WGBH, STEM Lead for PBS LearningMedia, to incorporate assets from climate.gov and NOAA's Ocean Today sites—providing near real-time data and high-quality videos on topics relevant to the classroom. Moving forward, NOAA will work with WGBH to develop future content for the site.

Improving Climate Literacy of Federal Employees from Natural-Resource Management Agencies. This October, Federal natural-resource management agencies, including the Department of the Interior (DOI), the U.S. Department of Agriculture (USDA), the Environmental Protection Agency (EPA), NOAA, and the U.S. Army Corps of Engineers (USACE), will release a framework for building climate literacy and capabilities among their agency staff. The framework, which was called for in the Administration's Priority Agenda for Enhancing the Climate Resilience of America's Natural Resources, will define shared climate-education goals among agencies, describe a strategy for building overall workforce climate literacy and technical staff capabilities, and articulate an approach for collaborating on climate education with external partners and stakeholders.

Launching Climate Science and Communication Courses in 2015-2016 through an Interagency Partnership. In October 2015 in Anchorage, Alaska, the Earth to Sky partnership, led by the National Aeronautics and Space Administration (NASA), National Park Service (NPS), U.S. Fish & Wildlife Service (USFWS), and NOAA, will conduct the first of a planned series of regionally focused courses on climate science and effective communication techniques.
The course, which targets informal-education professionals, is sponsored by NASA's Arctic Boreal Vulnerability Experiment (ABoVE). Participants will develop action plans for using course content to educate many thousands of visitors to Alaska's National Parks, Bureau of Land Management sites, and other locations about the causes, consequences, and solutions associated with climate change. This pilot course will form the model for additional courses to be held in 2016 in various regions of the country.

Developing an Online Educator Framework. New science education standards represent an opportunity to prepare teachers to address the interdisciplinary nature of climate change and societal responses. NOAA, in collaboration with TERC and the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado Boulder, is committing to develop an online Educator Framework and supporting resources for Teaching Climate and Energy Literacy for NOAA's climate.gov this winter. The Framework will support K-12 teachers in providing instruction across the curriculum laid out in the National Research Council's Framework for K-12 Science Education. This collaboration will leverage the content of the Climate Literacy and Energy Awareness Network (CLEAN) collection and the Third National Climate Assessment to greatly expand the Teaching Climate section of climate.gov.

The White House Office of Science and Technology Policy issued a call to action in October 2014 for organizations to lift America's game in climate education. Today, following on the initial set of announcements articulated at the launch of the Climate Education and Literacy Initiative in December 2014, a number of external organizations are announcing additional commitments and activities that enhance climate literacy. These include: programs and projects to integrate best-available climate science into classrooms and visitor experiences; tools and resources to connect students, educators, and visitors to climate information; events and activities that engage students and educators in local climate solutions; training opportunities for educators; and more.

The Alliance for Climate Education (ACE). ACE educates young people about the science of climate change and empowers them to take action. ACE is committing to launch a first-of-its-kind digital climate-education program for high schools this fall. This online experience will be modeled after the live ACE Assembly, which provides an interactive climate-education experience for high-school students. ACE has already reached 2 million students in person; the digital program will enable it to scale its impact and educate and activate millions more students and educators in currently under-reached communities.

Climate Central. Next month, Climate Central will launch a new website (WxShift) that delivers all of the hallmarks of the weather forecast but uses them as a jumping-off point to teach people about climate change. WxShift will give people a deeper understanding of the environment where they live by connecting weather with local and relevant climate trends and analyses. Users will have access to information on key climate-change indicators, regularly updated videos of scientists linking weather to the larger climate picture, and daily climate-change news from journalists, meteorologists, climate scientists, and expert contributors.
This resource will serve as an easy, engaging way for people to learn more about climate change through the ways they experience the weather.

Climate Interactive. Today, Climate Interactive, in partnership with the Massachusetts Institute of Technology Sloan School of Management and the University of Massachusetts Lowell Climate Change Initiative, is launching the World Climate Project, featuring a "serious game" that puts players into the role of international climate-solution negotiators. The launch includes publicly available software, new facilitator materials, and a new release of the greenhouse-gas emissions simulation tool, C-ROADS. Climate Interactive aims to reach at least 10,000 people by December 2015, increasing awareness of the global challenges in addressing climate change in advance of the United Nations Framework Convention on Climate Change (UNFCCC)'s 21st Conference of the Parties (COP21).

CLEO Institute. The Climate Leadership Engagement Opportunities (CLEO) Institute, based in Miami, Florida, is announcing that this fall, it will begin to offer its Climate Science, Seriousness, and Solutions Training Program to officials across the region of the Southeast Florida Regional Climate Compact. The Compact region includes 108 municipalities and over 5 million residents and is highly vulnerable to the impacts of climate change. CLEO's Program will draw upon the Third National Climate Assessment, the President's Climate Action Plan, and current research on climate education and engagement to enhance understanding and climate readiness. CLEO piloted the program this summer, working with City of Ft. Lauderdale officials, to train more than 2,000 employees and inform their climate-resilience efforts. The initiative may evolve into a national program over the coming months and years.

Climate Generation: A Will Steger Legacy. Climate Generation will engage educators in the UNFCCC COP21 meeting, to be held in December in Paris, as both learners and climate communicators for their schools and communities. A delegation of ten Education Ambassadors will be sent to COP21 through Climate Generation's Window into Paris program. These teachers, representing diverse subject areas, grade levels, educational settings, and geographic regions, will share their unique perspectives with thousands of students, educators, citizens, and policy leaders through daily blogs and webcasts. Their firsthand accounts of COP21 will provide invaluable insight and opportunities to integrate climate change into classrooms throughout the country.

Earth Day Network and Rovio Entertainment. Earth Day Network and Rovio Entertainment, the creators of Angry Birds, have teamed up to create "Champions for Earth," a global Angry Birds Climate Change Tournament. The Tournament will take place in September, during Climate Week NYC and the United Nations General Assembly, and will engage millions around the globe. The game will include science-based climate messages that have been developed in consultation with experts from Federal agencies and academic institutions. Champions for Earth will connect players all over the world, including millions of "millennials," with climate-change information and actions.

Green Schools, Inc. Today, Green Schools, Inc. is announcing that it will hold the first annual "Bringing the Green Future Home" competition, open to K-12 students and schools, later this fall.
Under the competition, students will compete for scholarships and prizes by creating educational videos, blogs, and vlogs that inspire action on climate change and other environmental challenges. Students will then work to amplify their videos through a variety of social-media channels.

National Wildlife Federation (NWF). Today, NWF is launching new online climate-education resources for back-to-school. Climate Classroom is a new website developed in conjunction with the filmmakers and supporters of the documentary series Years of Living Dangerously. The lesson plans and resources (designed for students in grades 6-12 and college undergraduates) correspond to the science and subject matter presented in the documentary series; they encourage students to analyze the relevance of climate change to their daily lives and investigate how they, as individuals, can be part of the solution. This interdisciplinary curriculum highlights careers in science, provides writing prompts, and outlines service-learning projects to connect science to language arts, social studies, and life skills. Climate Classroom Kids—a companion resource for younger students (grades K-5), their parents, and educators—utilizes photography and stories of animals to enhance understanding of the effects of climate change on wildlife habitat and introduces students to actions that reduce carbon pollution. Combined, the two programs provide educational information for a wide range of learners and leverage NWF's educational resources and programs for schools, campuses, and homes.

TERC, the University of Texas at Austin, North Carolina State University, and Michigan State University. These institutions, with funding from the National Science Foundation (NSF), NASA, and NOAA, have partnered to develop a set of investigation-based EarthLabs modules to help students gain skills in Earth and climate sciences. This winter, these institutions will release the finalized set of nine modules, designed to help high-school students increase their skills to draw conclusions from data, make decisions based on evidence, and effectively communicate what they have learned about climate and environmental change. The capabilities that students will develop through their work with the EarthLabs modules will prepare students to meet the National Research Council's Framework for K-12 Science Education and the Next Generation Science Standards and train them to contribute as climate-literate members of the future workforce.

WGBH Boston. WGBH Boston, PBS's largest producer for television, the web, and mobile, is announcing that it will host a Forum on Digital Media for Climate Education in November 2015. Through support from The Kendeda Fund, this program will bring together 200 key stakeholders—including teachers, media producers, educational researchers, and policy makers—to explore the evolving landscape, products, and engagement models surrounding digital media in support of climate-science education. Presentations will be streamed live to online audiences nationally, and viewers will be able to participate in panel discussions via social media.
For an expanded understanding of how the conscious and unconscious mind create life experiences, you can read Chapter 4, The Unseen Anatomy: The Power of Thought and Emotion, in Edgework – Exploring the Psychology of Disease, by Ronald Peters, MD.

Thoughts and Thinking

Perhaps nothing more clearly distinguishes humans than sophisticated thinking. The ability to think at high analytical levels separates us from most, and perhaps all, other animals. Consider the role of thought in creating the human experience. Humans discovered the nature of electricity, gravity, physics, and chemistry – and how to utilize them for human benefit – through the process of penetrating, analytical thought. We create our daily experience, too — from the trivial to the crucial — largely by directing and focusing the personal force called thought. How we perform our work, which clothes to wear, where to go today, whom to call, which car to buy – such decisions define our day-to-day lives, and all are birthed by thought.

Attitudes and Beliefs

But only rarely do we start from scratch when we think. The direction our daily thoughts will travel in is largely determined by "packages" of prior thinking called attitudes, opinions, and beliefs. These packages become controlling forces in our lives, setting limits within which we live. Those with similar attitudes and beliefs become our friends and colleagues, reinforcing our way of thinking and helping to convince us that we are "right." … As William Shakespeare said, "There is nothing good or bad, but thinking makes it so…"

The Healing Power of Thought – The Placebo Effect

Some of the most direct evidence for the ability of thought to produce bodily change comes from study of the so-called placebo effect. A placebo is a treatment that produces a positive result simply because the patient believes the treatment will work. The placebo itself has no intrinsic medicinal properties to produce that benefit. Placebos have been shown in countless medical studies to produce a 35-45 percent improvement in whatever medical condition is being treated. Even doctors who are otherwise skeptical about the psychological aspects of illness accept, and often utilize, the power of placebos.

A dramatic example of the placebo effect occurred with the cancer drug Krebiozen back when it was being touted in some media stories as a potential new miracle treatment. A patient with metastatic lymph node cancer had been told by his doctor that he had only two weeks to live; when he heard of Krebiozen, he insisted on being included in clinical trials for the drug. After ten days on the medication, the man's tumors shrank and he was discharged from the program. Two months later, he heard a news report that the Krebiozen trials were producing discouraging results; shortly thereafter, his tumors reappeared. His physician, well aware that his patient's beliefs were at the bottom of this yo-yo effect, told him that the batch of Krebiozen used in the trials was later discovered to have been defective. He then injected the man with water, telling him that he was getting fresh, intact Krebiozen. Once again the patient went into remission. He remained in good health until two months later, when he heard that Krebiozen was "worthless" in the treatment of cancer. Two weeks later, he died of his original disease.

When the tranquilizer Valium was at its peak of medical use, 30 million Americans annually were taking it to reduce their stress and anxiety.
Since then, however, 30 double-blind studies have shown Valium to be no more effective than a placebo. Similarly, the drug Aureomycin was hailed as an effective treatment for atypical or viral pneumonia. It was given to thousands of patients over a four-year period until a controlled study showed that it too had no advantage over placebos.

The placebo effect has been demonstrated even more dramatically with surgery. Consider the following experiment conducted in 1958. At the time, obstructive coronary artery disease was treated by sewing the internal mammary artery, an artery in the chest wall, into the heart to improve blood flow to it and thereby reduce angina pain. Half the patients in the experiment received the full surgical procedure while the other half received only a skin incision with no arterial graft. Of course, the patients weren't let in on the ruse — both groups believed they had received the same, complete surgery. Following the operation, the two patient groups showed equal improvement in their angina symptoms, required less of the medication nitroglycerin (a common treatment for angina), and performed better on treadmill stress tests. In fact, clinical outcomes across the board were exactly the same for both groups.

In 1981, a similar project was carried out with patients in Denmark suffering from Meniere's disease. The symptoms of Meniere's disease can be very debilitating, with constant dizziness, buzzing in the ears, and eventual deafness. Thirty patients underwent surgery to receive an inner ear shunt to treat the symptoms, but unbeknownst to them, only fifteen actually received a shunt. At the three-year follow-up point, 70 percent of the patients in both groups were experiencing significant symptom relief.

Numerous other studies have confirmed the power of placebos, which is really the power of belief. For example, one third of people get as much pain relief from placebos as they do from morphine. In other research, placebos:

- Reduced gastric acidity in ulcer disease;
- Proved more effective than aspirin and cortisone in the treatment of rheumatoid arthritis;
- Lowered blood pressure; and
- Suppressed coughs as effectively as codeine.

In a 1974 Scientific American article entitled "The Ethics of Giving Placebos," the authors stated that "35-45 percent of all prescriptions are for substances that are incapable of having an effect on the condition for which they are prescribed." Dr. Halstead Holman of Stanford University has noted that "three of four of the most commonly prescribed drugs treat no specific illness."

Psychobiology – Your Body is Listening to your Thoughts

Thoughts, especially those energized by feelings, influence the body's chemistry and physiology by forming chemicals called neurotransmitters. First discovered in the 1970s, neurotransmitters are essentially the brain's messenger service because they enable the brain to talk to the rest of the body. They are chemicals that transmit nerve impulses across the synapses that separate nerve cells throughout the body. Also called neuropeptides, they include chemicals with specialized functions that go by such names as lymphokines, cytokines, and growth factors; many of the body's hormones are neurotransmitters, as well. The sum total of all this neurotransmitter activity creates what has been termed a neuropeptide network, allowing thoughts to influence the mindbody's physical processes.
Your immune system, digestive tract, muscles, heart, and all other organs "listen" to your thoughts via the messenger service of the neuropeptide network and react just as directly as if you were sending a mental message to your hand to lift a finger.

Researchers are now beginning to penetrate the universe of the mindbody beyond the level of the neurotransmitters. Using a sophisticated new tool called positron emission tomography, or PET, scientists can now explore tiny events inside the brain. PET studies have shown how thought physically activates the brain with changes in regional blood flow as well as changes in the flow of regulatory hormones and the powerful neurotransmitters. This new field of scientific research, the "psychobiology" of thought, bridges the previously wide gap between psychology and medicine and reveals even more convincingly than the discovery of neurotransmitters themselves that the movement of consciousness is a physical – that is, electrochemical – event in the mindbody.

Every thought that flows in your endless stream of consciousness and self-talk can potentially make a significant impact on your mindbody's physical systems (although the effect is greatly muted by the self-opposing nature of thought, as we'll see below). Indeed, if we were more aware of the impact of thought on the mindbody, we would take greater care to think "healthfully." Even those of us who have heard about the new advances in mindbody research tend to ignore the effects of our thoughts because thoughts are such a familiar and constant part of our daily experience.

No matter what we are doing, we must endure the endless mental chatter inside our heads. Most of it seems rambling and inconsequential, and even ridiculously contradictory. We want this and then we want that instead, and then we think we don't deserve any of it. One moment, our thoughts bring us down with worry, resentment, and self-criticism; the next moment, they make us giddy with inspiration, creativity, and excitement. It appears to us as if our thoughts just add up to inner background noise that cancels itself out and amounts to nothing in the end. But if we "do the math" on our own thoughts or just listen to ourselves and others around us, we discover that each person has definite patterns to their way of thinking. Some are pessimistic about any opportunity that presents itself, some optimistic. Some fret about every bump that appears in their path, some take even big bumps in their carefree stride.

Clearly, if thoughts make a difference to our health, we would want them to be as positive as possible. Yet at various times, even the cheeriest among us are weighed down by self-imposed limitations, discontents, concerns, and pain. It simply is not possible to control all of our thoughts and make them positive – there are way too many of them. But we can be aware of the tone and content of our thoughts as they stream by and redirect them to be more uplifting. We can "re-frame" thoughts and turn half-empty glasses into half-full ones. And we can even concentrate and focus our thoughts on specific goals, including healing ourselves. I believe that science will eventually discover that no drug has as much power over our body systems as focused thought.
Manuscript image from the Theological Works of John VI Kantakouzenos. Byzantine, Constantinople, 1370-1375

Earlier today I submitted an article to a journal using hagiography to dispute the idea that in the Byzantine world Christ was distant from worshippers, the unapproachable God, the Pantokrator on high. Because I'm a lumper, I could not help bringing in, alongside many references to Late Antique ascetic literature East and West, a couple of references to sixth-century art.

When people think of Christ Pantokrator, the image from many Eastern Christian domes springs to mind, such as this eleventh-century one from the Church of the Holy Apostles, Athens (my photo):

Or, better, this famous thirteenth-century mosaic in Hagia Sophia, Constantinople:

I didn't mention these in the article, but for many people they convey distance and inaccessibility of the divine Person. A Justinianic mosaic that I did mention, and which can be seen to communicate a similar idea of unapproachable glory and Light, is the icon of the Transfiguration from St Catherine's Monastery, Sinai:

The thing is, when I read monastic literature of the fifth and sixth centuries, I do not find an inaccessible Christ. Although there is evidence of the growing cult of the saints (see Peter Brown, The Cult of the Saints — and I've read a good review of Robert Bartlett, Why Can the Dead Do Such Great Things?), the piety of the vast majority of early Byzantine ascetic/mystic/monastic texts is Christocentric, and Christ is not far or unapproachable to His followers. Thus, the preferred Christ Pantokrator is sixth-century, not eleventh or thirteenth. Like the Transfiguration, it comes from St Catherine's, Sinai:

This is one of the most famous icons in the world; it is the first of the Pantokrator type, from what I recall, and one of the oldest Eastern Mediterranean images of Christ to survive. Christ Pantokrator appears here with one half of his face gentle, one half stern. He is the perfect Desert abba, if you think of it. The oldest Coptic icon may also be sixth-century and currently resides in the Louvre (I've seen it!). It is a different vision of Christ from any of the above — Christ and Apa Mena (my photo):

Christ is Pantokrator. All-powerful. For He is the Second Person of the Trinity. Christ can also be our Friend. For such is how He described Himself to His Disciples. Come, let us follow Him.

Last night, at the recommendation of Fr. Raphael, I watched the second episode of Andrew Graham-Dixon's 2007 documentary Art of Eternity, 'The Glory of Byzantium'. In this episode, he visits some of the great sites of Byzantine art from c. 500 with St. David's in Thessaloniki to 1315 with the Chora monastery in Constantinople.* In between, Graham-Dixon brought us to Ravenna — San Apollinare Nuovo and San Vitale — as well as Hosios Loukas Monastery in southern Greece and the 13th-c mosaics in Hagia Sophia. Before we go any further, the apsidal mosaic from St David's, Thessaloniki:

Along the way, he interviewed an iconographer and a priest. The iconographer explained to Graham-Dixon the idea that a Byzantine icon has 'rhythm'; this use of the word didn't make a lot of sense to an Anglophone, so he had the iconographer explain. Basically, Byzantine icons are drawn in such a way that the perspective is not at all like looking through a window (which would be the goal of Renaissance art). Instead, the idea is that the image is coming at you out of the wood on which it is painted.
This rhythm of the image, this movement towards you, explained the iconographer, brings you into the world of the image. It is no longer a strictly two-dimensional object, no longer merely geometric. You are participating in the image yourself. This, he said, is central to the Orthodox theology that lies behind Byzantine icons. In Orthodox theology, you know something by participating in it. We know God, to use the greatest example, by participating in God (an idea not without biblical precedent, if you get your ideas of 'participation' correct). Thus, when you behold an icon, you are participating in the image itself.

Later, Graham-Dixon interviewed an Orthodox priest. The priest explained many things about icons and their importance. He, too, brought out the significance of participation. When the Orthodox venerate an icon, they are not actually venerating the tesserae of the mosaic or the paint and plaster on wood, but the person of whom the image is made. This is an important distinction lost on many Protestants and, I fear, some Orthodox as well.

What this priest left out, or had cut from the interview, is the main reason icons are important. Icons are the full affirmation of the reality of the incarnation of God the Word. God became flesh and pitched his tent among us. He had two eyes, two ears, a mouth, and a nose. He walked on two (dirty) feet around the Judaean and Galilaean countryside. He touched real lepers with real hands. He preached with a literal voice from an actual larynx. He shed real tears at the death of Lazarus. He died a real death for us on the Cross. He rose again in just as real (if not more real) a body as before.

With the Incarnation, we behold God. Face-to-face. For 33 years He was literally present to the human race in an actual human form. This means that the prohibition on images doesn't apply to Jesus. We may not know exactly what he looked like, but we do know this — he looked like a man. Because he was a man. Fully human, yet fully divine. As in this mosaic over the doorway into the church in Hosios Loukas Monastery:

By allowing images of Christ, we produce a tangible way of celebrating a full affirmation of the incarnation of the Creator God Who irrupted into human history and changed things forever. When you take this theology of the incarnation that lies behind the theology of the icon, and then reflect on the idea of participation in Orthodox theology, you come across something beautiful. It is not truly the icon itself, the physical object, that is worthy of veneration, but the One Whom it represents. And when we behold an icon of Christ face-to-face, we are invited to participate in that image, to participate in the action of the image, to participate in the life of the Person Who looks upon us.

Graham-Dixon's documentary is not available on DVD for normal people, unfortunately — I got it from the Edinburgh College of Art library, recorded from TV onto a DVD. I think it may be illicitly available on YouTube, though…

*If you have a date in Constantinople, she'll be waiting for you in Istanbul.

Yesterday I took advantage of free museum day in Paris to make my third trip to the Musée nationale du Moyen-Age (aka Musée de Cluny). Some items not previously viewed were on display, sometimes because they've redone some displays, sometimes because I may not have paid enough attention in previous visits. Anyway, besides some really amazing ivory carvings that really deserve their own posts, I spent a little time with some fragmentary Gothic sculpture.
But I took no photos of that sculpture. Nonetheless, here's something like what I saw, only more complete, from the central portal of Chartres Cathedral:

These three figures, you will note, are extraordinarily tall and slender. Kind of cubey around the edges, too. This is in part because they are, in fact, pillars. Since they serve an architectural function and are not stand-alone statues, they have been adapted to the space. Nonetheless, I have seen other mediaeval figures like this; this slender, elongated form is not reserved for Gothic column-statues.

Byzantine icons also tend to be sort of … low on flesh, if you will. This lack of fleshiness was first pointed out to me on a trip to the Troodos Mountains in Cyprus, where our guide, Fr Ioannis, a painter and iconographer, asked some of the better-informed what struck them about some of the frescoes at Panayia Podithou. The answer: They look fleshier than a lot of classic Byzantine icons. Fr Ioannis explained that this was due to 'Western' (add, 'Renaissance and later') influences upon Cypriot iconography. A classic Byzantine icon will be long and slender with nary a muscle and certainly no bulk to the figures. I present to you, as an example, the fresco of the Transfiguration on the exterior of St Sozomen's Church, Galata, Cyprus (15th-c, my photo):

You can see here that the figure of Christ in particular is a fairly unfleshy sort. This Byzantine style is also visible in an ivory plaque in the Musée de Cluny depicting the coronation of Holy Roman Emperor Otto II and his wife, the Byzantine princess Theophano, in 982/3:

The above is not my photo; mine was taken on my phone and is blurry. Nonetheless, this Byzantinising image is also very religious. In the centre is Christ who legitimates Otto II's rule as Holy Roman Emperor; He is the largest, central figure, crowning the two monarchs who are dressed in Byzantine style. Compare it to my photo of this ivory carving of Christ crowning Romanos and Eudoxia in Constantinople a few decades earlier.

What this waifiness signifies, I believe (and as the post title suggests), is the spiritualisation of the human form. It is not necessarily a retreat from the goodness of the human body; the East and West are both accused of this in the Middle Ages, but if you take this visual evidence with the written evidence of the best theologians, you will see that there was a very strong belief in the inherent goodness of the human body as part of God's creation. In the Renaissance, the spiritual aspect of God's good act of creating was found in expressing naturalism, from Fra Angelico to Michelangelo. In the Middle Ages, it was found in expressing spiritual truth.

The human person is not only a psychosomatic unity but also inspired, inspirited, spiritual. We are tripartite — spirit-soul/mind/nous-flesh. Naturalism grounds the image in the present reality too much for the mediaeval mind. The goal is to set the mind on things above (Col. 3:2). Therefore, not only in subject matter (Christ, his Mother, the saints, Bible stories) but in style, that which is above is transmitted to our minds through the art. The human form is elongated. Its muscle is toned down. It is still explicitly and specifically human in these mediaeval images. But now it is also otherworldly. It is spirit-and-body all at once. In a human face visible to you on the street today, you cannot see the soul. In contrast, in a mediaeval statue, ivory, or painting, you see the inner as well as the outer.
This spiritualising impacts the art in more ways than this, but I'll leave it there for now. The next time you see such a form, I hope its intrinsic beauty will strike you to spend some time in your own nous looking for the spiritual and then moving upward to the God of the uncreated light.

I recently returned from a couple of weeks of research in Italy — a week and a bit at the Biblioteca Marciana, Venice, and two days at the Biblioteca Capitolare, Verona. The Marciana is located quite near the magnificent Basilica of San Marco, pictured at left. I, therefore, had many opportunities to visit San Marco and its mosaics. The style of art that predominates in San Marco is called Veneto-Byzantine (my other blog on that); it is very similar to Byzantine iconography but also shares traits with Romanesque (no surprise, since both directly descend from Late Antique Roman art).

Today's title takes a quotation from Rowan Williams' book The Dwelling of the Light: Praying with Icons of Christ — icons are theology in line and colour. This is evident throughout San Marco. Visually, I was impacted powerfully simply by setting foot inside San Marco — the brilliant gold of the place penetrates the soul. I could not help but utter praise to Almighty God under my breath the first two times I entered. The first time I entered I stumbled to a standstill as I beheld the glory of the place.

It is a truly beautiful space. The golden field represents heaven. The iconographical plan of the main domes is Christological. As a visitor coming from the West, you encounter the mosaics in anti-chronological order, but they begin in the East with the rising sun, with the golden apsidal half-dome of Christ Pantokrator, the all-powerful. Above the chancel is the Dome of the Prophets, foretelling Christ with the Lord Himself in the centre. The next dome is the Dome of the Ascension, then the Dome of Pentecost, then the Dome of the Last Judgement. As the sun traces its trajectory, so does the story of Christ.

In between the domes of Pentecost and Ascension is the Christological vault. On one side stands the Crucifixion, on the other the Resurrection (portrayed in the Byzantine manner as what we would call the Harrowing of Hell), and in the centre the Empty Tomb. Below, the mosaics tell the story of Christ's final days.

What is the theological significance of the main decorative scheme? The apsidal mosaic reminds us — Christ Pantokrator, Christ Almighty, Christ our God who was crucified for us. Christ who lives and saves us. There has often been a temptation to deny the fullness of Christ's divinity, from certain Gnostic groups to Jehovah's Witnesses. San Marco calls us to worship Christ as fully God. In the atrium, we see this in the depiction of the creation. For whom do we see making the animals? The cross in the halo gives away who this young, beardless man is — it is God the Son, the living Word, Christ, who creates. Yet He is not merely depicted in glory, but also on earth; we see not only his last moments, as I mentioned above, but also the Garden of Gethsemane, the temptation by the devil, and some of his teaching ministry.

The other temptation has been to deny his full humanity, from certain Gnostic groups to those who claim he was an alien — or those whose vision of him as God would swallow up the man he was as well. San Marco's mosaics are a testament to the full humanity and full divinity of Christ.
They are a reminder of what the great theologians of history — Athanasius, Augustine, Cyril, Leo, Aquinas, Palamas — have sought to balance in our minds as we think on our Lord and Saviour. And they do it through a medium accessible to all — the domes of a basilica.

This past Christmas, one of the gifts I asked for was the Byzantine crucifix pictured above, which was available at a local Christian book shop. I wanted it because of my interest in Eastern Orthodoxy as well as the aesthetic beauty of it; it now hangs above my desk at home where I can look upon a reminder of the glorious, cosmic event that transformed the world and my own life.

Upon looking at this crucifix, however, it became clear to me that this was not actually a Byzantine crucifix. It looks Byzantine, especially given that Christ is standing in victory, not hanging in agony, but it is not. A big give away, besides the western Mediaeval style of the figures, is the Latin inscription above our Lord's head: Jesus of Nazareth, King of the Jews. Not only is it in Latin, but it is not what Byzantine crucifixes tend to say. They tend to call him the King of Glory, not of the Jews.

Today I was wasting time on the interwebs, and, feeling like a bit of a fool, I now know where the crucifix is from: it is the famous San Damiano crucifix. I know I've seen images of this crucifix before I asked for the one at Christmas, but somehow it escaped me that they were one and the same.

The significance of this crucifix is as follows. Francis of Assisi, when he had recently rejected his father's wealth and all the rest, was in the old church of San Damiano praying one day. Hanging in the church was the crucifix in question. Praying before this crucifix, Francis was told by Christ to rebuild His church. Thinking the Lord meant San Damiano, Francis did just that. Later he learned that the church to be rebuilt was the one made of living stones, and Francis began his mission of evangelisation in earnest. This crucifix, then, is very famous and holds a special place in the world of Franciscans.

As a work of art, it is interesting, as was pointed out at The National Shrine of St. Francis of Assisi. In this article, we are drawn to three elements in this painting of the crucified God. First, we see the crowd of people beside/surrounding Christ, Mary the Virgin and John the Evangelist on one side, Mary the wife of Cleopas, Mary of Magdala, and Longinus the centurion on the other. At the second level, where Christ's arms are outstretched embracing the world, we see four angels and two men surrounding a black chamber — the empty tomb. The men are Sts. Peter and John, the apostolic witnesses of the empty tomb. Third, above our Saviour's head we see Him ascending into Heaven and greeted by angels.

The salvation event is before us here, with the crucified God standing central as a King in control, crucified, resurrected, ascending before our very eyes. Depicted in line and colour is the salvation of the world, the theology of our own lives. Here we see the centrepiece of our faith on the San Damiano crucifix, the crucifix that the Lord used to draw Francis to transform the world.
Is Machiavelli an Immoral Teacher of Evil?

This essay will consider whether or not Machiavelli was a teacher of evil, with specific reference to his text The Prince. It shall first be shown what it was that Machiavelli taught, and how this can only be justified by consequentialism. It shall then be discussed whether consequentialism is a viable ethical theory and whether it can therefore justify Machiavelli's teaching. Arguing that this is not the case, it will be concluded that Machiavelli is a teacher of evil.

To begin, it shall be shown what Machiavelli taught, or suggested be adopted, in order for a ruler to maintain power. To understand this, it is necessary to understand the political landscape of the period. The Prince was published posthumously in 1532 and was intended as a guidebook for rulers of principalities. Machiavelli was born in Italy and, during that period, there were many wars between the various states which constituted Italy. These states were either republics (governed by an elected body) or principalities (governed by a monarch or single ruler). The Prince was written and dedicated to Lorenzo de Medici, who was in charge of Florence, which, though a republic, was autocratic, like a principality. Machiavelli's work aimed to give Lorenzo de Medici advice on ruling as an autocratic prince (Nederman, 2014).

The ultimate objective at which Machiavelli aims in The Prince is for a prince to remain in power over his subjects. Critics who claim that Machiavelli is evil do not necessarily hold this view because of this ultimate aim, but because of the way in which Machiavelli advises achieving it. This is because, to this ultimate end, Machiavelli holds that no moral or ethical expense need be spared. This is the theme which runs constant through the work. For example, in securing rule over the subjects of a newly acquired principality, which was previously ruled by another prince, Machiavelli writes: "… to hold them securely enough is to have destroyed the family of the prince who was ruling them." (Machiavelli, 1532: 7). That is, in order to govern a new principality, it is necessary that the family of the previous prince be "destroyed".

Further, the expense of morality is not limited to physical acts, such as the murder advised, but extends to deception and manipulation. An example of this is seen in Machiavelli's claim: "Therefore it is unnecessary for a prince to have all the good qualities I have enumerated, but it is very necessary to appear to have them. And I shall dare to say this also, that to have them and always to observe them is injurious, and that to appear to have them is useful." (Machiavelli, 1532: 81). Here, Machiavelli is claiming that virtues are necessary to a ruler only insomuch as the ruler appears to have them. However, to act only by the virtues will be, ultimately, detrimental to the maintenance of the ruler, as they may often have to act against the virtues to quell a rebellion, for example. A prince must be able to appear just, so that he is trusted, but actually not be so, in order that he may maintain his dominance. In all pieces of advice, Machiavelli claims that it is better to act in the way he advises, for to do otherwise would lead to worse consequences: the end of the prince's rule. The defence which is to be made for Machiavelli, then, must come from a consequentialist viewpoint.
Consequentialist theory argues that the morality of an action is dependent upon its consequences. If the act or actions create consequences that, ultimately, are better (however that may be measured) than otherwise, the action is good. However, if a different act could, in that situation, have produced better consequences, then the action taken would be immoral. The classic position of consequentialism is utilitarianism, first argued for by Bentham. He claimed that two principles govern mankind – pleasure and pain – and that it is the drive to achieve the former and avoid the latter which determines how we act (Bentham, 1789: 14). This is done either on an individual basis or a collective basis, depending on the situation. In the first of these cases, the good action is the one which gives the individual the most pleasure or the least pain. In the second of these cases, the good action is the one which gives the collective group the most pleasure or the least pain. The collective group consists of individuals, and therefore the good action will produce the most pleasure if it does so for the greatest number of people (Bentham, 1789: 15). Therefore, utilitarianism claims that an act is good iff its consequences produce the greatest amount of happiness (or pleasure) for the greatest number of people, or avoid the greatest amount of unhappiness (or pain) for the greatest number of people.

This, now outlined, can be used to defend Machiavelli's advice. If the ultimate goal is achieved, the consequence of the prince remaining in power must cause more happiness for more of his subjects than would otherwise be the case if he lost power. Secondly, the pain and suffering caused by the prince on the subjects whom he must murder, deceive, or steal from must be less than the suffering which would be caused should he lose power. If these two criteria can be satisfied, then consequentialism may justify Machiavelli. Further, it is practically possible that such a set of circumstances could arise; it is conceivable that the suffering would be less should the prince remain in power. Italy at that time, as stated, was in turmoil, and many wars were being fought. A prince remaining in power would also secure internal peace for a principality and its subjects. A prince who lost power would leave the land open to attacks, and there would be greater suffering for the majority of the populace. On the subject, Machiavelli writes: "As there cannot be good laws where the state is not well armed, it follows that where they are well armed they have good laws." (Machiavelli, 1532: 55). This highlights the turmoil of the world at that time, and the importance of power, both military and lawful, for peace. In securing the ultimate end of the prince retaining his power, Machiavelli's advice would also secure internal peace and the defence of the principality. This would therefore mean that there would be less destruction and suffering for the people.

Once Machiavelli is defended by consequentialism, the claim that he is evil becomes an argument against this moral theory. The criticisms against consequentialism are manifold. A first major concern is that it justifies actions which seem to be intuitively wrong, such as murder or torture, on not just an individual basis, but on a mass scale. Take the following example: in a war situation, the only way to save a million and a half soldiers is to kill a million civilians.
Consequentialism justifies killing the million civilians, as the suffering will be less than if a million and a half soldiers were to die. If consequentialism must be used in order to justify Machiavelli's teachings, it must therefore be admitted that this act of mass murder, in the hypothetical situation, would also be justified.

A second major concern is that consequentialism uses people as means, rather than ends, and this seems to be something which is intuitively incorrect, as evidenced in the trolley problem. The trolley problem is as follows: a train, out of control, is heading towards five workers on the track. The driver has the opportunity to change to another track, on which there is a single worker. Thomson argues it would be "morally permissible" to change track and kill the one (Thomson, 1985: 1395). However, the consequentialist would here state that "morality requires you" to change track (Thomson, 1985: 1395), as there is less suffering in one dying than in five dying. The difference in these two stances is to be noted.

Thomson then provides another situation: the transplant problem. A surgeon is able to transplant any body part from one person to another without failure. In the hospital the surgeon works at, five people are each in need of an organ, without which they will die. Another person, visiting for a check-up, is found to be a complete match for all the transplants needed. Thomson asks whether it would be permissible for the surgeon to kill the one and distribute their organs to those who would otherwise die (Thomson, 1985: 1395-1396). Though she claims that it would not be morally permissible to do so, those who claimed that changing tracks in the trolley problem would be a moral requirement – the consequentialists – would also have to claim that murdering the one to save five is a moral requirement, as the most positive outcome would be given to the most people. Herein lies the major concern for a consequentialist, and therefore for Machiavelli's defence: that consequentialism justifies using people as means to an end, and not as ends in themselves.
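To see the arithmetic that forces the consequentialist's hand in both cases, here is a minimal sketch in Python. The one-unit-per-life welfare scores and the helper `best_action` are illustrative assumptions of my own, not anything Bentham or Thomson formalise:

```python
# A toy version of the greatest-happiness rule described above: score each
# available action by summing the welfare changes of everyone affected,
# then pick the action with the highest total.

def best_action(actions):
    """Return the action whose summed consequences are greatest."""
    return max(actions, key=lambda name: sum(actions[name]))

# Thomson's trolley problem: diverting kills one worker, doing nothing five.
trolley = {
    "divert the trolley": [-1],
    "do nothing": [-5],
}

# Her transplant problem has the same arithmetic, which is the point:
# a strict consequentialist must treat the two cases alike.
transplant = {
    "harvest the visitor's organs": [-1],  # one visitor dies, five patients live
    "do nothing": [-5],                    # five patients die
}

print(best_action(trolley))     # -> divert the trolley
print(best_action(transplant))  # -> harvest the visitor's organs
```

Because the rule compares only summed totals, the identity of the person who bears the harm never enters the calculation – which is precisely the gap the following objection targets.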
A criticism of this is famously argued for by Kant, who claims that humans are rational beings: we do not state that they are "things", but instead call them "persons" (Kant, 1785: 46). Only things can permissibly be used merely as a means, and not persons, who are in themselves an end (Kant, 1785: 46). To use a person merely as a means rather than an end is to treat them as something other than a rational agent, which, Kant claims, is immoral.

This now must be applied to Machiavelli. In advising the murder and deception of others, he is advocating treating people as merely a means, by using them in order to obtain the ultimate end of retaining power. Though this ultimate end may bring about greater peace, and therefore pleasure for a greater number of people, it could be argued that the peace obtained does not outweigh the immoral actions required in creating it.

Further, it must also be discussed whether Machiavelli's teaching is in pursuit of a prince retaining power in order to bring about peace, or whether it is in pursuit of retaining power simply that the prince may retain power. The former option may be justifiable, if consequentialism is accepted. However, this may not be the case for the latter, even if peace is obtained. Machiavelli's motives will never be truly known. Such a problem as this demonstrates further criticisms of consequentialism, and therefore of Machiavelli himself.

If he were advising the achievement of power for the sake of achieving power, he would not be able to justify the means to this end without the end providing a consequentialist justification – and the end would fail to provide one if, ultimately, the prince retained power but there was not a larger amount of pleasure than would otherwise be the case. To pursue power in order to promote peace is perhaps justifiable. However, as is a major concern with the normative approach of consequentialism, the unpredictability of consequences can lead to unforeseen ends. The hypothetical prince may take Machiavelli's advice, follow it to the letter, and produce one of three outcomes:

- Power is obtained and peace is obtained.
- Power is obtained but peace is not obtained.
- Neither power nor peace is obtained.

Only in the first of these outcomes can there be any consequentialist justification. However, this then means that there are two possible outcomes in which there cannot be a consequentialist justification, and it is impossible to know, truly, which outcome will be obtained. This is the criticism of both Machiavelli and consequentialism: that the risk involved in acting is too great, with such a chance of failure and therefore of unjustifiable actions, when it is impossible to truly know the outcomes of actions. The nature of the risk is what makes this unjustifiable, in that the risk is against human life, wellbeing, and safety. Machiavelli condones using people as merely a means to an end without any guarantee of the positive end that a consequentialist justification requires.

In conclusion, it has been briefly demonstrated what Machiavelli put forward as his teachings. It was further shown how the only justification for Machiavelli's teachings is a consequentialist approach. However, criticisms put against Machiavelli and consequentialism, such as the justification of mass atrocities, the use of people as means to ends, and the unpredictability of pragmatic implementation, show it to fail as an acceptable justification of his teachings. Therefore, it is concluded that Machiavelli is a teacher of evil.

Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. Accessed online at: http://socserv.mcmaster.ca/econ/ugcm/3ll3/bentham/morals.pdf. Last accessed on 26/09/2015.

Kant, I. (1785). Groundwork for the Metaphysics of Morals. Edited and Translated by Wood, A. (2002). New York: Vail-Ballou Press.

Machiavelli, N. (1532). The Prince. Translated by Marriott, W. K. (1908). London: David Campbell Publishers.

Nederman, C. (2012). Niccolò Machiavelli. Accessed online at: http://plato.stanford.edu/entries/machiavelli/. Last accessed on 02/10/2015.

Thomson, J. J. (1985). The Trolley Problem. The Yale Law Journal, Vol. 94, No. 6, pp. 1395-1415.
By Dave DeWitt

On an August evening in A.D. 595, the Loma Caldera in what is now El Salvador erupted, sending clouds of volcanic ash into the Mayan agricultural village of Cerén, burying it twenty feet deep and turning it into the New World equivalent of Pompeii. Miraculously, all the villagers escaped, but what they left behind gives us a good idea of the life they led, the food they ate, and the chile peppers they grew.

The Ancient Ash

In 1976, while leveling ground for the erection of grain silos, a Salvadoran bulldozer operator noticed that he had plowed into an ancient building. He immediately notified the national museum, but a museum archaeologist thought that the building was of recent vintage and allowed the bulldozing to continue. Several buildings were destroyed. Two years later, Payson Sheets, an anthropologist from the University of Colorado, led a team of students on an archaeological survey of the Zapotitan Valley. He was taken to the site by local residents and quickly began a test excavation, and radiocarbon dating of artifacts proved that they were very ancient. He received permission from the government to do a complete excavation of Cerén. The site was saved.

The crew of the Cerén excavation

Dr. Sheets and his students returned for five field sessions at Cerén, most recently in 1996. Their discoveries are detailed on their web site, http://ceren.colorado.edu. One of the most interesting things they discovered was, in the words of Dr. Sheets, "We had no idea that people in the region lived so well 14 centuries ago."

The ash preserved the crops in the field, leaving impressions of the plants. The plants then rotted away, leaving perfect cavities, or molds. Using techniques that were developed at Pompeii, the archaeologists poured liquid plaster into the cavities. By removing the ash, the ancient fields were revealed and could be studied. Interestingly, the Native Americans of Cerén used row and furrow techniques similar to those still utilized today; corn was grown in elevated rows, and beans and squash were grown in the furrows in between. In a courtyard of a building, "We even found a series of four mature chile plants with stem diameters over 5 centimeters (2 inches)," wrote Dr. Sheets. "They must have been many years old."

Chile peppers are rarely found in archaeological sites in Mesoamerica, so imagine the surprise of the researchers when they discovered painted ceramic storage vessels that contained large quantities of chile seeds. "One vessel had cacao seeds in the bottom, and chiles above, separated by a layer of cotton gauze," Dr. Sheets revealed. "It is possible that they would have been prepared into a kind of mole sauce." Also found were corn kernels, beans, squash seeds, cotton seeds, and evidence of manioc plants and small agave plants, which were used for their fiber to make rope rather than being fermented for an alcoholic beverage, pulque, as was done in Mexico.

A plaster cast of a chile stem with a 2-inch diameter

I emailed Dr. Sheets, hoping to discover the shape and size of the chiles and thus deduce the variety being grown. But no whole pods were found, just the seeds and some pod fragments. The size of the chile stem indicated that the plant had been grown as a perennial, but all chile plants are perennial in tropical climates and can grow to considerable size.

A polychromatic vessel like this one held stored chile seeds

Dr. Sheets wrote me back about an article by Dr. David Lentz, the botanist who had studied the plant remains, and I tracked it down in the journal Latin American Antiquity, which I found in the Zimmerman Library at the University of New Mexico.
Dr. Lentz wrote about the seeds and the pod fragments, "It appears that many of these fell from the rafters of buildings where they would have been hung for drying or storage." He added that the chile seeds from the site were the first in Central America found outside Mexico, and he speculated that the seeds in vessels were probably being saved for future planting.

The Taming of the Wild Chile

But what kind of chile was grown in Cerén? There was an intriguing clue in the article: a photograph of a chile seed compared with a bar indicating the length of one millimeter. The seed was 3.5 millimeters wide. Since the size of the seed is directly related to the size of the pod (generally speaking, the larger the pod, the larger the seed), perhaps it was possible to guess the size of the pod by comparing that ancient seed to seeds I had stored in my greenhouse.

Paleoethnobotanists, the scientists who study the plants used by ancient civilizations, have theorized that chiles were first used as "tolerated weeds." They were not cultivated but rather collected in the wild when the fruits were ripe. The wild forms had small, erect fruits which were deciduous, meaning that they separated easily from the calyx and fell to the ground. During the domestication process, whether consciously or unconsciously, early Native American farmers selected seeds from plants with larger, non-deciduous, and pendant fruits. The reasons for these selection criteria are a greater yield from each plant and protection of the pods from chile-hungry birds. The larger the pod, the greater will be its tendency to become pendant rather than to remain erect. Thus the pods became hidden amidst the leaves and did not protrude above them as beacons for birds. The selection of varieties with the tendency to be non-deciduous ensured that the pods remained on the plant until fully ripe and thus were resistant to dropping off as a result of wind or physical contact.

The domesticated chiles gradually lost their natural means of seed dispersal by birds and became dependent upon human intervention for their continued existence. Because chiles cross-pollinate, hundreds of varieties of the five domesticated chile species were developed by humans over thousands of years in South and Central America. The color, size, and shape of the pods of these domesticated forms varied enormously. Ripe fruits could be red, orange, brown, yellow, or white. Their shapes could be round, conic, elongate, oblate, or bell-like, and their size could vary from the tiny fruits of chiltepíns or tabascos to the large pods of the anchos and pasillas. But very little archaeological evidence existed to support these theories until the finds at Cerén.

An Educated Guess

It was exciting to think that perhaps we had a window into the ancient chile domestication process. Because their seeds were collected, and the plants were growing in a courtyard, the chile plants at Cerén were obviously cultivated and were more than just "tolerated weeds." It was time to break out my metric ruler and start measuring seeds. I came up with a table of varieties, ranked by seed width; the table itself has not survived in this copy of the article, but the key values are repeated in the prose below and gathered in the sketch that follows.
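As a stand-in for the lost table, here is a minimal sketch that tabulates the seed widths and pod lengths the surrounding prose preserves (Python; the chiltepín pod length is inferred from the nine-to-one length comparison quoted below, and the ancho pod is described only as "large," so treat the numbers as illustrative rather than as the article's original data):

```python
# Seed widths (mm) and approximate pod lengths (cm) recoverable from the
# prose: the Cerén seed measured 3.5 mm; chiltepín seeds are about 0.5 mm
# narrower; habanero and de arbol seeds about 1 mm wider; ancho seeds twice
# the Cerén width. Habanero and de arbol pods are the 4.5 cm quoted below;
# the chiltepín's ~0.5 cm follows from the nine-to-one length comparison.

varieties = [
    # (variety, seed width in mm, pod length in cm or None if not stated)
    ("chiltepin", 3.0, 0.5),
    ("Ceren chile", 3.5, None),  # pod length is the unknown being estimated
    ("habanero", 4.5, 4.5),
    ("de arbol", 4.5, 4.5),
    ("ancho", 7.0, None),        # pods described only as "large"
]

# Rank by seed width, as the original table did.
for name, seed_mm, pod_cm in sorted(varieties, key=lambda v: v[1]):
    pod = f"{pod_cm} cm" if pod_cm is not None else "not stated"
    print(f"{name:<12} seed {seed_mm:.1f} mm   pod {pod}")
```

Ranking the seeds this way makes the inference visible at a glance: the Cerén seed sits just above the wild chiltepín and well below every large-podded domesticate.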
The first conclusion I reached was that the Cerén chiles were small-podded. They certainly were not as large as anchos, whose seeds are twice the width of those of the Cerén chiles. They could, of course, have been chiltepíns, because the seeds were only half a millimeter wider than chiltepín seeds. But if they were somewhere between the size of chiltepíns and piquins, that would have made the pods about 1 centimeter long, less than half an inch. And since there is evidence that the chile pods had been hung up to dry with agave twine, that process would quickly dry the small-podded plants and their fruits.

The correlation between seed size and pod length is not exact. Note that the habanero, which is nine times the length of the chiltepín, has seeds only 1 millimeter wider. Also note that the de arbol variety, which is also 4.5 cm long but much thinner, has seeds only 1 millimeter wider than the Cerén chiles.

I believe that we are witnessing the early domestication process begun by the Maya people. The complete domestication of chiles from chiltepíns to anchos and the development of many varieties would not happen until the Aztec culture, nearly a millennium after A.D. 595. This is my personal theory, and I am not a paleoethnobotanist, though sometimes I wish I had studied that discipline.

The Cuisine of Cerén

In addition to the vegetable crops of corn, chiles, beans, manioc, cacao, and squash, the archaeologists found evidence that the Cerén villagers also harvested wild avocados, palm fruits and nuts, and certain spices such as achiote, or annatto seeds. In fact, Dr. Sheets observed, "The villagers ate better and had a greater variety of foodstuff than their descendants. Traditional families today eat mostly corn and beans, with some rice, squash, and chiles, but rarely any meat. Cerén's residents ate deer and dog meat."

They also consumed peccary, mud turtle, duck, and rodents, but deer was their primary meat. Fully fifty percent of the total bones found on the site belonged to white-tailed deer, and many of those deer were immature animals – giving rise to a very interesting theory. Linda Brown, who wrote the 1996 Field Season Preliminary Report entitled "Household and Village Animal Use," noted, "Cerén residents may have practiced some form of deer management. One of the deer procurement strategies the Cerén villagers may have utilized is 'garden hunting.' Garden hunting consists of allowing deer to browse in cultivated fields and household gardens where they can be hunted. While some vegetation is lost to browsing, the benefits include easy access to deer when needed."
Perhaps the Cerén women raised dog, fowl (a duck was tethered inside the Household 1 bodega), and semi-tamed deer as a contribution to the domestic and ceremonial economy.” It is always a challenge for archaeologists to reconstruct ancient cuisines and cooking techniques. The Cerén villagers did not have metal utensils, but they did have fired ceramics that could be used to boil foods. They could grill over open flames, and perhaps fry foods in ceramic pots using cotton seed oil or animal fat. They had obsidian knives that could cut as cleanly as metal. They had metates for grinding corn into flour and mocaljetes for grinding fruits, vegetables, chiles, and spices together into sauces. An artist’s rendering of what an Based on the archaeological evidence, I have devised some recipes that reflect the main ingredients used in the cooking of Cerén, adapted, of course, for modern kitchens. One of my basic theories about the history of cooking is that we should never underestimate our predecessors’ culinary sophistication, so I cannot presume that 14 centuries ago the Maya were preparing boring food. Especially since we know that they had chiles. Royal Chocolate with Chile Although this drink was served to royalty in the large Mayan cities, the discovery of chile in conjunction with cacao in Cerén indicates that even commoners knew how to make this concoction. - 1 ½ cups water - 1/4 cup cocoa - 1 tablespoon honey - 1/4 teaspoon hot chile powder, such as piquin - 1 vanilla bean pod In a pan, heat the water to boiling. Add the remaining ingredients and stir well. Serve immediately with the vanilla bean for garnish in the drink. Yield: 1 serving Heat Scale: Medium The Earliest Mole Sauce Why wouldn’t the cooks of Cerén have developed sauces to serve over meats and vegetables? After all, there is evidence that curry mixtures were in existence thousands of years ago in what is now India, and we have to assume that Native Americans experimented with all available ingredients. Perhaps this mole sauce was served over stewed duck meat, as ducks were one of the domesticated meat sources of the Cerén villagers. - 4 tomatillos, husks removed - 1 tomato, toasted in a skillet and peeled - ½ teaspoon chile seeds - 3 tablespoons pepitas (toasted pumpkin or squash seeds) - 1 corn tortilla, torn into pieces - 2 tablespoons medium-hot chile powder - 1 teaspoon achiote (annatto seeds) - 3 tablespoons vegetable oil - 2 cups chicken broth - 1 ounce Mexican or bittersweet chocolate In a blender, combine the tomatillos, tomato, chile seeds, pepitas, tortilla, chile powder and achiote to make a paste. In a pan, heat the vegetable oil and fry the paste until fragrant, about 4 minutes, stirring constantly. Add the chicken broth and the chocolate and stir over medium heat until thickened to desired consistency. Yield: About 2 ½ cups Heat Scale: Medium Venison Steak with Juniper Berry and Fiery Red Chile Sauce |Venison steaks on the grill (photos by Lois Ellen Frank) This recipe is by Lois Ellen Frank, from her book Foods of the Southwest Indian Nations (Ten Speed Press, 2002). Both the venison and the juniper berries are available from mail-order sources. Of course, grape juice or wine would not have been available to the Maya, but Lois has adapted this recipe for the modern kitchen. 
- 1 tablespoon dried juniper berries
- 3 cups unsweetened dark grape juice or wine
- 2 bay leaves
- 1 ½ teaspoons dried thyme
- 2 shallots, peeled and coarsely chopped
- 2 cups beef stock
- 6 venison steaks, 8 to 10 ounces each
- 2 tablespoons olive oil
- 1 tablespoon salt
- 1 tablespoon freshly ground black pepper
- 4 whole dried chiles de arbol, seeds and stems removed, crushed

To make the sauce, wrap the juniper berries in a clean kitchen towel and crush them using a mallet. Remove them from the towel and place them in a saucepan with the grape juice or wine, bay leaves, thyme, and shallots. Simmer over medium heat for 20 to 25 minutes, until the liquid has been reduced to 1 cup. Add the stock, bring to a boil, then decrease the heat to medium and cook for another 15 minutes, until the sauce has been reduced to 1 ½ cups. Strain the sauce through a fine sieve and keep it warm.

Brush the steaks on both sides with the olive oil and sprinkle with salt and pepper. Place the steaks on the grill and grill for 3 minutes, until they have charred marks. Rotate the steaks a half turn and grill for another 3 minutes. Flip the steaks over and grill for another 5 minutes, until done as desired. Ladle the sauce onto each plate, top with the steaks, pattern-side up, and sprinkle the crushed chiles over them.

Yield: 6 servings
Heat Scale: Medium

Pepita-Grilled Venison Chops

Here is a tasty grilled dish featuring native New World game, chiles, and tomatoes, plus pepitas, the toasted pumpkin or squash seeds. Garlic is not native to the New World, but is given here as a substitute for wild onions, which the people of Cerén would have known.

- 5 tablespoons pepitas
- 3 cloves garlic
- 1 tablespoon red chile powder
- ½ cup tomato paste
- 1/4 cup vegetable oil
- 3 tablespoons lemon juice or vinegar
- 4 thick-cut venison chops, or substitute thick lamb chops

Puree all the ingredients, except the venison, in a blender. Paint the chops with this mixture and marinate at room temperature for an hour. Grill the chops over a charcoal and piñon wood fire until done, basting with the remaining marinade.

Yield: 4 servings
Heat Scale: Medium

Three varieties of beans were found beneath the ash in the village kitchens of Cerén. Certainly they were boiled, and since they are bland, they were undoubtedly combined with other ingredients, including chiles and primitive tomatoes. The Cerén villagers would have used peccary fat for the lard and bacon, and of course would not have had cumin. But they probably would have used spices such as Mexican oregano.

- 3 cups cooked pinto beans (either canned or simmered for hours until tender)
- 1 onion, minced
- 2 tablespoons lard, or substitute vegetable oil
- 5 slices bacon, minced
- 3/4 cup chorizo sausage
- 1 pound tomatoes, peeled, seeded, and chopped
- 6 serrano chiles, stems removed, minced
- 1 teaspoon cumin (or substitute Mexican oregano)

Saute the beans and onion in the lard or oil for about five minutes, stirring constantly. In another skillet, saute the bacon and chorizo together. Drain. Combine the beans and onion with the drained bacon and chorizo in a pot, add the other ingredients, and simmer for 30 minutes.

Heat Scale: Medium

This recipe combines three Native American crops: squash, corn, and chile. Although we don't know for sure, my theory is that the Cerén villagers would have known how to use green chile. I have taken the liberty of substituting New Mexican chiles for the small Cerénean chiles, making a milder dish.
The villagers, of course, would not have used butter, milk, or cheese, but rather fat and water flavored with palm fruits.

- ½ cup chopped green New Mexican chile, roasted, peeled, stems removed
- 3 zucchini squash, cubed
- ½ cup chopped onion
- 4 tablespoons butter or margarine
- 2 cups whole kernel corn
- 1 cup milk
- ½ cup grated Monterey Jack cheese

In a pan, saute the squash and onion in the butter until the squash is tender. Add the chile, corn, and milk. Simmer the mixture for 15 to 20 minutes to blend the flavors. Add the cheese and heat until the cheese is melted.

Yield: 4 to 6 servings
Heat Scale: Medium
John Steinbeck's 1966 Plea To Create A NASA For The Oceans

In the September 1966 issue of Popular Science, author John Steinbeck made the case for giving deep-sea exploration the same attention as the space race.

ROV Deep Discoverer

Three years before the first humans landed on the moon, Nobel Prize-winning author John Steinbeck published a passionate plea in Popular Science for equal efforts to explore "inner space." In an open letter to editor-in-chief Ernest Heyn, Steinbeck argued that the investigation of Earth's oceans was critical to the success of humanity and deserved the same funding and organization as space exploration. You can read this letter as it originally appeared in the September 1966 issue of Popular Science.

Dear Ernie Heyn:

I know enough about the sea to know how pitifully little we know about it. We have not, as a nation and a world, been alert to the absolute necessity of going back to the sea for our survival.

I do not think $21 billion, or a hundred of the same, is too high a price for a round-trip ticket to the moon. But it does seem unrealistic, unreasonable, romantic, and very human that we indulge in these passionate pyrotechnics when, under the seas, three-fifths of our own world and over three-fifths of our world's treasure is unknown, undiscovered, and unclaimed.

John Steinbeck, 1962

Please believe, Ernie, that my passion for the world's seas and underseas does not lessen my interest in our space probes. When the astronauts go up in their beautiful skyrockets, my stomach goes up with them until it collides with my lungs and pushes them against my throat. I set my clock for two a.m. recently to watch a crazy scarecrow-like structure settle gently on the moon, a job of such intricacy as to stagger the imagination. But besides the sweetness and delicacy of the thinking, planning, and building, the very fact that we do it proves that human beings have not changed—we are still incurable, incorrigible romantics. We may think back with wonder on people capable of a search for the Golden Fleece, capable of picking up their lives and going on crusade in the Middle Ages—but are we any different?

The budget for getting two Americans on the moon is $21 billion, and it will necessarily come to much more. And what they will bring back will be what Dr. Urey calls a pocketful of rocks for him to analyze. If you ask an American why we want to get to the moon, he will usually say, "To get there before the Russians," and the Russians probably use the same answer—to get there before we do. Pressed further, the one polled will go off into a burble of pseudoscientific jargon and equally pseudomilitary nonsense. Dr. Urey gives a truer reason—because we are curious. And it seems to me that one of the definitive diagnostics of the human animal, besides being the key to his success in survival and triumph over the forces of nature, is curiosity.

But, while the lifeless rubbled surface of the inconstant moon becomes increasingly littered with the burnt-out bones of vehicles, the bathyscaphe has visited the deep and unknown places of the earth only a few times. There is never much argument about appropriations for space shots, but a recent request for money to explore, map, and evaluate the hidden places of our mother earth brought howls of protest from Congressional leaders and the inevitable question—is it really necessary?

Ernie, I'm going to try to put down some of the reasons why I think it is really necessary to explore the sea.
It is a pitiful few thousand years that have passed since men and women roamed the earth eating anything that didn't eat them first. Men moved as the game moved, and the game followed the grass. With the domestication of animals, the roving continued, but only following the grass. With the beginning of agriculture, the crops stood still and most men settled down. But over the years, by selection, the animals changed and the cereals changed. The grains we use today have little resemblance to their ancestor seeds, and the animals could not be recognized by their early progenitors. Man has changed the face of the earth and the inhabitants thereof, with the possible exception of himself.

But the seas he has not changed. In our relation to three-fifths of the world, we correspond exactly to Neolithic man—fearful, ignorant, and swinish. We peck like sandpipers along the edges for the small treasures the restless waves wash up. We raid the procession of the migrating fishes, killing all we can. Even the killer whale herds the sperm whales and kills them only when it needs food—but we have wiped out some species entirely. We have not improved nor changed a single species of sea-going fish. And the huge agriculture of the seas we have ignored completely, except to rip out the fringes for iodine or fertilizer.

I said that three-fifths of the earth's surface is under the seas—but, with the washing down from the continents of minerals and chemicals, it is probable that four-fifths of the world's wealth is there. More important in the near future, the plankton, the basic reservoir of the world's food, live in the sea. We have not even learned to make this boundless bank of protein food available for our bellies.

Of all the mysteries and enigmas of our world, man is the strangest and most incomprehensible. Without the pressure of cold, hunger, disease, danger from outside, and even greater danger from the quarrelsome combativeness in his own heart, it is probable that he would still be living in trees, and still eating everything he could kill or break up into bite-sized chunks. Survival has been the mother of our inventiveness. War has spawned not only weaponry, but a knowledge of mechanics in all directions. General Hap Arnold once remarked that without war we would probably never have developed the airplane, and between wars development just about ceased. We have wiped out the animal predators that once decimated embattled families. We are by way of defeating the micro-enemies which secretly invaded our bodies to strike from within. And finally we find ourselves faced with the most ghastly enemy of all—ourselves, too many of us in a world with a limited food supply. And hungry men will destroy anything, even themselves, to get food.

At the same time we have invented the cold war, a continuing state of hostility between wars, which keeps our inventiveness in the mechanics of destructiveness alive. This, too, may be the result of our uneasiness in the face of our exploding numbers. Meanwhile, the intricate and expensive skyrockets litter space with an orbiting junk pile, and we can easily justify it as a means of defense. But it is possible that we may be driven back to our mother, the sea, because we are running out of supplies.
Two men with their pockets full of moon rocks will not solve the situation. On the other hand, the planning, computing minds which so gently laid that crazy-looking scarecrow on the moon could easily design the means, not only for exploring our watery world, but for placing whole producing cities on the sea bottom. If our inventive minds were given the money and the incentive of necessity for the desalting and moving of sea water, it would be a very short time before life-giving water would flow to desert places which make up so much of our world, so that they might flower and produce.

To me, personally, the oceans mean safety, mystery, and wonder. During the depression I lived by the sea and took most of my protein food from it and lived very well indeed. I have studied the endless variety of ocean animal life—hundreds of thousands more species than are to be found on land. Several years ago I went along as an observer on the Mohole Project. You remember that was the expedition which put down a drill string to the earth's crust under 18,000 feet of water near Guadalupe Island, off the west coast of Mexico. We didn't get very far—took six cores through sediment and into the basic basalt of the earth's crust. But on the basis of those six cores, textbooks had to be rewritten. What we found was older than we expected and different from what we thought was there. But an attempt to probe farther has met strong resistance from some money-allotting members of Congress.

The men in the rockets are rather like human sacrifices, taking a part of all of us with them. Oceanology, on the other hand, is slow, undramatic, and singularly unrewarded, although the gifts it can bring to us are measureless and will soon be desperately needed. Many wonderful men are working, studying, evaluating. At this writing there is a convention in Moscow attended by most of the world's profound students and authorities in oceanography, oceanology, seismology, zoology. They have gathered to discuss and to describe what they have learned, and what they hope to learn. Experiments are going on all over the world. Cousteau has men living undersea, and so has the American Navy. Men are learning the techniques of changing pressures. Whereas the astronauts must become accustomed to weightlessness and vacuum, the undersea men must learn to endure the opposites. They receive little official encouragement.

What the exploration of the wet world lacks, and must have to proceed, is organization. Undersea study is split up into a thousand unrelated groups, subjects, plans, duplications, having neither direction nor directors. There is no one to establish the path to be followed and see that it is taken. Our space probes could not have gotten off the ground without NASA, a management for analysis, planning, engineering, and coordinating, having the power to give orders and the money to carry them out. The movement to possess the sea must be given the strength and structure to move.

We must explore our world and then we must farm it and harvest its plant life. We must study, control, herd, and improve the breeds of animals, because we are shortly going to need them. And we must mine the minerals, refine the chemicals to our use. Surely the rewards are beyond anything we can now conceive, and will be increasingly needed in an over-populated and depleting world.
There is something for everyone in the sea—incredible beauty for the artist, the excitement and danger of exploration for the brave and restless, an open door for the ingenuity and inventiveness of the clever, a new world for the bored, food for the hungry, and incalculable material wealth for the acquisitive—and all of these in addition to the pure clean wonder of increasing knowledge.

Why, Ernie, even the lawyers will have a field day. No one owns the underseas. Think of the happy thunder of argument over property rights.

For myself, I am hungry for the experience. When the next Mohole expedition goes out, I am going with it. I want to go down in the bathyscaphe to the great black depths. I can't wait. Surely all this should have at least equal backing with space.

This letter originally appeared in the September 1966 issue of Popular Science.
Among the many public health problems exacerbated by the COVID-19 crisis, opioid-related overdose deaths have increased sharply since the pandemic began. Provisional data from the Centers for Disease Control and Prevention indicate that more than 93,000 deaths resulted from drug overdose in the US in 2020, reflecting an increase of 30% compared with the previous year.1 One factor likely contributing to this trend is the reduction in access to mental health and substance abuse treatment during the pandemic, including access to medications for opioid use disorder (MOUD), previously referred to as medication-assisted therapy (MAT).2

The concept of MAT is "now considered out of date because it implies that medication is not treatment itself, although in reality, research from the past 5 years has shown that medications alone can treat addiction for many individuals," explained Ashish P. Thakrar, MD, fellow in the National Clinician Scholars Program at the Perelman School of Medicine at the University of Pennsylvania in Philadelphia.3

Compared with nonpharmacologic approaches, research consistently supports the benefits of MOUD, including fewer deaths, higher rates of sustained recovery, and greater cost-effectiveness.4-6 In a recent cohort study of 40,885 insured individuals, Sarah Wakeman, MD, assistant professor of medicine at Harvard Medical School and medical director for the Massachusetts General Hospital Substance Use Disorder Initiative, and colleagues observed drastic reductions in overdoses at 3 and 12 months (76% and 59%, respectively) in those receiving treatment with buprenorphine or methadone.4 In addition, these therapies were linked to substantial reductions in use of opioid-related acute care at 3 and 12 months (32% and 26%, respectively) vs no treatment. The other treatment options examined — including inpatient detoxification, residential treatment, naltrexone, and intensive and nonintensive behavioral health services — were not associated with reductions in overdose or opioid-related acute care use.4

Despite its demonstrated benefits, however, MOUD has been vastly underutilized since before the pandemic. In the study by Wakeman et al, only 12.5% of the sample received MOUD with buprenorphine or methadone, and 2.4% received the generally less-effective naltrexone.4 In research published in June 2021 in the Journal of Hospital Medicine while Dr Thakrar was a fellow in the division of addiction medicine at Johns Hopkins Bayview Medical Center in Baltimore, he and colleagues reported that resident internal medicine teams started only 10% of eligible patients on MOUD at their site (only buprenorphine was examined).5 After a comprehensive, resident-led training program for providers, prescriptions of buprenorphine at discharge increased to 24%.

Updates in MOUD and solutions to increase utilization were discussed in extended interviews with Dr Thakrar and Dr Wakeman.

What is the current state of MOUD in the US, and what was the impact of the COVID-19 pandemic on this essential resource?

Dr Thakrar: These medications are the standard of care and should be available to all individuals with opioid addiction. However, the reality is that, in any given year, less than one-quarter of patients with opioid addiction are actively in treatment with 1 of the 3 medications approved by the US Food and Drug Administration (FDA) for OUD: buprenorphine, methadone, and extended-release naltrexone.
Access to these treatments is highly variable depending on geographic location, insurance coverage, and treatment setting. During the COVID-19 pandemic, federal agencies made 2 important regulatory changes regarding MOUD. First, they allowed prescribers to start buprenorphine via telemedicine, whereas they were previously required to have an in-person appointment with patients to prescribe this drug.7 Second, they allowed methadone clinics to provide more take-home medications to patients earlier in recovery. Research is ongoing to determine the impact of these changes, but early reports indicate that they were crucial in giving patients flexibility and increasing access to these life-saving medications.

Dr Wakeman: Despite decades of evidence demonstrating that medication treatment with buprenorphine and methadone is far and away the most effective treatment we have, most people with OUD are not treated with these lifesaving medications, and most providers don't provide these treatments. These situations occur in part because of how we have chosen to regulate these medications, as methadone and buprenorphine treatment requires additional licensure, which has created this opt-in system whereby most providers aren't able to offer these therapies. Practically, this means that there are vast treatment deserts and counties without a single buprenorphine prescriber, resulting in people having to travel far distances to find an opioid treatment program for methadone treatment. Recent changes that allow prescribers to get an X waiver without taking the 8 to 24 hours of previously required training are a step in the right direction, but this is far from the sweeping changes that are needed to ensure these medications are available to all who need them.8

What are believed to be the reasons why MOUD is so vastly underutilized?

Dr Thakrar: There are 3 major barriers to increasing the use of MOUD: stigma, misunderstanding of these medications, and restrictive regulations. Too many clinicians and providers attribute addiction to poor willpower or a moral failing instead of to a chronic medical condition. As a result, they fail to realize that addiction is treatable. Many providers are unfamiliar with these medications and do not feel comfortable starting them; others fail to realize that these are long-term medications that benefit patients for years. Last, restrictive and confusing regulations around these medications make it difficult to know when providers are legally permitted to start them.

Dr Wakeman: There are several barriers. First, policies regulating these treatments have created a system that makes these lifesaving medications a scarce resource while we are in the midst of an overdose epidemic, which is exactly the opposite of what we should be doing. We should be creating a system where it is easy for both providers to offer the most effective treatments and patients to access these medications. Instead, it is far easier to access illicit opioids than it is to access medication treatment. The next barrier, which policies help enshrine, is stigma. Stigma exists among providers, many of whom haven't received any education in addiction and therefore carry the same biased beliefs as the general public. Rather than seeing addiction as a health condition, many providers and the public see this as an issue of bad behavior or, even worse, a criminal legal issue.
We may say that this is a health issue now, but our policies and treatment models still reflect a deep and abiding belief that people who use drugs are untrustworthy, bad, and different. Therefore, our systems reflect a general punitive model full of the tropes that many have heard in the mainstream media and society: that we should be practicing "tough love," that we shouldn't be "enabling" people, or that they need to "hit bottom." All of that is completely false, yet these notions continue to permeate the addiction treatment system and general medical providers' beliefs about addiction.

The other way stigma plays out is in antipathy toward medications specifically. In part because of a misunderstanding of what addiction is, people erroneously believe that treatment with medications like methadone or buprenorphine is "addictive." This confuses physical dependence with addiction. Addiction is defined as compulsively using a substance despite harm. Taking a daily medication that allows you to be healthy, to work and parent, and not die from overdose does not meet this definition. If needing a medication every day were the same as addiction, then anyone who takes thyroid medication, insulin, or antidepressants would be "addicted."

On a related note, what is needed to ensure that more people in recovery and those close to them receive naloxone kits and training?

Dr Wakeman: First, we could make naloxone available over the counter. Second, we could ensure that prices stay low and that price gouging by the pharmaceutical companies is prohibited. Lastly, we could make it standard practice that naloxone is prescribed to any person with OUD and their family and friends, as well as available in community-based settings, similar to defibrillators.

In addition to MOUD, what other types of treatment and support are important to increase the odds of long-term recovery in these individuals?

Dr Thakrar: Care for OUD needs to be personalized. Some patients transition to recovery with medications alone, but many need or benefit from psychotherapy, case management, housing assistance, peer support, or some combination of these services. It is also important to remember that even patients who do not desire abstinence deserve compassionate, evidence-based care to reduce the harms of using drugs. This includes harm reduction services such as syringe exchanges, distribution of naloxone to reverse overdose, overdose-prevention sites (also known as safe consumption sites), and housing-first provisions that do not mandate abstinence as a requirement for housing.

Dr Wakeman: Medication is by far the most effective treatment for OUD. Additional psychosocial treatments should be available but never required. Recovery supports like recovery coaching or community-based options like recovery community centers and mutual help can be helpful but also should be voluntary. Nontreatment supports that address social determinants of health are absolutely vital and under-resourced. Housing is crucial. It is incredibly difficult to engage in treatment for any chronic condition if you are dealing with daily trauma, poverty, racism, and being unhoused. Addressing those broader barriers must be a part of comprehensive solutions to the overdose crisis.

Disclosure: As noted in her paper, Dr Wakeman received personal fees from OptumLabs during the study described herein.

1. Steenhuysen J, Trotta D. US drug overdose deaths rise 30% to record during pandemic. Reuters. July 14, 2021. Accessed online July 26, 2021.
2. World Health Organization. COVID-19 disrupting mental health services in most countries, WHO survey. October 5, 2020. Accessed online July 26, 2021.

3. Capurso N. Toward accurate terminology for opioid use disorder treatment. Comment on: Fairley M, Humphreys K, Joyce VR, et al. Cost-effectiveness of treatments for opioid use disorder. JAMA Psychiatry. 2021;78(7):767-777. doi:10.1001/jamapsychiatry.2021.0247

4. Wakeman SE, Larochelle MR, Ameli O, et al. Comparative effectiveness of different treatment pathways for opioid use disorder. JAMA Netw Open. 2020;3(2):e1920622. doi:10.1001/jamanetworkopen.2019.20622

5. Thakrar AP, Furfaro D, Keller S, Graddy R, Buresh M, Feldman L. A resident-led intervention to increase initiation of buprenorphine maintenance for hospitalized patients with opioid use disorder. J Hosp Med. 2021;16(6):339-344. doi:10.12788/jhm.3544

7. Kosten TR, Petrakis IL. The hidden epidemic of opioid overdoses during the coronavirus disease 2019 pandemic. JAMA Psychiatry. 2021;78(6):585-586. doi:10.1001/jamapsychiatry.2020.4148

8. Substance Abuse and Mental Health Services Administration. FAQs about the new buprenorphine practice guidelines. Accessed online July 26, 2021.

This article originally appeared on Psychiatry Advisor.
Three Stages of Disaster Response

Denial, Deliberation, The Decisive Moment

In her book on disaster survival, Amanda Ripley (2008) identifies the common response patterns of people in disaster situations. She argues that three phases of response are commonly seen. These are denial, deliberation, and the decisive moment. Each of these stages is discussed below.

Contrary to the common perception of people panicking and stampeding during a disaster, Ripley found that it was more common for people to deny that a disaster was happening. The investigation completed by the National Institute of Standards and Technology (2005) into the collapse of the World Trade Center towers on 9/11/2001 found that, on average, people on the lower floors of World Trade Center I waited three minutes to start evacuation, and those closer to the impact floors waited an average of five minutes before they started evacuating. The occupants often indicated that they spent this time speaking to others about what was happening and gathering belongings. Clearly, this delay could have led to many more deaths had the fires caused by the impact been more severe or spread more quickly.

When people did start to evacuate, they did not panic or stampede (NIST, 2005; Ripley, 2008). They moved purposefully to the fire exits and left in an orderly fashion. This is despite the fact that they had heard an enormous explosion that shook the building and despite the presence of smoke and fire on many floors. Ripley attributes this to normalcy bias. That is, our brains tend to interpret information as if it is part of our everyday experience. Because of this, people tend to underestimate both the likelihood of a disaster and its possible effects. It takes time for the brain to process the novel information and recognize that the disaster is a threatening situation.

Human Brain vs. Lizard Brain

For the purpose of this training, we will view the brain as having two basic operating systems: the Human Brain and the Lizard Brain. The Lizard Brain corresponds to the older, more primitive brain structures (the emotional brain), whereas the Human Brain corresponds to the more modern brain structures (the rational brain). All animals have a lizard brain system; humans have the most developed rational brain of any animal. When information is received from one of the senses, it splits into two streams: one feeds into the rational system and one into the emotional system.

One of the advantages of the Lizard Brain is that it is fast. The drawback to this speed is that it is limited to a set of pre-programmed responses. It is effortless: there is no need to think when the Lizard Brain is controlling something. It just happens. When we are conscious of what is occurring, we are using the Human Brain. The big advantage of this system is that it is flexible. The Human Brain allows us to learn, weigh options, and develop plans. This comes at the cost of speed: when it comes to reaction time, the Human Brain is much slower than the Lizard Brain. It also takes effort to engage the Human Brain, and the Human Brain does not function well under stress.

Our brain has a series of alarm systems that activate to prepare us to deal with a threat. A loud noise, for example, may activate our startle reflex (which might cause us to flinch). In cases where a loud noise is a threat, the startle reflex starts the process of getting us ready to act.
As the series of alarms is activated (heart rate, breathing, blood flow to large muscles), our body becomes focused on the threat. These changes make us faster, stronger, and more focused. This process is largely in the Lizard Brain. As stress mounts, the ability to think rationally decreases. Given enough stress, everyone becomes stupid. This is our Human Brain shutting down and our Lizard Brain taking the lead. At high levels of stress, people can only do that which is pre-programmed into the Lizard Brain. For many people, these actions are limited to fighting, freezing, or fleeing.

You may also experience several common sensory side effects of high stress levels that many police officers report experiencing during deadly force encounters:

Tunnel Vision - your field of focus may narrow to only the most immediate threat and you may not see peripheral details.
Audio Exclusion - you may stop hearing what is happening.
Time Dilation - things may seem to move in slow motion.
Out of Body Experiences - you may feel as if you are outside of your body watching the event happen.
Reduced Motor Skills - you may experience reduced efficiency of your fine motor skills.

These are side effects of your stress response system preparing your body to deal with a threat. These Lizard Brain responses to threats developed during a time when man's most likely threat was going to come from a single source that needed to be dealt with physically and immediately. For example, a tiger jumps out of the bushes in front of an early human; the human needs to either fight it or flee. The threat environment faced by people today is often much more complicated. A person might face situations where they are confronted with multiple armed suspects and innocent victims. While the Lizard Brain has its uses, it is clear that the Human Brain is needed in many of the dangerous situations that a person may face today. The following section tells how to keep the Human Brain functioning longer.

Use Willpower: In the case of a violent encounter, the Lizard Brain is setting off a variety of panic alarms. By exerting willpower, a person is trying to get the Human Brain to override these alarms. This can be done, but it takes conscious effort. Willpower is, however, a limited resource. It can prevent or delay some stress responses, but eventually it will fail. Combat Breathing can help here: breathing through the nose for a three count, holding the breath for a two count, breathing out for a three count, and then pausing for a two count before beginning the next breath has been shown to lower people's heart rates dramatically for a short period of time and can help circumvent the Lizard Brain alarms.

Take Care of Yourself: Research shows that people who are more fit are also generally more able to cope with stress. This may be due, in part, to a fit person's regulatory system being better able to deal with the physiological swings caused by stress, and in part to willpower, since exercising requires the use of willpower. Improved diet and sleep habits can also reduce your basic stress level.

Act: Freezing is almost always the wrong response. It leads to a feeling of helplessness. When people feel helpless, their stress levels increase, which further hinders functioning. Taking action - any action - can help give a sense of control and help reduce the stress response.
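To make the cadence concrete, here is a minimal sketch of the combat-breathing pattern in Python. The 3-2-3-2 counts come from the text above; the pacing function, the one-second count length, and the default of four cycles are illustrative assumptions, not part of any official protocol.

```python
import time

# Combat-breathing cadence from the text: inhale for 3 counts,
# hold for 2, exhale for 3, pause for 2. Count length and number
# of cycles are assumed values for this example.
PATTERN = [("inhale", 3), ("hold", 2), ("exhale", 3), ("pause", 2)]

def combat_breathing(cycles: int = 4, seconds_per_count: float = 1.0) -> None:
    """Pace the user through the 3-2-3-2 breathing pattern."""
    for cycle in range(1, cycles + 1):
        print(f"Cycle {cycle}:")
        for phase, counts in PATTERN:
            print(f"  {phase} for a {counts} count")
            time.sleep(counts * seconds_per_count)

combat_breathing()
```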
When the Human Brain is Compromised

No matter how much willpower you have, how good a shape you are in, or how much you have planned or prepared, you will run into situations where your Lizard Brain overwhelms your Human Brain, at least for a short time. Prepare for this. Here are a few tips:

Shift the Emotion: When experiencing feelings of panic and fear, it is easier to shift the fear response to anger than it is to restore control. Don't get scared. Get mad at the offender!

Prepare Critical Skills to Function when the Human Brain is Not: Train a skill to the point where it is automatic. Once you master a skill, it becomes a Lizard Brain function and requires no Human Brain activity.

Observe, Orient, Decide and Act (The OODA Loop): You must first see (observe) what is happening. Next you must position yourself to respond (orient). Following orientation, you must determine a course of action (decide); and finally you must perform that act (act).

Use your Human Brain to Develop Scripts: Finally, you can use your Human Brain when you are NOT under stress to think about what you should do in a stressful situation. It is possible to think through likely scenarios and the appropriate responses to those scenarios to prepare action scripts. When under stress, you can then access these prepared plans. The plan you have thought through beforehand is also likely to be of better quality than one you come up with on the spot. One of the cool things about your Human Brain is that you can use it to program your Lizard Brain when you are not under stress. This will ensure that your Lizard Brain will do a better job when you are under stress.

At this point, people in a disaster have to decide what to do. If a person does not have a preexisting plan, this creates a serious problem, because the effects of life-threatening stress on your bodily systems severely limit your ability both to perceive information and to make plans.

Making Decisions Under Duress: Stress increases heart rates. This can have adverse effects on our ability to make appropriate decisions in life-threatening situations.

Condition White (60 Beats Per Minute): Normal resting heart rate. This condition usually occurs when you are in a comfortable and secure environment.

Condition Yellow (90 Beats Per Minute): Fine motor skills begin to deteriorate. This condition occurs when your body is at a heightened state of alert.

Condition Red (120 Beats Per Minute): Complex motor skills deteriorate; peak physical performance in gross motor skills. You are stronger, faster, and will bleed less. An attack is imminent or in progress.

Condition Grey (150 Beats Per Minute): Cognitive processing deteriorates; tunnel vision, auditory exclusion, time dilation. The environment is becoming overwhelming.

Condition Black (175 Beats Per Minute): System overload, freezing, voiding of bowels and bladder.

The Decisive Moment

Once a decision has been made, act quickly and decisively. Failure to act quickly can result in you remaining in a position to be injured or killed during an active shooter event. It is important to know your surroundings when you find yourself in a dangerous situation. The faster we can get through the phases of Denial and Deliberation, the quicker we will reach the Decisive Moment and begin to take action that can save your life and the lives of those around you.
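As a rough illustration of the color-code scale above, the following Python sketch maps a heart rate to its condition. The BPM thresholds are taken from the listing; treating each condition as beginning at its listed heart rate is an assumption made for this example.

```python
# Thresholds from the condition scale above; each tuple is
# (minimum BPM, condition label). Order matters: highest first.
CONDITIONS = [
    (175, "Black - system overload, freezing"),
    (150, "Grey - cognitive processing deteriorates"),
    (120, "Red - complex motor skills deteriorate"),
    (90,  "Yellow - fine motor skills begin to deteriorate"),
    (0,   "White - normal resting state"),
]

def stress_condition(bpm: int) -> str:
    """Return the first condition whose threshold the heart rate meets."""
    for threshold, label in CONDITIONS:
        if bpm >= threshold:
            return label
    return CONDITIONS[-1][1]  # guard; unreachable for bpm >= 0

print(stress_condition(72))   # White - normal resting state
print(stress_condition(130))  # Red - complex motor skills deteriorate
```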
The Canadian Action Party stands for the use of the Bank of Canada as an independent entity that would be responsible for regulating and sustaining our Canadian economy. Please read our historical overview to better understand the key role that a central and national bank can play in developing the economy of Canada.

The Formation of the Bank of Canada

Until the Bank of Canada (BoC) opened in 1935, the Treasury Board, which administered the Finance Act of 1923, worked as an independent entity, isolated from the needs of Canadian society. The Treasury Board had no mandate to ensure that the advances made to the banks answered the needs of the economy. Subsequently, during the Great Depression, Canadians saw the unsatisfactory nature of this arrangement. In 1934, our Canadian Parliament passed the Bank of Canada Act, and Parliament founded the Bank of Canada a year later. Since 1938, a single shareholder has owned the Bank of Canada: our federal government (i.e. the Canadian taxpayers).

The Use of the Bank of Canada, 1938-1974

The "nationalization" of 1938 perfected a mechanism that allowed Canada's central bank to create money to finance federal projects on a near interest-free basis. The Bank of Canada may thus make loans to the Government of Canada or any province, or loans guaranteed by Canada or any province (BoC Act, Article 18(c), (i), (j)). See Article 18 for a full explanation.

Initially, the Bank of Canada fulfilled its mandate. First, our bank was of great assistance in getting Canada out of the Great Depression. In later years, we financed the war efforts through the Bank of Canada. Our bank was also responsible for building infrastructure and social systems in Canada into the early 1970s. However, in this time period, worldwide financial policies began to change in response to international political strategies.

Central Banking Strategies

Until the late 1960s, central banks like the Bank of Canada held inflation, i.e. general price rise, in check by regulating the chartered banks. Canadians may recognize the names of a few of these chartered banks: the Royal Bank, Scotiabank, and the Bank of Montreal. Like other central banks, the Bank of Canada used one or a combination of three main tools in its efforts to regulate financial activity in Canada.

First, our central bank raised the rates for the overnight loans that the chartered banks needed to meet their net cheque-clearance or other financial obligations. The chartered banks had to balance their books at the end of each day, and they paid interest to the Bank of Canada on the funds required for this daily balancing process. Clearly, the Bank of Canada required and encouraged that chartered banks be accountable for all of their funds. When a chartered bank lent money, it had to have these funds on hand. The funds on which they charged interest to customers had to be real; chartered banks could not use imaginary funds for loans.

The Statutory Reserve Requirement

Second, central banks regulated the chartered banks by raising what we call the statutory reserve requirement. This is the percentage of deposits made with the banks by the public that the banks had to redeposit with the Bank of Canada to back their chequing and other short-term accounts. Such redeposits earned the banks no interest, but, again, here we see another central banking procedure that encouraged the chartered banks not only to be accountable, but also to work within the resources which they had available.
Third, the central bank regulated the chartered banks by "jaw-boning": it advised the chartered banks of regions or industries where the central bank did not want bank credit increased or even maintained at its present level. One could imagine that this regulating technique did not make the Bank of Canada too popular with some businesses or regions of Canada, but it does make one wonder why the chartered banks needed these kinds of regulations in the first place. What difference does this make for Canadians today?

A New Global Monetary Policy

In the 1970s, the monetary policy of Monetarism was adopted. (Monetarists hold that the money supply alone determines price, and just about everything else!) At the same time, central banks worldwide began attempting to control inflation by reining in the money supply. Instead of judiciously augmenting the money supply, the central banks worldwide did the opposite. The actions of the central banks disregarded the inevitable effects on interest rates. Economic growth was stymied. The chartered banks had to work within a framework that restricted economic growth.

A Loss for Canada

Canadians worked within this declining system until the early 1990s, when the longevity of the Bank of Canada came under attack. In mid-1991, without debate or press release, Parliament quietly adopted a bill that phased out the statutory reserves over a two-year period (subsection 457 of Chapter 46 of the Statutes of Canada). This bill left higher interest rates as the only means of "fighting inflation". Read on to discover the ramifications of the loss of our central bank as a regulator of our economy.

Big Business Gains a Foothold

Interest rates, however, happen to be the revenue of money-lenders. As the sole way of fighting price rise, or inflation, what a conflict of interest! What a way to drain Canadian resources into the cache of the few (see the social lien section).

The Bank of Canada Suffers a Death Blow

At the same time, a campaign was launched to enshrine the independence of the central bank from the government. Essentially, this campaign decreased the ability of the Bank of Canada to enact any regulatory functions. Yet the Bank of Canada Act sets forth that all shares are owned by the federal government and that, in the event of a disagreement on broad policy between the Governor of the Bank of Canada and the Minister of Finance, the latter shall have the right, after thirty days' written notice to conform, to dismiss the Governor. If that does not add up to the good old capitalistic definition of ownership, i.e. non-independence, what does?

"Zero Inflation" Runs Canada into Debt

As well as removing the Bank of Canada's ability to act as an independent agent, the federal government proclaimed that "zero inflation," a perfectly flat price level, was essential for our economy. Most of Canada's federal debt was run up in the attempt to enforce these provisions, which contradicted the Bank of Canada's charter to finance our country's debt. Such contradictions, however, did not deter Mr. Crow, and subsequent Bank of Canada Governors, from pursuing like policies to this day! What can they do? They are merely following orders.

Two Unbelievable Facts!

As we keep this historical picture in mind, it is important that we now consider two unbelievable facts. They are so astonishing that most people simply won't believe them! Indeed, they really do defy the imagination!

Unbelievable Fact #1: How Money is Created

Money is created out of nothing.
Myth: it's based on gold. Not so! The Gold Standard was abandoned years ago. Well, it's not quite created "out of nothing": it's created out of a faith based on the credit of a nation; otherwise, it would be worthless. If I give you a $20 bill, you believe (have faith) that you can use it as a medium of exchange to buy other goods or services. Moreover, there are two ways to create money (out of essentially nothing).

Two Ways to Create Money

(1) GCM (Government Created Money) is created by the federal government. People understand this method. Most people, when asked, would say, "Well, the government creates money." That's true. But how MUCH of the money supply each year does the government create? About 5%. That's all. So who creates the rest?

(2) BCM (Bank Created Money): the private banking system creates money. How does the private banking system "create" money? Simple! But unbelievable! Bear in mind that MONEY IS CREATED OUT OF NOTHING. So, when you take out that $30,000 loan at your bank for a new truck, that amount is typed into your bankbook. Seconds earlier it didn't exist! Now YOU owe that money TO the bank, plus interest!

Myth: the money for your loan is somehow "backed" by deposits on hand in the bank where the loan is made. Not so! You, as a citizen or a business, don't have a choice. Much though you might like to, you can't create money. You have to borrow your money from the private banks. These chartered banks continually create money out of nothing and subsequently make huge amounts of profit out of nothing.

Unbelievable Fact #2: The Government's Choice

But governments have a choice! The federal government can EITHER create its own debt-free and interest-free money (GCM) OR borrow it AS debt, and AT interest, from the private banks (BCM). The provincial and municipal governments can choose to borrow, at low interest rates, from EITHER the Bank of Canada OR the private banking system at substantial interest rates. GUESS WHICH CHOICE OUR GOVERNMENTS MAKE? YOU GUESSED IT! Some 95% of our money is created as BCM (Bank Created Money).

You may say, "So what? Some abstract argument about 'how money is created' doesn't affect me, anyway." Oh yes it does! You'd better believe it!

In conclusion, the Canadian Action Party has reviewed the historical record and has presented a case study above. We agree that a prudently managed central Bank of Canada is essential for Canadians. An active Bank of Canada will rein in our federal debt. Its regulatory abilities will mean fewer funds for multinational companies such as those who currently own our chartered banks. A strong public central bank is a plus for Canada. Canada will be able to focus her finances on infrastructure and business development that will benefit the larger part of Canadians, who pay the larger part of taxes.

For a comprehensive review of the value of a country having a strong central banking system, it is recommended that you read Ellen Brown (2013), "From Austerity to Prosperity: The Public Bank Solution," Third Millennium Press: Baton Rouge, Louisiana. ISBN 978-0-9833308-6-8.
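To make the arithmetic behind the reserve requirement and bank-created money concrete, here is a minimal Python sketch of the standard textbook money-multiplier model, in which each loan is redeposited and each bank keeps the required reserve. The 10% reserve ratio and $1,000 initial deposit are hypothetical numbers chosen for illustration, not figures from the Bank of Canada Act.

```python
# Simplified money-multiplier model: a bank keeps the required
# reserve from each deposit and lends out the rest, which is
# redeposited at the next bank, and so on.

def total_deposits(initial_deposit: float, reserve_ratio: float,
                   rounds: int = 100) -> float:
    """Total deposits created after repeated rounds of lending."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # portion lent out and redeposited
    return total

print(round(total_deposits(1000.0, 0.10), 2))  # ~9999.73
print(1000.0 / 0.10)                           # closed-form limit: 10000.0
```

Note what the model implies as the reserve ratio shrinks toward zero: the limit, deposit divided by ratio, grows without bound, which is one way to read the significance of phasing out the statutory reserves in 1991.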
Corn milled like flour was crumbled with 5% butter containing a high level of conjugated linoleic acid, then kept exposed to air on an aluminium tray in a layer 1 cm thick. Its acid number, peroxide number, and fatty acid composition were measured weekly. It was established that during a 24-week period there was very little change in the composition of fatty acids, but after this, in parallel with the increasing acid number and peroxide number, the amount of unsaturated fatty acids decreased, while the values for saturated fatty acids did not change considerably. With these investigations, the authors demonstrated the antioxidant effect of conjugated linoleic acid.

The composition of fatty acids in food products is a significant factor in human health, and feeding can significantly influence the composition of fatty acids in animal fat. We analysed the effect of feeding feed with a high CLA (conjugated linoleic acid) content on the composition of fatty acids in pork. The animals were grouped as follows: Group 1) fed experimental, ghee-mixed feed for 76 days; Group 2) fed the same feed, but only for 33 days; Group 3) fed sunflower-oil-mixed feed for 76 days. Ghee contains CLA in high amounts. The aim of our experiment was to analyse how the high CLA content influences the fatty acid content of pork. At the end of the fattening experiment the animals were slaughtered; samples were then taken from the loin, ham, abdomen, and backfat of 10 animals from each group, and their fatty acid content was analysed. We found significant differences between the average fatty acid contents of the samples. As an effect of feeding the ghee-enriched feed, the CLA content increased significantly compared to the control group. However, the linoleic acid and arachidonic acid contents were lower, as was the proportion of these fatty acids when the control feed was fed.

In the 1990s, the antiatherogenic, antioxidant, and anticarcinogenic effects of conjugated linoleic acids (CLA) were detected. For these reasons, our aims in this study were to produce pork rich in CLA and to study the change in fatty acid composition of the produced pork when cooked in different kinds of fats. Palm oil, sunflower oil, and swine fat were used for frying. Thigh was cut into 100 g pieces, and the meat pieces were fried at 160 °C for 1 and 8 minutes. From the frying data it was determined that the higher (0.13%) CLA content of the pork was spoiled (by 60-70%) except in the case of cooking in swine fat, because CLA is extremely sensitive to oxidation and heating. Swine fat has a higher (0.09%) CLA content than plant oils, protecting the meat's original CLA content. Cooking in swine fat did not have a significant effect on the fatty acid composition of the meat. The low palmitic acid content of sunflower oil (6.40%) decreased the palmitic acid content of the pork (24.13%) by half and produced cooked meat with a decreased oleic acid content. Contrary to the above, the linoleic acid content of the fried meat increased severalfold compared to the raw pork. Frying in sunflower oil, with its high linoleic acid level (51.52%), increased the linoleic acid content of the fried pork. The linoleic acid content of the high-CLA pork increased fourfold (to 48.59%) compared to the raw meat (16.59% and 12.32%). The high palmitic acid content of palm fat (41.54%) increased the palmitic acid content of the fried pork by 60%, while its low stearic acid (4.44%) and linoleic acid (10.56%) contents decreased the stearic and linoleic acid contents relative to the raw meat.
The aim of our investigation was to determine the effects of increased PUFA (polyunsaturated fatty acid) content on the colour, total pigment content, organoleptic characteristics, and oxidative stability of poultry meat. The experiment was carried out with 1200 Ross-308 male chicks. The animals were fed a 3-phase diet, and in each phase additional fat was added to the feed. The isocaloric and isonitrogenic feed was produced as the breeder organization suggested; only the fat content differed (4 treatments: pig fat (lard), sunflower oil, soy oil, flax-seed oil). The different fat complements did not influence broiler production. However, the fatty acid composition of the meat was similar to the fatty acid composition of the feed (the additional fats). The analyses of meat samples after a storage period did not significantly demonstrate the possible negative effects of the higher PUFA content.

The authors examined the behaviour of twenty-eight group-housed Hungarian Large White x Hungarian Landrace gilts in a grassland-based production system. Social rank was recorded on two consecutive days; it was unaffected by either the age or the weight of the gilts. The daily life rhythm was recorded on four different days in a two-week period; 30% of the whole paddock was used by the pigs specifically as a resting and excretion area.

The aim of our investigation was to determine the effects of increased α-linolenic acid content in feed on the colour, total pigment content, organoleptic characteristics, and oxidative stability of poultry meat. The experiment was carried out with 1200 Ross-308 male chicks. The birds were fed three-phase diets containing four different fat sources: lard, sunflower oil, flaxseed oil, and soybean oil. According to the experiment, the different oil sources had no effect on growth performance, but the fatty acid composition of the diets was reflected in the meat's fatty acid profile. We could detect only a slight change in colour in the treated meat, which was not caused by decreased pigment content. The change in colour detected during storage was not related to the initial PUFA content. TBA levels did not confirm the accelerated lipid peroxidation that was expected in meat with a higher α-linolenic acid content. The data obtained in the meat storage trial could not clearly prove the negative effect of the higher α-linolenic acid content of the meat.

The experiments were carried out as 2x2 factorial treatments with three replicates, and were completed with 32P phosphorus metabolism measurements. Hungarian Large White x Dutch Landrace growing pigs with 15-18 kg starting live weight were involved in the experiment. The experimental scheme was the following: the diet consisted of maize and extracted soybean meal, both components with high phytate content and low phytase activity. The 1/a animals received their P supply according to their needs, and the 1/b animals got 10% less than their actual P need in the first part of the experiment. In the second part of the experiment, both groups (2/a, 2/b) received identical P supply and a 500 FTU/kg phytase supplementation. Apart from the P and phytase supplementation, the piglets' diet was identical. Total P digestibility was 52% without phytase supplementation; it increased by 4% when P was supplied according to need and by 12% with the decreased P supply. The digestibility of nutrients somewhat increased as an effect of phytase supplementation.
According to the results of the 32P experiments, the inorganic P digestibility of MCP was 82-90.8%, which decreased to 73.4-87.2% in the case of phytase supplementation. In parallel with this tendency, the native P digestibility of the diet was 31.5-32.2%, which increased to 42.5-54.5% in the case of phytase supplementation. The results support the conclusion that inorganic P input can be decreased by phytase supplementation and that, as a consequence, P output and environmental pollution can at the same time be decreased.

The authors examined the nutritional value of the meat of shot wild boars (wild pigs) (n=66) from three wild boar enclosures with different feeding intensities, and also the technological properties of the meat. Samples were taken immediately after evisceration. Considering the storing and processing properties of game meat, the samples were taken from m. serratus anterior. As for the dry matter results, the highest values were measured in the semi-intensively fed wild boars, followed by the data from the samples of the intensively and extensively fed wild boars. The fat content of the meat samples of the intensively and extensively fed wild boars proved to be lower, while in the semi-intensively fed wild boars it was higher. In females the dry matter content was higher, while in males the fat content was higher. As for the protein content, there were no differences in either the feeding groups or the genders. Only the water-holding capacity of the meat samples from the females of the semi-intensive-feeding enclosure fell within the normal range.

The applied technology is an alternative approach to pig-keeping systems. In outdoor pig production, breeding sows are kept on pasture either year-round or for a certain period of the year. The important pieces of equipment in outdoor pig production are huts for farrowing or grouping sows, which protect the pigs against the effects of extreme weather, and electric fences, which surround and divide the pasture. Concentrate feed can be fed from the ground or from feeders made of steel or timber. One of the main advantages of this pig-keeping system is that the keeping technology is fully mobile. Within the scope of the study, we are performing an experiment to compare a conventional system with a free-range sow-keeping technology. Pannonhybrid F1 gilts were used in this experiment: 28 gilts were kept on pasture all day and 28 gilts were kept in a conventional, indoor system. In this work the results of gilt rearing are presented as part of our two-year experiment.
<urn:uuid:6232e8b3-9930-455b-90f5-99e7744b2211>
CC-MAIN-2021-43
https://ojs.lib.unideb.hu/actaagrar/search?authors=J%C3%A1nos%20Gundel
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00591.warc.gz
en
0.964202
2,070
2.59375
3
Section 1: Contextual Overview of the Development of the English Legal Profession Before a full sketch of the history of the Training Contract can be drawn, it is necessary to provide a brief introduction to the development of the English legal profession as a whole. From the mid-12th Century, there existed a Bench of learned men at Westminster who were an extension, and administrators, of the King’s justice and heard legal pleas. After a few decades, they decided to travel the realm and administer justice locally, and naturally their number grew. The development of anything that could be called a ‘profession’ was exceedingly slow at this time because ancient principles, such as a pleader having to appear and speak on his own behalf, hindered anyone else from speaking for the pleader and therefore representing him. However, some representatives were permitted and a few select names began to appear regularly on the records of the King’s Bench. At a very general level, circa 1250, there were two types of professional appearing: (1) a class of Serjeants at Law who presented the pleader’s case and responded to any argument that arose out of it, and (2) Attorneys who appeared on behalf of a claimant and spoke for him. The Serjeants’ workload became focused on appearing in court, whereas professional Attorneys handled the managerial, preparatory side of affairs. These broad distinctions developed over the centuries into what we now know as Barristers and Solicitors. The profession was taking shape, and the 1275 Statute of Westminster imposed penalties on lawyers who were found to be deceitful, an early sign of regulation. It took time for the above distinctions to be clarified through practice, and throughout the 1300s there was a group of students who learnt the ways of the court, although they were not attached to any man in particular but to the Court itself. In the 1400s, we see the word Solicitor specifically used, and there was much work in the most used Court (the Common Pleas) due to the proliferation of litigation and the increase in types of action; therefore the profession grew. The role of Attorney still existed but the two roles overlapped significantly, although the role of Attorney was not officially abolished until 1873. Attempts were made in the 1500s to regulate this new branch of professionals but few regulatory inroads were made. From 1590 to 1630 in particular, certain judges attempted to eliminate the profession as it was seen as less honourable and gentlemanly than the role of Barrister. Their attempts failed and, by the early 1600s, Solicitors were most certainly a distinct profession in their own right. Around this time, men began to operate as Solicitors in partnership with each other, and as their businesses grew it became customary for new entrants to the profession to work and learn under these Solicitors as ‘Articled Clerks’. These Articles were effectively contracts that bound the Clerk to their master for a certain length of time. Certainly, by the 1630-50s, it was a strong convention that these Articled Clerkships had to be undertaken, but, as of yet, there was no regulation or law on the issue. It is at this moment that we can pinpoint the early beginnings of what are now known as Training Contracts. Section 2: Specific Development of the Solicitors’ Training Contract Things slowly and ponderously developed, as they always seem to along the winding path of English Legal History.
Until the Attorneys and Solicitors Act 1728, it was not required by law that there be a central record of practising Solicitors; however, some Courts did keep Books of Attorneys for their own purposes before this time. The Act specified that after the 1st December 1730, no man could practise as a Solicitor unless his name was on the Roll, and, significantly, no man could practise as a Solicitor unless he had undertaken an Articled Clerkship for a term of at least 5 years. Further regulation came into place over the coming years, such as the Continuance of Laws Act 1748, which specified that Articled Clerks, on completion of their Articles, had to file a statement to this effect at the Court within 3 months. This time limit was later increased to 6 months. By 1843, pressure led to the reduction of the Articled Clerkship to a term of 3 years if you had graduated with a degree from the Universities of Oxford, Cambridge, Dublin, Durham or London – as you were deemed of a higher calibre. The 1785 to 1867 Register of Articled Clerkships shows the vast majority of terms being 5 years, but they are occasionally higher at 6 or 7 years and some at 3 years. One example recorded was for a mere 10 months. In 1785, 129 Articled Clerkships (in the busiest court, the Common Pleas) were registered, which we can contrast with the 4,869 Training Contracts registered with the Solicitors’ Regulation Authority in 2011. Please see some specific examples of Articled Clerkships in Appendix 1 to this post. The Articled Clerkship continued to develop and began to generate additional complexities, exceptions and methods to ensure the quality of training involved was up to the high standards of a noble profession. In the Solicitors Act of 1860, it was established that if you had worked as a de facto Articled Clerk for 10 years, you could enter the profession fully if you completed 3 years of a formal Clerkship. I believe this is the origin of the term known within the profession as ‘ten-year man’. Around the same time as the 1728 Act, a group of Solicitors set up the ‘Society of Gentlemen Practisers in the Courts of Law and Equity’. This was the predecessor body of the Law Society, which was incorporated in 1826 and now deals with many aspects of regulating Solicitors. The Solicitors’ Regulation Authority, a subsidiary arm of the Law Society, now specifically deals with the regulation of Training Contracts. It was the early forms of these bodies that imposed high standards on Solicitors and led the profession to be seen on the same level as Barristers. In 1877, further legislation made it a requirement that you had to pass exams set by the Law Society before being allowed admittance to the profession, although exams had been carried out by the Law Society since 1836. By 1936, you were required to submit evidence of good character to the Law Society at least 6 weeks before starting your Articled Clerkship. As an interesting aside, the Solicitors (Articled Clerks) Act 1918 made provision for time spent serving in the war to count towards the term of years of your Articles. By the time we reach 1922, the starting point is that a 5 year Articled Clerkship is still required, although terms of 3 or 4 years became more prevalent. At this stage, the Law Society required a mandatory academic year to be undertaken by Clerks, although many still qualified solely through Articles.
The quality of training of an Articled Clerk was again emphasised in the Solicitors Act 1936, where it is specified that a Solicitor cannot take on a Clerk until they have practised for at least 5 years themselves. An Act of 1956 codifies for the first time a structure of what one must do to enter the profession of Solicitor. You must have (1) completed an Articled Clerkship (by this time commonly referred to as just ‘Articles’ or ‘Articles of Training’), (2) passed a course of Legal Education, and (3) passed the Law Society’s exams. Towards these ends, the 1965 Act grants the Law Society powers to create provisions regarding the education and training of those wanting to be Solicitors. The Training Regulations of 1970 specified that the longest time that could be served under Articles was 4 years, although if you were a law graduate this was most commonly 2 years. The 1974 Act, currently in force, allows for further Training Regulations to be created, in conjunction with the Secretary of State. The transition between the 1989 and 1990 editions of the Training Regulations changed the term ‘Articles of Training’ to ‘Training Contract’ in an attempt to use simpler and clearer language. The term of years in those regulations is set at 2 years. It has been called a Training Contract since 1990, and the very detailed Solicitors’ Regulation Authority Training Regulations 2011 are the provisions that currently govern it. There is currently a consultation being conducted by the Solicitors’ Regulation Authority named ‘Training for Tomorrow’ which may significantly change the rules and procedures relating to Training Contracts. This consultation finishes on the 28th of February 2014, and we can only wait and see where the Legal History of the Training Contract will develop next. NB – Information in the below Appendices can be used but only if credit is given to this Blog. Recommended citation: Ben Darlow, ‘History of the Solicitors’ Training Contract’ <link to this blog post> accessed [day] [month] [year] Appendix 1: Examples of Articled Clerkships in the Court of Common Pleas between 1785 and 1867 - Fiennes Wykham – Clerk to Richard Bignall of Banbury – Articles dated 12th July 1785 - Thomas Berryman – Clerk to Samuel Plaisted of Bernards Inn, London – Articles dated 16th October 1788 - William York Jr – Clerk to William York Snr of Thrapston, Northampton – Articles dated 27th November 1800 - Michael Kennedy – Clerk to Edward Codd of Kingston-upon-Hull – Articles dated 10th November 1817 - Thomas Powell Watkins – Clerk to Charles Bedford of Worcester – Articles dated 4th November 1843 - Fred John Wise Jr – Clerk to Fred Wise Snr – Articles dated 8th July 1865 *Observation – from the register of approximately 9,500 Clerks in this 82-year period, approximately 1 in 5 are Articled to their father. *Observation – unfortunately, the earliest Register of Articled Clerkships, between 1713 and 1837, is mould-damaged at the National Archives. Appendix 2: Examples from the Roll of Solicitors admitted 1729 to 1788 - John Applegarth – admitted to the Roll on 8th July 1729 – Examined by Mr E. Probyn - John Forrest of Middlesex – admitted to the Roll 3rd July 1729 – Examined by Mr R. Raymond - Benjamin Holt of Hereford – admitted to the Roll 28th June 1729 – Examined by Mr E. Probyn - Samuel Plummer of London – admitted to the Roll 27th June 1729 – Examined by Mr R.
Raymond - John Darrell of Cheshire – admitted to the Roll 14th June 1788 – Examined by Mr Ashurst *Observation – the register contains approximately 7,600 solicitors admitted to the Roll over this 60-year period. Appendix 3: Oath required by the Attorneys and Solicitors Act 1728 to be admitted onto the Roll of Solicitors “I [Forename][Surname] swear that I will truly and honestly demean myself in the practice of a Solicitor, according to the best of my knowledge and ability. So help me God.”
<urn:uuid:0fe377f6-e837-4473-be3c-0e163afcd8e2>
CC-MAIN-2021-43
https://englishlegalhistory.wordpress.com/tag/court-of-common-pleas/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00349.warc.gz
en
0.976528
2,408
3.390625
3
The New International Encyclopædia/Koran KORAN, kō′ran or kō̇-rän′ (Ar. ḳur'ān, lection, from ḳara'a, to read; cf. the later Heb. Miḳra, the written Book, i.e. the Bible). The sacred book of the Mohammedans. The name was given by Mohammed himself to a single revelation, or a collection of revelations, and was afterwards applied to the body of his utterances as gathered together in one book, forming the basis for the religious, social, civil, commercial, military, and legal regulation of Islam. The Koran is also known under various other names, such as: Furḳān (salvation); Al-Muṣḥaf (the volume); Al-Kitāb (the Book, in the sense of ‘Bible’); Al-Dhikr (the reminder, or the admonition). According to the orthodox views the Koran is coeval with God, uncreated, eternal. Its first transcript was written from the beginning in rays of light upon a gigantic tablet resting by the throne of the Almighty, and upon this tablet are also found the divine decrees relating to things past and future. A copy of it, in a book bound in white silk, jewels, and gold, was brought down to the lowest heaven by the angel Gabriel, in the blissful and mysterious night of Al-Kadr, in the month of Ramadan. Portions of it were, during a space of twenty-three years, communicated to Mohammed, at both Mecca and Medina, either by Gabriel in human shape, “with the sound of bells,” or through inspiration from the Holy Ghost, “in the Prophet's breast,” or by God Himself, “veiled and unveiled, in waking or in the dreams of night.” Traditions vary with respect to the length of the individual portions revealed at a time, between single letters, verses, and entire chapters (or suras). Setting aside the fanciful and semi-mystical speculations, there is general agreement among Mohammedans that the earliest revelation is represented by verses 1 to 5 of sura xcvi., which begins with the words, “Proclaim the name of thy Lord, who has created all things.” At the beginning of his career Mohammed did not make any efforts to have his utterances preserved. While it is possible that he was able to read and write, he certainly did not write any of the suras himself. It was only as his movement spread that the importance attached to the Prophet's ‘revelations’ suggested the necessity of giving them a more permanent form, and in the second part of his career, after the flight to Medina (622), he appears systematically to have dictated his revelations to a scribe; and it would appear that he also revised the form of earlier utterances which had been either orally preserved or written down promiscuously by some of his zealous followers. Within a year of Mohammed's death (632) the first attempt at a collection of the Prophet's utterances was made by Abu-bekr. He intrusted the task to Zaid ibn Thabit, the last secretary of Mohammed. Copies of these utterances already existed, and it was from these that Zaid prepared an authoritative compilation to be known henceforth as the Koran. This volume passed, after the death of Abu-bekr, into the hands of Omar, and by Omar was intrusted to the keeping of Hafsa, one of the Prophet's wives, the daughter of Omar. Differences of opinion in regard to the text of the Koran still prevailed after Zaid's edition was completed, and accordingly a second redaction was instituted in the thirtieth year of the Hejira by Caliph Othman, not for the sake of arranging and correcting the text, but in order to insure unity. This work was intrusted to four editors of recognized authority, of whom Zaid was one.
With respect to the succession of the single chapters, 114 in number, no attempt was made at establishing continuity, but they were placed side by side according to their respective lengths; so that immediately after the introductory exordium follows the longest chapter, and the others are ranged after it in decreasing size, though this principle is not strictly adhered to. They are not numbered in the manuscripts, but bear distinctive, often strange-sounding, headings; as: the Cow, Congealed Blood, the Fig, the Star, the Towers, Saba, the Poets, etc., taken from a particular matter or person treated of in the respective chapters. Every chapter or sura begins with the introductory formula, “In the name of God, the Merciful, the Compassionate.” It is further stated at the beginning whether the sura was revealed at Mecca or at Medina. Every chapter is subdivided into smaller portions (Ayah, Heb. Oth, sign, letter), varying in the ancient copies (of Medina, Cufa, Basra, and Damascus, and the ‘vulgar edition’) between 6000 and 6036. The number of words in the whole book is 77,639, and an enumeration of the letters shows an amount of 323,015 of these. Other (encyclical) divisions of the book are into 30 ajzā and into 60 ahzāb, for the use of devotional readings in and out of the mosque. Twenty-nine suras commence with certain letters of the alphabet, which are supposed by Mohammedans to be of mystic import, but which are probably monograms of private collectors or authorities. The contents of the Koran as the basis of Mohammedanism will be considered under that head, while for questions more closely connected with authorship and chronology, consult Mohammed. Briefly it may be stated here that the chief doctrine laid down in it is the unity of God, and the existence of but one true religion, with changeable ceremonies. As teachers and warners of mankind, God, at different times, sent prophets to lead back to truth, Moses, Christ, and Mohammed being the most distinguished. Both punishments for the sinner and rewards for the pious are depicted with great diffuseness, and exemplified chiefly by stories taken from the Bible, the apocryphal writings, the Midrash, and pre-Islamic history. Special laws and directions, admonitions to moral and divine virtues, more particularly to a complete and unconditional resignation to God's will (see Islam), legends, principally relating to the patriarchs, and, almost without exception, borrowed from the Jewish writings (known to Mohammed by oral communication only, a circumstance which accounts for their frequent odd confusion), form the bulk of the book, which throughout bears the most palpable traces of Jewish influence. Thus, of ideas and words taken bodily, with their Arabicized designations, from Judaism, may be mentioned: Ḳur'ān = miḳra (reading); furḳān (salvation); the introductory formula, bismillah (in the name of God); taurāt = tōrah (book of law); jinnah = gan ēden (paradise); jahinnam (hell); darasa = darash (to search the scriptures); subāt, sabt = shabbāth (day of rest); sakinah (majesty of God). It is especially in the later suras that Mohammed, for the edification of his hearers, introduced (in imitation of Jewish and Christian preachers) stories and legends of biblical personages. The suras may be divided into three general classes: those delivered during the first years of Mohammed's preaching in Mecca, those delivered during the latter part of his stay in that city, and those delivered in Medina. 
In the oldest suras Mohammed is concerned mainly with depicting the power and unity of God, with the resurrection and the judgment day, with depicting the blessedness of paradise and the tortures of hell. These subjects are elaborated in the suras of the middle and last period. While in the earlier ones Mohammed claims to be only a preacher sent to warn people, in the later ones he steps forward boldly with the claim of being a divinely sent prophet, whose utterances represent revelations made to him by the angel Gabriel. The duties obligatory upon Moslems are all discussed in the later suras, though the formation into codes was reserved for the Mohammedan theologians. Incidentally his polemics against his personal enemies, and especially against Judaism and Christianity, are introduced into the Koran, the Jews being accused of falsifying the Scriptures, the Christians of running counter to the doctrine of the unity of God by the assumption that Jesus was a son of God. The discourses themselves are of a rambling nature, and numerous social customs are touched upon. In this way the Koran becomes a mirror in which Mohammed's personality is reflected with a clearness which leaves little to be desired. It properly was taken as the basis for the elaboration of a Mohammedan system of theology, for there is scarcely any topic connected with the law upon which it does not touch, though never exhaustively. Its lack of system, and its discursiveness, make the Koran hard reading, but its interest and value to the student are all the greater because of the assurance these very defects give us that we have in the Koran a work that is in all essential particulars authentic. The general tendency and aim of the Koran is found clearly indicated in the beginning of the second chapter: “This is the book in which there is no doubt; a guidance for the pious, who believe in the mysteries of faith, who perform their prayers, give alms from what we have bestowed upon them, who believe in the revelation which we made unto thee, which was sent down to the prophets before thee, and who believe in the future life,” etc. To unite the three principal religious forms which he found in his time and country—viz. Judaism, Christianity, and heathenism—into one, was Mohammed's ideal; and the Koran, properly read, discloses constantly the alternate flatteries and threats aimed at each of the three parties. No less are certain abrogations of special passages in the Koran, made by the Prophet himself due to the vacillating relation in which he at first stood to the different creeds. The language of the Koran has become the ideal of classical Arabic, and no human pen is supposed to be capable of producing anything similar; a circumstance adduced by Mohammed himself, as a clear proof of his mission. The style varies considerably; in the earlier suras concise and bold, sublime and majestic, impassioned, fluent, and harmonious; in the later ones verbose, sententious, obscure, tame, and prosy. There are passages of great beauty and power suggesting the Hebrew prophets. By means of the difference in style between the earlier and later suras modern investigators have endeavored to form a chronological arrangement. A general consensus has now been arrived at; though questions of detail must always remain in dispute, as many of the suras are composite in character. A great deal depends also upon internal evidence, which fortunately is found in considerable abundance. 
Mohammed, especially in the later years of his career, was in the habit of introducing allusions to events of the day, to disputations with Jews and Christians, to his ambitions and aims, into his discourses; and since, in addition to the Koran, we have the copious collections known as Hadith (q.v.) containing utterances, sayings and doings, and decisions of Mohammed at the various periods of his career, it is in many cases possible to attach utterances in the Koran to specific occasions, and thus fix the age of the sura in which a certain expression or opinion occurs. The Koran is written in prose, yet the two or more links of which a sentence is generally composed sometimes rhyme with each other, a peculiarity of speech (called saj') used by the ancient soothsayers (kuhhān-kōhēn) of Arabia; only that Mohammed used his own discretion in remodeling its form and freeing it from conventional fetters; and thus the rhyme of the Koran became an entirely distinctive rhyme. Refrains are introduced in some suras, and plays upon words are not disdained. The outward reverence in which the Koran is held throughout Mohammedanism is exceedingly great. It is never held below the girdle, never touched without previous purification; and an injunction to that effect is generally found on the cover which overlaps the boards, according to Eastern binding. It is consulted on weighty matters; sentences from it are inscribed on banners, doors, etc. Great lavishness is also displayed upon the material and the binding of the sacred volume. The copies for the wealthy are sometimes written in gold, and the covers blaze with gold and precious stones. The Koran has been commented upon so often that the names of the commentators alone would fill pages. The most renowned are those of Zamakhshari (died A.H. 539), Beidhawī (died A.H. 685 or 716), Mahalli (died A.H. 870), and Suyuti (died A.H. 911). The principal editions are those of Hinkelmann (Hamburg, 1694); Maracci (Padua, 1698); Flügel (Leipzig, 1883); besides many editions (of small critical value) printed in Saint Petersburg, Kazan, Teheran, Calcutta, Cawnpore, and Serampore, and by the many newly erected Indian presses. There is a chrestomathy with notes and vocabulary by Nallino (Leipzig, 1893). The first, but very imperfect, Latin version of the Koran was made by Robertus Retensis, an Englishman, in 1143 (ed. Basel, 1543). The principal translations are those of Maracci, into Latin (1698); Sale (1st ed. 1734, one of the best translations in any language, edited by Wherry with additional matter, 1881-80), Rodwell (2d ed., 1870), and Palmer (1880), into English; Savary (1783), Garcin de Tassy (1829), Kazimirski (1840), into French; Megerlin (1772), Wahl (1828), Ullmann (1840), Grigull (1901), and Henning in the Reclam Universal-Bibliothek, into German; Reckendorf into Hebrew (1857); besides a great number of Persian, Turkish, Malay, Hindustani, and other translations made for the benefit of the various Eastern Mohammedans. The attempt to reproduce the style and rhyme of the original was first made by J. von Hammer (1811); this was improved upon by A. Sprenger (1861-65), Fr. Rückert (1888), and by M. Klamroth (1890). All of these are in German. The Speeches and Table-Talk of the Prophet Mohammed, chosen and translated by Stanley Lane-Poole (London, 1882), is a selection from the best that is in the Koran.
Of concordances to the Koran may be mentioned that of Flügel (Leipzig, 1842), and the Nojon-ol-Forkan (Calcutta, 1811); La Beaume, Le Koran analysé (Paris, 1878), is a topical index to the French translations of Kazimirski and others. There are Koran lexicons by Dieterici (2d ed., Berlin, 1894) and Penrice (London, 1873). The introduction and notes to Sale's translation contain material that is still of value, though in large measure superseded now by Nöldeke, Geschichte des Korans (Göttingen, 1860); Weil, Historisch-kritische Einleitung in den Koran (Bielefeld, 1844); Grimme, Mohammed, 2ter Theil; Einleitung in den Koran; System der koranischen Theologie (1895); Hirschfeld, New Researches into the Composition and Exegesis of the Koran (Eng. trans. London, 1902). Consult also the lives of Mohammed and other works mentioned in the articles Mohammed and Mohammedanism.
<urn:uuid:97407e1d-45b4-44c1-a428-80f321e72b1a>
CC-MAIN-2021-43
https://en.wikisource.org/wiki/The_New_International_Encyclop%C3%A6dia/Koran
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587915.41/warc/CC-MAIN-20211026165817-20211026195817-00310.warc.gz
en
0.960609
3,461
3.0625
3
Eating a healthy diet, such as the Mediterranean diet, has a positive impact on health, but little is known about the effects of including unhealthy foods in an otherwise healthy diet. Now researchers at Rush University Medical Center have reported diminished benefits of a Mediterranean diet among those with a high frequency of eating unhealthy foods. The results of their study were published in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association on Jan. 7. “Eating a diet that emphasizes vegetables, fruit, fish and whole grains may positively affect a person’s health,” said Puja Agarwal, Ph.D., a nutritional epidemiologist and assistant professor in the Department of Internal Medicine at Rush Medical College. “But when it is combined with fried food, sweets, refined grains, red meat and processed meat, we observed that the benefits of eating the Mediterranean part of the diet seem to be diminished.” A Mediterranean diet is associated with slower rates of cognitive decline in older adults. The observational study included 5,001 older adults living in Chicago who were part of the Chicago Health and Aging Project, an evaluation of cognitive health in adults over the age of 65 conducted from 1993 to 2012. Every three years, the study participants completed a cognitive assessment questionnaire that tested basic information processing skills and memory, and they filled out a questionnaire about the frequency with which they consumed 144 food items. The researchers analyzed how closely each of the study participants adhered to a Mediterranean diet, which includes daily consumption of fruit, vegetables, legumes, olive oil, fish, potatoes and unrefined cereals, plus moderate wine consumption. They also assessed how much each participant followed a Western diet, which included fried foods, refined grains, sweets, red and processed meats, full-fat dairy products and pizza. They assigned scores of zero to five for each food item to compile a total Mediterranean diet score for each participant along a range from zero to 55. The researchers then examined the association between Mediterranean diet scores and changes in participants’ global cognitive function, episodic memory and perceptual speed. Participants with slower cognitive decline over the years of follow-up were those who adhered most closely to the Mediterranean diet while limiting foods that are part of the Western diet, whereas participants who ate more of the Western diet saw no beneficial effect of the healthy food components in slowing cognitive decline. There was no significant interaction between age, sex, race or education and the association with cognitive decline at either high or low levels of Western-diet foods. The study also included models for smoking status, body mass index and other potential variables such as cardiovascular conditions, and the findings remained the same. “Western diets may adversely affect cognitive health,” Agarwal said. “Individuals who had a high Mediterranean diet score compared to those who had the lowest score were equivalent to being 5.8 years younger in age cognitively.” Agarwal said that the results complement other studies showing that a Mediterranean diet reduces the risk of heart disease, certain cancers and diabetes, and also support previous studies on the Mediterranean diet and cognition.
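As a rough illustration of the scoring described above: zero to five points per component with a 0 to 55 total implies eleven components. The Python sketch below encodes that arithmetic. It is hedged: the article names eight components, so the last three in the list, and the example participant, are our assumptions for illustration, not the CHAP study's actual food items or cut-offs.

```python
# Sketch of the 0-5-per-component scoring described above. The first eight
# components are named in the article; the last three are assumed here to
# reach the stated 0-55 range (11 components x 5 points).

MED_COMPONENTS = [
    "fruit", "vegetables", "legumes", "olive_oil", "fish",
    "potatoes", "unrefined_cereals", "moderate_wine",
    "low_red_meat", "low_poultry", "low_dairy",  # assumed components
]

def med_diet_score(scores):
    """Sum 0-5 component scores into a 0-55 Mediterranean diet score."""
    assert set(scores) == set(MED_COMPONENTS), "score every component once"
    assert all(0 <= s <= 5 for s in scores.values()), "each item scores 0-5"
    return sum(scores.values())

# A hypothetical middling eater: 3 points on every component -> 33 of 55.
example = {c: 3 for c in MED_COMPONENTS}
print(med_diet_score(example))
```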
The study also notes that most of the dietary patterns that have shown improvement in cognitive function among older adults, including the Mediterranean, MIND, and DASH diets, have a unique scoring matrix based on the number of servings consumed for each diet component. “The more we can incorporate green leafy vegetables, other vegetables, berries, olive oil, and fish into our diets, the better it is for our aging brains and bodies. To benefit from diets such as the Mediterranean diet, or MIND diet, we would have to limit our consumption of processed foods and other unhealthy foods such as fried foods and sweets.” The study and its findings cannot be readily generalized. Future longitudinal studies on diet and cognition among the middle-aged population are needed to extend these findings. The concept of ultra-processed food (UPF) as a descriptor of unhealthy foods within dietary patterns is increasingly recognised in the nutrition literature [1,2,3,4,5] and authoritative reports [6,7]. Understanding of the contribution of UPFs to dietary quality and as a risk factor for diet-related diseases, disorders and conditions is rapidly emerging. Yet, limited consideration has been given to UPF in strategies aiming to improve population health. A crucial missing step in closing that gap is a review of the evidence base of the associations between UPF consumption and adverse health outcomes. Dietary risk factors are leading contributors to the global burden of disease (GBD), responsible for an estimated 11 million deaths from non-communicable diseases (NCDs) (22% of all adult deaths) and 15% of disability-adjusted life years (DALYs) lost in 2017. Leading contributors to diet-related deaths are cardiovascular disease (CVD), cancer and type 2 diabetes. Contributors to DALYs from non-fatal chronic conditions include asthma, musculoskeletal conditions and mental health disorders. Implicated dietary risk factors include certain nutrients, foods and dietary pattern exposures. Nutrient exposures include high amounts of sodium [10,12], saturated fat, trans-fat and added sugar. Food exposures include low amounts of whole grains, fruit, vegetables, nuts and seeds and fish [10,12], and high amounts of red meat, processed meat, potato chips and sugar-sweetened beverages (SSB) [12,13]. Dietary patterns include low scores on the Healthy Eating Index or Alternative Healthy Eating Index, or the Mediterranean Dietary Pattern; low adherence to the Dietary Approaches to Stop Hypertension diet; or a high score on the Western dietary pattern [17,18,19,20]. In a novel approach to food categorization, NOVA (a name, not an acronym) classifies foods and beverages ‘according to the extent and purpose of industrial processing’ [21,22], an aspect generally overlooked by public health nutrition science, policy and guidance. In 2009, a Brazilian research group, following studies on national trends over 25 years in household food acquisition and health implications [23,24], concluded that diets containing high proportions of UPFs are intrinsically nutritionally unbalanced, harmful to health, or both. This led to the development of the NOVA food classification system, which has since evolved [21,22,26,27,28,29,30].
The NOVA classification assigns foods to one of four groups, based on ‘the extent and purpose of industrial processing’:
- (1) ‘unprocessed or minimally processed foods’ (MPF), comprising edible parts of plants, animals or fungi without any processes applied to them, or natural foods altered by minimal processing designed to preserve them, to make them suitable for storage, or to make them safe, edible or more palatable (e.g., fresh fruit, vegetables, grains, legumes, meat, milk);
- (2) processed culinary ingredients (PCI), which are substances extracted from group 1 (e.g., fats, oils, sugars and starches) or from nature (e.g., salt), used to cook and season MPF and not intended for consumption on their own;
- (3) processed foods (PF), where industrial products are made by adding PCI to MPF (e.g., canned vegetables in brine, fruit in syrup, cheese); and
- (4) UPFs, which are defined as ‘formulations of ingredients, mostly of exclusive industrial use, that result from a series of industrial processes (hence “ultra-processed”), many requiring sophisticated equipment and technology’ (e.g., sweet and savoury snacks, reconstituted meats, pizza dishes and confectionery, among others).
Ingredients characteristic of UPFs include food substances of no or rare culinary use, including sugar, protein and oil derivatives (e.g., high-fructose corn syrup, maltodextrin, protein isolates, hydrogenated oil) and cosmetic additives (e.g., colours, flavours, flavour enhancers, emulsifiers, thickeners, and artificial sweeteners) designed to make the final product more palatable. Since NOVA was established, nutrition researchers worldwide have increasingly implicated UPFs in poor dietary quality, and in adverse metabolic and health outcomes across a range of populations and country contexts. Furthermore, UPFs have become dominant components in diets of populations worldwide, contributing up to more than 50% of energy intake in high-income countries [32,33], and up to 30% in middle-income countries [34,35], with consumption volumes rapidly increasing [36,37,38]. Because middle-income countries are home to the vast bulk of the world’s population, understanding the implications of rising UPF consumption for global human health is of utmost importance. Several reviews have reported on UPFs and health outcomes [2,3,4,5,7,39]. However, despite the large and rapidly growing body of evidence linking UPFs with adverse health outcomes, the number of reviews and summarizing reports to date has been small, possibly delaying the inclusion of the ‘extent and purpose of industrial processing’ as an independent factor for assessing the health potential of diets. As most dietary advice relies on systematic reviews and meta-analyses when reviewing evidence, a comprehensive review could be helpful in strengthening the evidence base and moving this field forward. To our knowledge, no review to date has employed a systematic search to identify all studies, without the restriction of health outcomes or study design. The aim of this narrative review was to systematically identify and appraise the findings of studies on healthy participants (adults, adolescents and children) that have investigated associations between levels of UPF consumption and health outcomes. reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7399967/
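A compact way to see how the NOVA groups and the energy-share statistics quoted above fit together is to encode the classification and compute the UPF share of energy. This is a hedged sketch: the example foods echo the text, but the mapping and function are our illustration of the ‘% of energy intake’ metric, not code from the review.

```python
# Encoding of the four NOVA groups described above, plus the '% of energy
# from UPF' metric the review quotes (>50% in some high-income countries).
from enum import IntEnum

class Nova(IntEnum):
    MPF = 1  # unprocessed / minimally processed foods
    PCI = 2  # processed culinary ingredients
    PF = 3   # processed foods
    UPF = 4  # ultra-processed foods

NOVA_BY_FOOD = {
    "fresh fruit": Nova.MPF,
    "olive oil": Nova.PCI,
    "canned vegetables in brine": Nova.PF,
    "savoury snacks": Nova.UPF,
    "confectionery": Nova.UPF,
}

def upf_energy_share(kcal_by_food):
    """Fraction of total energy intake coming from NOVA group 4 foods."""
    total = sum(kcal_by_food.values())
    upf = sum(k for f, k in kcal_by_food.items()
              if NOVA_BY_FOOD[f] == Nova.UPF)
    return upf / total if total else 0.0

# A hypothetical day: 2,000 kcal, of which 900 kcal from UPF -> 0.45.
day = {"fresh fruit": 300, "olive oil": 200,
       "canned vegetables in brine": 600,
       "savoury snacks": 500, "confectionery": 400}
print(upf_energy_share(day))
```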
<urn:uuid:d74b14d7-fd50-45e7-9836-1ccfa7061c29>
CC-MAIN-2021-43
https://debuglies.com/2021/01/10/what-are-the-effects-of-including-unhealthy-foods-in-an-otherwise-healthy-diet/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00670.warc.gz
en
0.936702
2,051
3.265625
3
What is a Sensory Garden and how do you make one? A sensory garden is a space that has been designed to allow people to interact with the garden in a way that engages their senses. Sensory gardens are thought to provide many positive benefits, such as promoting cognitive and physical engagement, creating a relaxing and stimulating experience, and improving participation. A sensory garden can be big or small and is suitable for most settings, whether rural or in the city. The most important factor about the location is that it is a safe space, ideally with some sort of clearly defined boundary or fence which can enclose the garden and create a sanctuary. We believe that sensory gardens can create an immersive sensory experience stimulating all 5 of the senses: Sight, Touch, Sound, Smell AND Taste. So read on for advice and ideas to create the perfect sensory garden! "I think what really distinguishes a sensory garden from an ordinary garden environment is the inclusion of plants, materials, features and objects with particular sensory qualities, used with the intention of stimulating our Senses: Seeing, Hearing, Smelling, Touching and Tasting." (www.flowerpotman.com) Often the first way we register a space is through our eyes: by sight, we enter a space and, before we have touched anything, our eyes are receiving visual information. So, at Natural Design Studio this is one of the first things we consider when making concept plans for a sensory garden. The layout must be produced in a visually readable way, drawing the eyes in and creating focal points and areas of interest - the visual layout should invite the user into the space and should lead them around it from area to area, giving clues about how to engage with the space and highlighting features to make them more obvious. Tools we use to stimulate sight are shape, colour, pattern, mass and void, and contrast. Planting and objects, such as colourful foliage or textured shrubs and grasses, can be used to complement each of these tools. Our sense of touch helps us to map the world around us: we receive tactile information about our environment every second of the day, and this information is sent to our brain, which helps us to make sense of external objects. "Touch consists of several distinct sensations...and are all attributed to different receptors in the skin." - livescience.com. Using different objects, textures and tactile surfaces we can stimulate our brain through touch. Natural Design Studio recommend using as much variety as possible with regard to touch: rough, smooth, soft, fluffy, and bumpy textures, just to name a few. And we would also avoid using anything too spiky or sharp... after all, our sense of touch helps us to register pain as well, so make sure that all the touchy-feely objects are 'pleasant' to touch! ... Cue the giant xylophone! Yes, large musical instruments are a great way of stimulating the sense of sound, and they are also a very fun and interactive way of engaging people! We suppose there are a lot of instruments that could be adapted for a sensory garden space, such as giant strings to play notes like a guitar, keys to push like a piano, large drums and gongs - and how about tubes and flutes played by people or the wind? If the budget allows then these are a great option, but they can be costly as they are often bespoke pieces and must be designed to weather the elements. BUT fear not, there are MANY effective low-cost ways of bringing sound into a sensory garden.
Firstly, consider the sounds that are already produced by existing features - the crunch of gravel, the tip-tap of feet on a hard surface, the sound of running water or a fountain - and look for ways to draw attention to and enhance them. For example, add a strip of noisy gravel or flooring... it will catch people's attention and people will instinctively explore the surface, perhaps tapping their feet or jumping to produce varied noise. Or place a stick or percussion mallet next to a set of wooden or metal railings, and watch what people do! At Natural Design Studio one of our favourite ways of using sound is to add a set of gentle windchimes or bells - simple, yet effective. Often overlooked in a sensory garden is stimulating the sense of smell, and yet it is one of the easiest features to achieve given the fact that so many plants and flowers are fragranced! Humans have around 400 types of smell receptor and may be able to distinguish over 1 trillion scents, according to researchers. Some of our favourite flowers to use are Choisya, Honeysuckle, Roses, Dianthus, Jasmine, Lavender and Gardenia (but there are hundreds more, so if you'd like some more ideas then do get in touch!) Using scent in a sensory garden helps to elevate the experience, making it more immersive, as the nose sends strong signals to the brain when it experiences an array of pleasant aromas. This can help stimulate emotions and relaxation within the garden; familiar smells often make people feel good. So, by all means use as many scented plants as you can, and also think of other ways to use scent, such as smell boxes where people lift the lid and there is a fragrance or nice-smelling substance inside. The gustatory sense is usually broken down into the perception of four different tastes: salty, sweet, sour and bitter. Taking things up a notch is to include taste in a sensory garden; our favourite way to do this is to use edible plants and flowers. We will add a NOTE OF CAUTION here, as the edible flowers should be correctly identified and verified as non-toxic/safe to consume! So, plan this element carefully and think about how to distinguish edible from non-edible plants! Herbs are generally a safe bet - things like Sage, Rosemary, Basil, Parsley and Lemon Thyme. If you'd rather use flowers then Violets, Roses, Bergamot, Elderflower and Nasturtium all have edible petals and as such would make great additions to a sensory garden. So that's the 5 Senses Covered... But interestingly, research suggests that we could have a lot more than 5! "Neuroscientists are well aware that we are a bundle of senses. ... many would argue that we have anywhere between 22 and 33 different senses." Alina Bradford (LiveScience.com) Some of these newly defined senses are not perceivable to us on an everyday level (such as detecting levels of oxygen in the bloodstream) but some of them are quite obvious to our attention. Whether or not science supports the categorisation of these phenomena as 'new senses', they are certainly helpful for us to enhance and inform the creation of excellent sensory gardens. Let's take a little look at these 'other senses' and see how they could help! Equilibrioception – a sense of balance. This could be applied to sensory gardens with the addition of a balancing walk or rope. Kinaesthesia – sense of movement. A great feature to add would be a water fountain or a wheel to spin, or even moving plants that sway gently in a breeze. Thermoception – sense of temperature. This could be an interesting element to add!
It is not a dynamic we have ever used in a sensory garden and we would be interested to see if it has been done before, so get in touch if anybody has achieved this feat! Proprioception – the sense of space. This one, we would argue, is essential to include in a sensory garden (big or small): think about how you can create spaces and zones, or defined areas, to create different feels and experiences within the garden. Now for the sensible bits! Sensory gardens require a high degree of planning, so take the time to think through all of the space and elements and start from the ground up - grass, wood or gravel floors add to the sensory experience (touch, sight, smell and sound). If you have a tarmac or concrete base for your sensory garden, consider whether it's possible to add areas of these other natural, softer materials. Failing that, paint the concrete with a design - refer back to section 1, Sight, for ideas on that! We recommend putting adaptability and accessibility high up the priority list; different adaptations for different groups and communities are important. Think about who will be using the garden: will it be children, mixed groups, people with additional needs, or people using wheelchairs or walking adaptations? Will the garden be closed to a private community or open to the public? All of these will have a bearing upon your design, and the core planning, such as main routes and paths, will need to be appropriate; seating will need to be sufficient and well placed. Adaptations for different ages are also key - for example, children will require lower-down activities. The theme of the space can also be adapted to different groups: for children, fun and bright gardens work well, while for adults more mature, quieter, contemplative spaces could be more appropriate, so get to know your audience and do a bit of research on what would best suit your group. How do we do it at Natural Design Studio? Bearing all of the above in mind, we LOVE designing sensory gardens because they can enhance and improve people's wellbeing, and that's what we're all about! Integrating scientific and psychological theory can help to support and inform the development of successful sensory gardens and make them better spaces for people; we always look to develop the concept and apply recent research. PLANTS: we believe that nature is at the heart of wellbeing, and plants offer us so many of our sensory elements - soft arching plumes of flowers like Astilbe, interesting shapes of flowers like Allium or Helenium Lollipops. We are always seeking out the most interactive plants and interesting foliage, such as Stachys byzantina, which is soft and fluffy to touch. We have already mentioned scent, but overall, to us, plants are the most important element in a sensory garden. Our secret on taking things to a whole new level? The element of surprise! We like to integrate a little bit of a journey with meandering paths and screening elements such as hedges or room-like zones. This creates a little world full of mystery where people are invited to explore and curiously wander around all of the elements. Our favourite kind of surprise is to have installations with cause and effect, such as a lever or handle which lifts or moves something! Nothing better to engage and excite! We hope this post has been helpful in guiding you on the subject of sensory gardens; please do get in touch if you have any questions or would like any further advice.
If you would like to hire us to design a sensory garden for you, then we look forward to hearing from you - you can get in touch via our contacts page! We recommend that you run a health and safety check on all elements of a sensory garden to ensure the maximum safety and wellbeing of all visitors.
<urn:uuid:4b7694fc-254d-454d-9b20-4230a5787c92>
CC-MAIN-2021-43
https://www.naturaldesignstudio.com/post/sensory-gardens
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00231.warc.gz
en
0.946164
2,225
3.109375
3
Off the shores of Florida’s Key Largo, buried beneath almost two centuries of coral reef formations, lay remnants of the dark side of 1820s piracy and the illegal transport of slaves from Africa to Cuba. Text by Joseph Frey. Early on a hot July morning we head out into the Straits of Florida to search for the wreck site of the notorious Spanish pirate ship and slaver Guerrero, as well as the reef on which Guerrero’s pursuer, the Royal Navy’s HMS Nimble, grounded. Our expedition is part of a global search for wrecks associated with the Middle Passage of the Trans-Atlantic slave trade; to date very few wrecks have been located. The slave trade, in which up to an estimated 12.5 million West Africans were brutally abducted, had been an acceptable practice in Western society for hundreds of years. But by the early 19th century attitudes were shifting. Great Britain and the United States both abolished the Trans-Atlantic slave trade in 1807, with Spain following in 1820. However, “The internal slave trade within the US continued legally and exponentially increased. Moreover, the clandestine trade across the Atlantic and between the Americas and the Caribbean continued with ferocity until the last nations [Cuba and Brazil] abolished slavery in the late 19th century,” says Paul Gardullo, Museum Curator Director, Center for the Study of Global Slavery, National Museum of African American History and Culture at the Smithsonian Institution. Our story begins on December 19, 1827, when HMS Nimble intercepts the Guerrero in the western Bahamas and begins pursuit. The chase ends during the early evening hours, after a brief exchange of naval gunfire between the two ships. Distracted by the exchange of cannon fire, the crews of the vessels don’t notice that they have sailed into shallow, coral reef-filled waters, and both ships go aground in the vicinity of Carysfort Reef off Key Largo. Carrying 561 Africans destined for the slave markets of Cuba, the Guerrero hits the reef hard, bilging its hull and toppling its masts. As Guerrero quickly sinks, 41 Africans tragically die, either crushed by falling masts or drowned in the ship’s cargo hold. Nimble is lightly grounded on a neighbouring reef and, after two attempts to free itself, during which ballast, shot, and eventually a carronade are jettisoned, it is finally pulled off the reef by the American wrecking schooner Surprize on December 20th. Seizing an opportunity to escape, Guerrero’s crew hijack the American wrecking vessels Florida and Thorn and sail to Cuba with 399 Africans, who are sold into slavery. Nimble transports 121 Africans rescued by Keys wreckers to Key West, of whom only 91 survive a quasi-slavery ordeal and a second ill-fated transatlantic journey before arriving in Liberia two and a half years later. Very few slave wrecks have ever been located, which makes the slaver Guerrero one of the most important known, but undiscovered, wrecks in American waters today. Based on the respective research of Gail Swanson, author of The Slave Ship Guerrero, and Corey Malcom, Director of Archaeology at the Mel Fisher Maritime Heritage Society in Key West, along with marine surveys carried out during the early 2000s under Corey’s leadership, the evidence places the Guerrero wrecking and HMS Nimble stranding events north of Carysfort Reef, along the boundary line between what is now Biscayne National Park (BISC) and the Florida Keys National Marine Sanctuary (FKNMS).
This marine archaeology expedition will be quite different from what I’ve reported on over the past several years in Lake Ontario with USS Hamilton and USS Scourge, in the Canadian Arctic with HMS Erebus, and off North Carolina’s Outer Banks on German submarine U-576. When found, these vessels were intact, by themselves, and easily identifiable. On this section of the Florida reef, in the immediate vicinity of where Guerrero sank and Nimble grounded, Corey has identified 75 wrecks. None of the wrecks remain intact, having endured some combination of wreckers, natural forces, or, more recently, treasure hunters. Leading the search for the Guerrero and Nimble sites in the FKNMS are marine archaeologists from the National Oceanic and Atmospheric Administration (NOAA) based out of Key Largo. NOAA’s principal investigator on this project is Matthew Lawrence, with whom I’ve worked previously on the U-576 expedition. In this expedition, which has been named the Turtle Reef Project, NOAA’s marine archaeologists will undertake high-resolution magnetic surveys followed by diver-conducted anomaly investigations for Guerrero- and Nimble-related materials at specific locations in the general vicinity of Turtle Reef, off the eastern side of Key Largo, in the waters of John Pennekamp Coral Reef State Park. Assisting the NOAA team will be trained volunteers from Diving With a Purpose (DWP) and graduate students under marine archaeologist Dr. Fritz Hanselmann from the University of Miami (UM). I’ll be diving in Biscayne National Park (BISC) with marine archaeologists from the National Park Service (NPS), and we will be joined by divers from DWP and UM. Like the NOAA expedition, the NPS team is going to conduct similar magnetic surveys and anomaly investigations for Guerrero and Nimble along BISC’s southern boundary, which borders the FKNMS in close proximity to Turtle Reef, where Corey had identified tentative sites. As we gather our gear on the BISC dock just to the east of Homestead, Florida, I meet up for the first time with some of my expedition crewmates. Chuck Lawson, a former NPS marine archaeologist at BISC, is the expedition’s project leader. Also coming out with us are Angela Jones and Shirikiana Gerima, both active members of DWP. Their initial activities with the organization revolved around coral restoration, but they have now taken a keen interest in underwater archaeology. “I enjoy coral reef restoration. I’ve learned and continue to learn a great deal from this work. However, underwater archaeology is what I enjoy doing the most – probably because it has such possibility for developing a broader set of skills,” remarks Shirikiana as we speed out on our first day of ‘anomaly jumping’ – conducting diver reconnaissance on modern debris – in areas identified as BISC-7 and BISC-3. As it turns out, there’s no shortage of anomalies detected by the magnetometers. Anything with iron in it will be recorded as an anomaly, as the magnetometer cannot discern modern metal garbage from historic shipwreck materials. I can see that there is a lot more trash out here than archaeological sites. A lot of time is going to be spent anomaly jumping before a historic site is found. Of the 1,183 individual anomalies detected, around 400 are checked out over the course of the expedition. On shallower sites we snorkel to the anomalies, while in deeper waters we use tanks. After observing the anomaly we return to the surface and report.
Our observations are linked to the anomaly’s identification number, which is recorded and will be entered into a database that will locate and characterize the remains of historic shipwrecks in BISC. Towing magnetometers and anomaly jumping can become tedious, but this is vital to Phase I survey work, which identifies archaeological resources on the landscape. This includes finding sites and determining their spatial extent. Fortunately we’re given the opportunity to engage in Phase II evaluation activities. These involve “selected excavations designed to gather enough information to identify the temporal components of a site and determine its integrity and historical significance,” says Chuck as we head out to the area where BISC-10 and BISC-125 are located to carry out Phase II activities. Diving into the warm waters at BISC-10, where I don’t have to wear a wetsuit, is a wonderful change from back home in Canada, where even during July it can be dry suit season. We dive in two-person teams, taking metal detectors with us as we engage in systematic metal detecting and the flagging of hits. Afterwards the flagged hits are mapped via a simple trilateration – triangulation off a measured baseline using pull tapes. This is followed by either a hand excavation or localized dredging to uncover, identify, and photograph artifacts. A small sample of potentially diagnostic artifacts is also recovered and sent to conservation. It takes about three days to work each site with three dive buddy teams. At BISC-10 the excitement builds as a cache of cannon balls and bar shot is uncovered. Could this be the site of Nimble’s grounding, as her crew jettisoned shot to lighten her load in order to pull the ship off the reef beside us? Stone ballast is also found, and the reef shows evidence that a ship ran aground here but managed to get free, as Nimble did. Everything that we uncover indicates this is a stranding site and not a wreck site; it lacks non-portable materials, such as a ship’s fittings, that would have been integral parts of the structure. This site matches the archival accounts of what happened on that December day in 1827, yet some of the physical evidence casts doubt on whether it is the site of Nimble’s grounding. The stone ballast is unsettling, as a British warship would have used iron ballast; and the ordnance, after later cleaning, shows no Royal Navy markings. The neighbouring reef, BISC-125, generates a great deal of excitement as a potential Guerrero wreck site. This site proves to be much more substantial than originally expected. “There are many remains of rigging and fasteners that formed the main structure of the vessel, indicating a complete wreck site. This ship sunk and stayed on this spot. It had a copper-sheathed hull, as did Guerrero, and it also carried a variety of ordnance sizes, and all of the temporal indicators suggest that it wrecked sometime after 1805, probably not much later than 1850,” states Chuck. “Of particular excitement was the identification of an 18-pounder carronade. We did not find any smoking gun that this site was Guerrero, but all of the materials we did find point to a wreck from the same time period and at least similar construction.” Evidence against a Guerrero identification is the presence of a wooden tampion in the bore of the carronade. Tampions close the bore of a loaded gun so the water and dirt stay out when it is not in use.
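For readers unfamiliar with the baseline trilateration mentioned above, the geometry reduces to intersecting two tape-measured distances from the ends of a baseline of known length. The Python sketch below is our own illustration of that arithmetic, with made-up numbers; it is not part of the NPS recording protocol.

```python
# Our illustration of locating a flagged hit by two tape measurements from
# the ends of a measured baseline (datums A and B). Numbers are made up.
import math

def trilaterate(baseline_m, d_a, d_b):
    """Return (x, y) of a point d_a metres from datum A at (0, 0) and
    d_b metres from datum B at (baseline_m, 0); y is reported positive,
    and the field notes record which side of the tape the hit lay on."""
    x = (d_a**2 - d_b**2 + baseline_m**2) / (2.0 * baseline_m)
    y_squared = d_a**2 - x**2
    if y_squared < 0:
        raise ValueError("tape readings are inconsistent with the baseline")
    return x, math.sqrt(y_squared)

# e.g. a flagged hit 12.4 m from A and 21.7 m from B on a 30 m baseline
print(trilaterate(30.0, 12.4, 21.7))  # -> (approx. 9.71, 7.71)
```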
If Guerrero was in battle when she sank, we would expect the tampion to be out, as the gun would most likely have been in action. Most importantly, no accouterments of slaving, such as shackles, have been found at BISC-125. There is good evidence both for and against an identification of either Guerrero or Nimble, particularly at BISC-125, that warrants a closer look at these specific sites. “I have not given up on them yet and there will be more historic research carried out, as well as hopefully more archaeology,” comments Chuck. The ongoing search and research are as important as the wrecks themselves. Little will be left of the wreck of Guerrero or evidence of the stranding of Nimble, whether or not they are ever found. What’s important is that the search for them has inspired public and volunteer organizations, such as DWP, to remember and share a lost piece of history, one that’s from a dark period of the American story. This search does its part to bring that history back to life and restore a connection to ancestors and a lost (stolen) history for many African Americans.
Alzheimer's disease (AD) is a progressive neurodegenerative condition and the most common cause of dementia in the elderly. In most cases, it leads to severe functional deterioration and loss of independence. Age is the most important known risk factor for AD. The prevalence of the condition increases with age, from fewer than 3 per 1,000 person-years between the ages of 65 and 69 to 56 per 1,000 person-years above the age of 90. A tenth of people over the age of 70 have cognitive impairment in the form of memory loss, and it is estimated that about half of them will have AD. This proportion increases to somewhere between one in four and one in two above age 85. Most of these patients will live less than a decade with dementia. AD can be of early onset or late onset (EOAD and LOAD, respectively). The latter is much more common, beginning after age 60 (some say 65). EOAD accounts for 1% to 6% of all cases. In both types, some people have a family history of AD. About 60% of EOAD patients have multiple family members with AD, of whom more than one in ten show autosomal dominant inheritance over three or more generations. In some cases, families with late-onset disease have also included early-onset cases. The beta-amyloid protein implicated in AD's pathogenesis is a 42-amino-acid peptide (Aβ42). It is one of several fragments, along with Aβ40, created from the amyloid precursor protein (APP) by secretase cleavage. APP is a type-I integral membrane protein found in several tissues but highly expressed in neural synapses. Though its function remains unknown, it likely regulates the formation of synapses and the impulse traffic through them. Two enzymes can carry out the first cleavage step: α-secretase, which catalyzes normal, non-neurotoxic cleavage of APP, and β-secretase, which carries out neurotoxic cleavage. The second step is the cleavage of this product by γ-secretase to produce beta-amyloid (Aβ). Genetic Variants of AD A few autosomal dominant families have shown mutations in single genes. For instance, one family in Colombia has over a thousand members carrying a single mutation (in the PSEN1 gene) that causes EOAD. Association studies, by contrast, identified the APOE gene as a risk factor for the late-onset form. Autosomal Dominant AD – Associated Genes Multiple genes have been associated with autosomal dominant AD, but they are responsible for less than one percent of cases. "A large proportion of the heritability of AD continues to remain unexplained by the currently known disease genes." Lars Bertram The APP gene is on chromosome 21q, the chromosome implicated in Down syndrome via trisomy. Down syndrome patients develop early AD in their forties, with visible amyloid deposits. Over 30 missense mutations have been found close to the Aβ peptide sequence in APP, mostly affecting the secretase cleavage sites or transmembrane domains on exons 16 and 17. These mutations, such as the Swedish and London mutations, are responsible for 10-15% of early-onset familial AD (EOFAD), mostly within specific families, with AD presenting by the mid-forties. EOFAD mutations mostly alter the proportion of Aβ42 relative to other Aβ isoforms. Another type of AD involves approximately 180 PSEN1 missense mutations in about 400 families. These affect the gene at position 14q24.2, which encodes a major part of the complex required for the γ-secretase cleavage of APP. They account for 18% to 50% of autosomal dominant EOFAD.
PSEN1 mutations seem to promote Aβ42 formation over Aβ40 while reducing overall γ-secretase activity. This oligomer-forming peptide may thus serve as an early biomarker of AD in the preclinical stage. These mutations are associated with the most severe forms of AD, appearing in every generation and at early ages, even at 30 years in some cases. Features of PSEN1-associated AD include not just dementia but parkinsonism; some families with such mutations also show limb spasticity and seizures. PSEN2 mutations on chromosome 1 have also been identified but are rarely the cause of EOFAD and are associated with later onset relative to PSEN1. Like PSEN1, PSEN2 shows alternative splicing, and its product is part of the γ-secretase cleavage complex. PSEN2 mutations appear to cause greater phenotypic variability among patients in affected families, perhaps because environmental factors play a greater modifying role. Effects of EOFAD Mutations Currently, 24 APP, 185 PSEN1, and 13 PSEN2 mutations have been reported. All of these increase the Aβ42:Aβ40 ratio, and one boosts the levels of multiple types of amyloid, including Aβ, thus causing Aβ aggregation. Aβ42 is more likely than Aβ40 to form oligomers and, eventually, amyloid fibrils. Neurofibrillary tangles due to tau protein hyperphosphorylation are also common in the brains of AD patients; this microtubule-associated protein forms insoluble aggregates after phosphorylation. More than 90% of AD cases are sporadic LOAD, apparently resulting from interactions between multiple genes and environmental factors. The immediate relatives of patients with late-onset disease do not show a Mendelian pattern of inheritance, and no single gene has yet been identified that can be blamed for the condition. Several apolipoprotein E (APOE) gene variants on chromosome 19q13.2 have been associated with sporadic LOAD. Still, other interacting risk factors are undoubtedly involved, since the risk allele APOE ε4 is found in many people who live to be 90 or more. This gene is key to cholesterol and triglyceride metabolism and distribution in the human body, and the APOE ε4 allele is linked to higher blood cholesterol levels. The three APOE alleles (ε2, ε3, and ε4) are thought to differ in how they modulate amyloid aggregation and tau hyperphosphorylation. APOE is the single most important risk factor for LOAD, but its ε4 allele is neither essential nor sufficient to cause AD. Rather, it reduces the age of onset of AD in a dose-dependent manner. ApoE normally helps clear lipids from the circulation for breakdown. When bound to lipid in the brain, apoE attaches itself to amyloid-beta aggregates and, depending on the specific isoform, promotes Aβ removal. Individuals with APOE ε4 show higher levels of amyloid and tangles, with more mitochondrial damage. The number of copies of this allele is proportional to the risk of AD in individuals over 65 with a positive family history of the condition. With one APOE ε4 allele, the risk of AD is 3-4 times higher relative to those with two APOE ε3 alleles. A single APOE ε2 allele, conversely, reduces the risk relative to two ε3 alleles. Yet even the APOE ε3/ε3 profile can be associated with a threefold or greater risk of developing AD by the age of 90, more than that conferred by a single ε4 allele, suggesting the crucial role of factors other than APOE alone.
The risk among family members of an individual homozygous for APOE ε4 (an uncommon genotype) is 44% by the age of 93, again demonstrating that roughly half of the people carrying one or more copies of this allele are spared AD. The ε4/ε4 genotype carries the greatest risk for AD among those with a positive family history. It is found in about one percent of Caucasians, versus one in five among communities with familial LOAD, Black Americans, and Caribbean Hispanics. About 40% of those with LOAD lack even one ε4 copy of the APOE gene, while those with one copy have a lower risk of AD by the age of 87. With LOAD, first-degree relatives have a lifetime risk of AD of one in four to one in five, compared to one in ten in the rest of the population. Women with the ε4/ε4 genotype have a 45% chance of AD by the age of 73, but men have only a one-in-four chance. Other genes in the vicinity have been proposed to cause a higher AD risk, but this is considered less likely than the involvement of APOE itself. Overall, 15 genes show genome-wide significance, but the disease-enhancing or disease-causing variants are known for only four: APP, PSEN1, and PSEN2 for EOFAD, and APOE for LOAD. Many other genes show technically significant associations with increased AD risk, of which ten show the strongest links. These include SORL1 (sortilin-related receptor), ACE (angiotensin-converting enzyme 1), and interleukin 8 (IL8). Using genome-wide association studies (GWAS), as many as one million genetic markers (single nucleotide polymorphisms, SNPs) have been tested for AD risk, yielding ATXN1 (ataxin 1) and CD33 (siglec 3), among others. The AD geneticist Rudolph Tanzi identified APOE and CD33 as the only genes that showed GWAS significance in both family and case-control series. Both protective and high-risk single nucleotide changes have been reported.
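To make the dose-dependent APOE effect concrete, here is a minimal sketch in Python. The relative-risk values are illustrative assumptions anchored to the figures quoted above (one ε4 allele roughly 3-4 times the ε3/ε3 baseline; ε2 protective); the ε4/ε4 and ε2 numbers in particular are placeholders, not clinically validated estimates.

```python
# Illustrative only: approximate relative risks of LOAD by APOE genotype,
# anchored to the epsilon3/epsilon3 baseline, using figures quoted above.
APOE_RELATIVE_RISK = {
    ("e3", "e3"): 1.0,   # baseline genotype
    ("e3", "e4"): 3.5,   # "3-4 times higher" with one epsilon4 allele
    ("e4", "e4"): 12.0,  # assumed placeholder: dose-dependent, highest risk
    ("e2", "e3"): 0.6,   # assumed placeholder: a single epsilon2 protects
}

def relative_risk(allele1: str, allele2: str) -> float:
    """Look up an illustrative relative risk for an APOE genotype.

    Alleles are order-independent, so the pair is sorted before lookup.
    """
    key = tuple(sorted((allele1, allele2)))
    return APOE_RELATIVE_RISK[key]

print(relative_risk("e4", "e3"))  # 3.5
```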
As military activity in Syria winds down, the likely resumption of agricultural trade will in turn improve the outlook for food security. This will benefit all parties involved in the conflict in Syria, whether pro-government or pro-rebel, as well as neighboring countries. Restored regional food security means not only enhanced and stable access to food for millions of displaced Syrians, but also improved livelihoods for many agricultural and refugee-host communities in neighboring countries. With a unique position as both a trade corridor and a trade bottleneck in the Middle East, Syria offers access to regional agricultural markets for Lebanon, Turkey, Iraq, and Jordan, all of which have been heavily affected by the Syrian war and continue to host large numbers of refugees. In fact, Syria plays an even larger role in global agricultural trade, connecting supply routes to Eastern Europe, Central Asia, Russia, and the Gulf Cooperation Council (GCC) countries. The Syria crisis prompted two major trends that worsened food insecurity in the region. The first is the decline in total agricultural trade with and through Syria. Trade routes shifted to avoid military action and/or closed borders. These longer and often riskier routes—controlled by militias, the Islamic State (IS), or even government forces demanding “passage facilitation fees”—raised transportation costs and disrupted supply chains. The second is a decline in overall agricultural production, not just in Syria but also in neighboring countries, due to such factors as population displacement, limited access to water, and military activity. These two trends became a vicious cycle with tremendous negative consequences for stakeholders across the agriculture and food supply chains, from producers and consumers to traders and governments. Together they have led to shortages of basic foods, price hikes, and a loss of revenues. Prior to the war, there were several trading routes for food supply to, from, and through Syria. Lebanon exported its agriculture and food produce to Syria primarily through its Masnaa crossing. From there, the produce journeyed on to Jordan through the Nassib crossing on the Syrian–Jordanian border and then to the GCC. Turkey exported its produce through two major routes: the first to Syria through the Bab al-Hawa crossing and on to Jordan and the GCC; the second through the Nusaybin crossing to the Iraqi border crossings of either Rabia or Abu Kamal. Jordan and Iraq also depended on the Syria corridor to import their food supplies from Europe and Russia via Lebanese and Syrian ports. The Syrian war rerouted the food supply chains for all of these countries. Lebanon and Turkey had to shift supply routes to the GCC to maritime and aerial freight, while Jordan relied on receiving its imports through Israel and its southern port in Aqaba. Iraq’s imports from Turkey are currently funneled through the Kurdistan Region of Iraq. Syria continues to be a major trading partner for all its neighboring countries.1 Consequently, the closing of its borders has severely affected formal trade between Syria and Jordan. For example, food and agriculture exports from Jordan to Syria dropped from around $89 million in 2010 to $27 million in 2016, and imports dropped from around $272 million to $64 million over the same period. Syrian farmers and exporters lost an important export destination, while traders and consumers in Jordan lost a major food source.
In Jordan, this has contributed to high inflation in food prices—already pushed up by the influx of around 2 million Syrian refugees—which has fueled recent demonstrations. However, the ongoing normalization of political and trade relations between the two countries, exemplified by the reopening of borders in 2018 and the resumption of bilateral food trade, will help to relieve these tensions and restore the livelihoods of Syrian families returning to rural agricultural areas, especially in Syria’s south around Daraa and Wadi Hauran. Restoring the trade routes through Syria would also help the ailing Lebanese economy. Although Lebanon has increased exports, it has not been able to capitalize on this increase due to higher transportation costs. In 2016, Lebanon’s food exports to the GCC, particularly Saudi Arabia and the UAE, rose to $263 million—despite the closure of the Jordanian borders—compared to $214 million in 2012, when the Jordanian border was still open. Iraq, a secondary destination for Lebanese food and agriculture products, imported around $40 million worth of products, up from around $30 million prior to the Syria conflict. Nevertheless, Lebanese exporters shifted their transport routes to maritime and air freight in order to avoid Syrian territory. These methods cost around 60 percent more than land transport, eating into the profits of these exports. Although some sporadic trade continued overland into Iraq, traders had to pay more freight insurance in addition to the unpredictable costs of paying off local militias. In addition to higher transportation costs, increased water scarcity, and competition from Syrian smugglers seeking to fetch higher prices in Lebanon, shifting trade routes have contributed to an overall reduction in agricultural production within Lebanon. Not only has this raised domestic food prices, but Syrian refugees—upon whom the Lebanese agricultural sector relies heavily for labor—are losing work opportunities with the decline of agricultural activity. Meanwhile, the Lebanese government is losing tax revenue from exports while its trade deficit grows. Lebanese farmers and food industrialists praised the reopening of the Nassib border crossing between Jordan and Syria in October 2018 and have begun exporting through Syria and Jordan on their own initiative and at their own risk. However, the Lebanese government has not yet legislated on this opportunity, as it needs to overcome internal political differences between its pro-regime and anti-regime constituents. Only then will it be able to coordinate with the Syrian government to facilitate the passage of Lebanese food exports through Syria. For Iraq, the resumption of trade is particularly vital in addressing food insecurity in the recently liberated governorates of Anbar, Salahuddin, and Nineveh. Largely cut off from the rest of Iraq while under Islamic State rule between 2014 and 2017, and later cut off from IS-controlled trade routes in eastern Syria, these predominantly Sunni Arab governorates have not yet been able to return to a level of agricultural activity that ensures food security. Large swaths of former agricultural land across Salahuddin, Nineveh, and Kirkuk—the breadbasket provinces of Iraq—still lacked vegetation in the 2017 and 2018 growing and harvesting seasons. This is partly because Iraqi internally displaced persons (IDPs) have been reluctant to return.
But even in areas where displaced people have returned, there has not been sufficient government support to resume agricultural activity, whether by rebuilding damaged irrigation canals, providing seeds, or subsidizing the cost of production. Prior to the Syrian crisis, Syrian agricultural exports—consisting mainly of foodstuffs and animal products—generated $1.1 billion. Since 2014, the Abu Kamal, Tanf, and Rabia border crossings between Syria and Iraq have remained closed to commercial traffic, cutting off this trade apart from some smuggling. Cross-border animal trade and the free movement of herds are particularly essential for safeguarding local sources of food in war-struck provinces such as Anbar and Nineveh. Almost one third of Iraq’s total 13.5 million sheep, cattle, and goats and 39 million poultry were lost in the IS-dominated governorates of Nineveh, Salahuddin, Kirkuk, and Anbar. This happened for a number of reasons: livestock were killed during clashes, stolen amid the security vacuum, lost to disease for lack of veterinary access, or confiscated by IS and sold on local markets to keep meat prices low and accessible to the public. Access to livestock, therefore, would enable local economic activity through dairy production and cottage industries, both of which are a major source of primary income for female-headed households and displaced families. This would help stabilize these provinces in the absence of reconstruction aid from the paralyzed government in Baghdad. Furthermore, the rest of Iraq would also benefit in the short term from importing food from Syria to make up for diminished production, thereby addressing both the poor conditions in these war-torn governorates and dwindling water supplies caused by dams built by Turkey and Iran. Similarly, Kurdistan would benefit from decreasing its dependency on Turkey, currently its main food supplier and trade corridor, and (to a lesser extent) Iran, which has been sealing off its borders with the region due to increased Kurdish guerilla fighting within Iran. For Turkey, food security in the border areas of Anatolia would likewise improve if northern Syria were to become more stable. Even though food and agriculture exports from Turkey to Syria grew from around $200 million in 2010 to around $600 million in 2016—despite the formal closure of borders—stability in northern Syria would likely see refugees return from Turkey. This would not significantly change the national food supply but would bring down costs, especially in major urban areas and the border towns of Antakya, Gaziantep, Sanliurfa, Hatay, Adana, Mersin, Kilis, and Mardin. While Turkey already exports around $600 million of food informally to rebel-held areas—and through them to the rest of Syria—an open border could restore trade routes for its fresh vegetables, fruits, and meats to the GCC markets. In 2016, these exports were worth at least $1.2 billion and are currently being shipped or flown daily at higher transportation costs. Above all, Syria itself would benefit the most from restored food trade in the region. Not only could customs and taxes on the transit of food and agriculture products provide much-needed income for reconstruction, but the resumption of food exports is also necessary to relaunch the agribusiness sector in Syria, the second largest employer after the government. Prior to the war, Syria exported around $3 billion worth of food and agriculture products.
Agriculture continues to account for at least 26 percent of Syria’s GDP and provides a social safety net for around 6.7 million Syrians. With increasing focus on reconstruction, investment in agriculture could recoup the $16 billion of damage the sector accrued during the war. Syria and its neighbors all have a vested interest in regaining regional food security. Even if a comprehensive political settlement for the Syrian conflict remains elusive, food security—within Syria and regionally—should be the basis for the cessation of hostilities and a common ground for starting reconstruction within Syria, resuming bilateral trade arrangements, and reinvigorating political dialogue. Hadi Fathallah is an economist and policy adviser at NAMEA Group; a fellow of the Cornell Institute for Public Affairs; and a member of the Global Shapers Community, an initiative of the World Economic Forum. The designations and data employed in this piece do not imply the expression of any opinion by the author or NAMEA Group concerning the legal status of any country, territory, or political solution to the Syrian crisis. Follow him on Twitter @Hadi_FAO. 1. Data in this analysis is obtained from the latest reported data available through UN COMTRADE, based on the Harmonized System (HS) standard for documenting and classifying product data and trade.
It’s a long-standing tradition to discuss how alloying elements give steel better properties – for example, nickel makes steel tougher and chromium makes it harder. A complete study would require investigating a large number of ternary phase diagrams over a wide temperature range. To summarize, however, scientists have divided alloying elements into three classes: ferrite stabilizers, austenite stabilizers, and carbide formers. Depending on which group is added, four types of change can occur in the phase diagram, including expansion or contraction of the austenite region. In this article, we give a brief introduction to steel classification, the phase-diagram variants, and the effect of alloying elements in steel. For background, it is best to first read the article on the TTT diagram of steel, which covers the basics of the TTT diagram and the transformation terms used here. Steel classification largely depends upon the carbon percentage and the alloying elements added, and the importance of alloying elements can be seen from the classification below. Steel is divided into two major groups; - Carbon Steel - Alloy Steel Carbon Steel: Carbon steel contains up to 2% carbon with no ternary alloying additions. It is further divided into low-, medium-, and high-carbon steel. Alloy Steel: If one or more ternary alloying elements are present along with carbon, the steel is termed alloy steel. More precisely, an alloy steel is one to which alloying elements have been deliberately added, along with carbon, to produce specific beneficial effects. Alloy steel is further divided into low-alloy and high-alloy steels. Effect of Alloying Elements in Steel and the Phase Diagram Alloying atoms occupy interstitial or substitutional positions in the iron lattice and create strain fields around themselves. This distortion of the crystal structure and the presence of strain fields hinder dislocation movement, making the steel stronger and harder. For a complete understanding of the effect of alloying elements in steel, it is best to study how the phase diagram behaves as alloying elements are added. It also helps to review the articles on the TTT diagram in steel, martensitic transformation, and Widmanstätten transformation for a brief idea of how these diagrams work. As explained earlier, adding alloying elements makes the system ternary, so the binary iron phase diagram is modified in one of the following ways; - Open gamma field - Closed gamma field - Expanded gamma field - Contracted gamma field Type 1: Open Gamma Field As seen in the figure of the open gamma field, the gamma region is expanded: the gamma-to-delta transformation is raised to higher temperature and the gamma-to-alpha transformation is depressed to lower temperature. Elements with this effect include nickel, manganese, cobalt, and noble metals such as platinum. If nickel or manganese is added in fairly high amounts, austenite can be stabilized all the way down to room temperature; as the figure shows, the A3 and A1 lines vanish completely at high alloy concentrations. With austenite stable at room temperature, austenitic steels are produced using high concentrations of nickel and manganese.
Type 2: Expanded Gamma Field This group is somewhat similar to the open gamma field: it likewise depresses the gamma-to-alpha transformation to lower temperature and raises the gamma-to-delta transformation to higher temperature, but the range of existence of the gamma field is limited. The most important elements in this group are carbon and nitrogen; elements like copper, zinc, and gold have similar effects. The expansion of the homogeneous gamma region proceeds up to about 2% carbon and 2.8% nitrogen. With further addition of these elements, the gamma field becomes inhomogeneous and new phases nucleate, as seen in the figure. Type 3: Closed Gamma Field Within this group, the added elements close off the gamma field entirely. The most common elements in this group are aluminum, beryllium, and phosphorus. Carbide-forming elements like titanium, vanadium, molybdenum, and chromium also help close the gamma field by forming carbides and reducing the available carbon. Steels with a large weight percentage of these elements are not suitable for heat treatment, owing to the absence of the gamma-to-alpha region. This means that martensitic transformation is not possible with these alloying additions. Type 4: Contracted Gamma Field This is somewhat similar to the closed gamma field in that the gamma loop is strongly contracted, but it is also accompanied by compound formation, as shown in the figure. Boron, along with the carbide formers tantalum, zirconium, and niobium, contributes most strongly to this group. As with the closed gamma field, heat treatment is not possible with the alloying elements of this group. Classification of the Effect of Alloying Elements in Steel We have discussed four variants of the iron–iron carbide phase diagram, where the carbon content remains the same while the alloying-element concentration is varied. Based on these diagrams, alloying elements in steel are broadly classified as; - Austenite stabilizer elements (e.g., Ni, Mn, Co, Pt). As discussed above, these elements are responsible for the open gamma field region. They are used in austenitic steels such as Hadfield steel. - Ferrite stabilizer elements (e.g., Si, Mo, Cr, Al, Be, Zr, Ti, V). With ferrite stabilizers, the gamma loop shrinks and the matrix is mostly ferrite; such steels are called ferritic steels. A typical application is transformer sheet, made from low-carbon steel with about 2% silicon. - Carbide-forming elements (e.g., Cr, Mo, W, V, Nb, Ta, Ti, Zr) - Carbide-stabilizing elements (e.g., Cr, Mn, Si, Zr, Hf, Mo) - Nitride-forming elements (e.g., Cr, Al, Ti, Ta, V, Nb) Details of these alloying elements and their behavior in steel are discussed later in the article. Effect of Alloying Elements on the Eutectoid Point In plain carbon steel, the eutectoid transformation occurs at about 0.8% carbon and 727°C. This is one of the most important and widely used solid-state transformations, converting austenite into pearlite upon cooling. With the addition of austenite stabilizers, ferrite stabilizers, or carbide-forming elements, the eutectoid point changes position on both the temperature scale and the carbon scale. Austenite stabilizers stabilize austenite to lower temperatures, so the eutectoid transformation occurs at a lower temperature, while carbide-forming elements, with their affinity for carbon, alter the carbon percentage required for the eutectoid reaction.
This shift in the required temperature and carbon percentage with various alloying elements is shown in the figure below. Distribution of Alloying Elements in Steel We now have a clear picture of how these elements help determine the final microstructure of steel. Since iron has BCC and FCC structures over the working temperature range and the solubility of interstitial atoms is limited, when many alloying elements are added, some remain in solution, some form carbides, and some exist as inclusions. In commercial steels, alloying elements are normally present in the following forms: - In the free state, as separate entities, such as particles of platinum - As intermetallics with iron or other alloying elements - As inclusions, oxides, or sulfides - As carbide compounds - In homogeneous solid solution in iron To understand the behavior of alloying elements within iron, we divide them into two groups; - Elements unable to form carbides (e.g., Si, Ni, Al, Cu, and Co) - Carbide-forming elements (e.g., Mo, V, Cr, W, and Ti) Alloying elements that form neither carbides nor intermetallics dissolve as a homogeneous solid solution. Austenite and ferrite have limited solubility for alloying elements. For example, Cu can be dissolved in iron up to a maximum of about 7%; beyond this it exists as a metallic inclusion. Nitrogen has a maximum solubility of about 0.015%, and any excess exists as inclusions or forms nitrides with other alloying elements such as V, Al, and Ti. Elements that form inclusions can be a problem for steel, as the inclusions are not as hard as iron and soften the structure, but some inclusions are very beneficial to steel’s performance – oxide formers, for instance. Iron is made in the blast furnace and can contain a considerable amount of oxygen, which reacts with iron to form iron oxide and degrades properties. Dissolved elements with a greater affinity for oxygen act as deoxidizers and remove this oxygen. Some carbide-forming elements, such as V and Ti, also react with oxygen to form oxides that pin grain boundaries, producing a fine microstructure and a harder steel. Carbide-forming elements can exist either in solid solution or as carbides; the distribution depends on the alloying-element concentration and the carbon percentage. If a steel contains a high percentage of alloying elements, all the carbon will be consumed by the carbide formers, but the excess alloying elements will still be present in the microstructure, in solution or as inclusions. Effect of Alloying Elements on the Steel TTT Diagram We have characterized alloying elements as austenite stabilizers, ferrite stabilizers, and carbide formers. Each has a considerable effect on the TTT-diagram curves and on the transformations taking place: in general, alloying elements affect the kinetics of all transformations, including the pearlitic, bainitic, and martensitic transformations. Follow the TTT diagram in steel article to study these transformations. All transformations, whether diffusional or diffusionless, depend on the critical cooling rate (CCR): the cooling rate whose curve is tangent to the nose of the C-shaped TTT curve. Any cooling rate equal to or faster in magnitude than the CCR will produce martensite.
The following factors affect the critical cooling rate on the TTT diagram; - Grain size - Carbon content - Alloying elements An increase in carbon content, alloying-element percentage, or grain size shifts the C-curve to the right, making martensite formation easier. Grain Size: A fine grain size provides more grain-boundary area and more nucleation points, making pearlite nucleation easier and promoting diffusion-based transformation. With a coarse grain size, triple points and grain-boundary area are limited, giving few nucleation points for pearlite and thereby delaying the pearlitic transformation; this shifts the C-curve to the right. Carbon Content: With increasing carbon content, the likelihood of martensite formation increases, because hardenability is more easily achieved at higher carbon levels. An important point to consider is that increasing carbon lowers the martensite start temperature, Ms. Alloying Addition: Different alloying elements affect the steel TTT diagram differently, but one effect common to most of them is a shift of the C-curve to the right: alloying additions slow the diffusion process, so the time required for pearlite formation increases. This can be understood from the figure below. Beyond this, the effect depends on whether the element is a ferrite stabilizer or an austenite stabilizer. Effect of Austenite Stabilizer Elements on the TTT Diagram: With austenite stabilizers like Ni and Mn, the eutectoid transformation temperature is lowered (following the open-gamma-field behavior), making austenite stable to lower temperatures. The pearlite transformation is likewise delayed and depressed, so the pearlite and bainite regions merge. Effect of Ferrite Stabilizer Elements on the TTT Diagram: When ferrite stabilizers like Cr, Mo, and V are added, the austenite region shrinks and the eutectoid transformation line moves up, separating the pearlitic and bainitic regions. Not only does diffusion slow down, shifting the pearlite region to the right, but the pearlite and bainite bays also separate, which makes the bainitic transformation easier to control. Summary (Effect of Alloying Elements in Steel) From the discussion above, it is clear that the effect of alloying elements in steel is of prime importance in achieving optimum properties. A few common observations: alloying elements delay the diffusion-based transformations, thereby reducing the critical cooling rate required for the diffusionless transformation; an increase in grain size promotes martensitic transformation; and an increase in carbon percentage likewise promotes martensite formation, while lowering the Ms temperature.
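To make the effect of composition on Ms concrete, here is a minimal sketch using Andrews’ widely cited empirical relation for low-alloy steels; the formula comes from the general literature, not from this article.

```python
def ms_andrews(c, mn=0.0, ni=0.0, cr=0.0, mo=0.0):
    """Approximate martensite-start temperature Ms (deg C) for low-alloy
    steels via Andrews' (1965) linear empirical fit.
    All inputs are compositions in weight percent."""
    return 539 - 423 * c - 30.4 * mn - 17.7 * ni - 12.1 * cr - 7.5 * mo

# Raising carbon from 0.2 to 0.8 wt.% lowers Ms by roughly 250 deg C:
print(ms_andrews(0.2))  # about 454 deg C
print(ms_andrews(0.8))  # about 201 deg C
```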
The effects of alloying elements in steel discussed above can be summarized as follows:

| Element | Nature | Effect on Phase Diagram | Effect on Steel |
| --- | --- | --- | --- |
| Manganese | Austenite stabilizer | Open gamma field | Improves hardenability, wear resistance, and strength at elevated temperature |
| Nickel | Austenite stabilizer | Open gamma field | Improves strength, toughness, and, in combination with other elements, corrosion resistance |
| Copper | Austenite stabilizer | Expanded gamma field | Improves corrosion resistance |
| Cobalt | Austenite stabilizer | Open gamma field | Improves strength at elevated temperature and magnetic permeability |
| Titanium | Carbide former | Closed gamma field | Improves strength and corrosion resistance; limits austenite grain size |
| Zirconium | Carbide former | Contracted gamma field | Improves strength; limits grain size |
| Boron | Ferrite stabilizer | Contracted gamma field | Highly effective hardenability agent; improves deformability and machinability |
| Silicon | Ferrite stabilizer | Closed gamma field | Improves strength and acid resistance; promotes large grain size; deoxidizer |
| Aluminum | Ferrite stabilizer | Closed gamma field | Deoxidizer; limits austenite grain size |
| Chromium | Ferrite stabilizer / carbide former | Closed gamma field | Improves hardenability, strength, and wear resistance; sharply increases corrosion resistance |
| Tungsten | Ferrite stabilizer / carbide former | Closed gamma field | Increases hardness at high temperature owing to stable high-temperature carbides; limits grain size |
| Vanadium | Ferrite stabilizer / carbide former | Closed gamma field | Increases strength, hardness, creep resistance, and impact resistance; limits grain size |
| Molybdenum | Ferrite stabilizer / carbide former | Closed gamma field | Increases hardenability and strength, particularly at high temperature |
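As a closing practical note, the combined hardenability effect of several of these elements is often condensed into a single carbon-equivalent number. The IIW formula shown here is one common form from welding practice; it is not part of the original discussion above.

```python
def carbon_equivalent_iiw(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """IIW carbon equivalent (inputs in weight percent): a rough
    single-number index of how strongly a composition promotes
    hardenability, and hence martensite formation on fast cooling."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# Example: a 0.2C-1.2Mn-0.5Cr steel
print(round(carbon_equivalent_iiw(0.2, mn=1.2, cr=0.5), 2))  # 0.5
```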
After the Depression: Overall, wages for American workers improved only after the Roosevelt New Deal began placing severe regulatory restrictions on the natural behavior of corporations that employed large numbers of workers. Then for a while economic wellbeing for most Americans (except dark-skinned Americans) waxed tolerable fair, so long as corporations remained under tight regulation. Slowly, slowly, economic destitution gave way to minimum adequacy. Overall, for many people, things got better. After the war: The New Deal really kicked in, enormously augmented by the GI Bill. The middle class grew bigger than ever before in American history. The poor shrank smaller than ever before. The rich grew richer more slowly than ever before. For a large majority of All The People (even some dark-skinned people), it was the best economic time ever experienced in the nation’s history. This uniquely nice period stands alone. It lasted from 1946 to 1980. After the New Deal years: But of course it didn’t last. Beginning in 1980 the Reagan Administration’s economic ideologues started dismantling the New Deal. The dismantling continues to this day, and it is a testimonial to the quality and depth of New Deal innovations that it is taking the slave-owner mindsets so long to dismantle them. And make no mistake—the dismantlers exhibit the same old motivations that kept people enslaved, forced to give their free labor to ole massa. Ole massa’s mindset is alive and well in modern America, and since there will always be doubters, let’s get specific. The cost-cutting continuum ranges from bald slavery on one end to a host of modern half-measures on the other, including these familiar examples of converting costs to profits: - Requiring workers to constantly increase productivity but refusing to share the increased profits with the workers who produce them; - Insisting that workers take pay cuts “so the corporation can remain competitive”; - Limiting employees to less than forty hours a week to avoid paying the fringe benefits which are federally required at and above forty hours’ labor; - Defining employees as “contractors” in order to avoid the cost of fringe benefits; - Refusing to grant pay raises even though the costs of living keep increasing; - Paying employees minimum wage or less and advising them to get on welfare; - Paying high dividends to stockholders and obscene mega-salaries to CEOs while holding employee wages to the lowest possible minimum they can get away with; - and sending American jobs overseas, leaving American workers unemployed. In these examples the trend is clear. The road to slavery is only a matter of degree. The slave owner mindset Now here’s the thing—this factual history confronts us with a distressing truth. In the colonial territories and then the United States, slavery persisted from 1519 to 1865—three hundred and forty-six years—only because some humans were willing to enslave other humans. For three hundred and forty-six years they did this. Willingly. Knowingly. From our nation’s 1789 founding through the year 2020 is but 231 years. The slave owner mindset has been around a long time, a lot longer than we’ve been a nation. And it has never gone away.
Persons with this mindset and the means to do so were perfectly willing to buy enslaved humans, to own them as slaves, to sell them, to sell their children and ignore their grief, to ignore their essential humanness, to force them to work long hours every day for no pay, to discipline and mistreat them and occasionally to beat them until they died, and to regard them as animals—all at no consequence to the slave owner in his legal rights. Just as the slave owner’s heart was untouched by the slave’s misery and degradation, the modern CEO’s heart is untouched by the corporation’s lowest-wage employees who must work multiple jobs to afford the bare subsistence needs of life—or who become jobless when the jobs are sent overseas. The road to slavery is only a matter of degree. Moreover, slave owners were enabled to perpetrate these things because their peers were mostly slave owners too, and had the same untouched hearts and mindsets. Worst of all, the period 1519 to 1865 in our part of North America is not an anomaly in the human story. Looking backward: Slavery has been acceptable to and considered normal by otherwise respectable humans since ancient times. It is hard to find a time in human history when slavery was not acceptable and indeed considered normal. Slavery is prominent from the earliest times described in the Old Testament and comparable ancient literature. The Jewish slaves’ exodus from Egypt is central lore of Western civilization and the cultures of three Abrahamic world religions. The details are routinely taught to small children in Sunday School, with the result that they unthinkingly presume the existence of slavery in a church-related context their parents seem to regard favorably—taught to unthinkingly accept the age-old “normality” of slavery, you might say. I remember the first time I heard about it in Sunday school, when I was perhaps four or five years old, and my recollection is it was conveyed matter-of-factly, with no particular condemnation. I distinctly remember thinking that people enslaving other people was a bizarre idea. Slavery was routine in Greek and Roman cultures, all through the Middle Ages in Europe, and at all times in Oriental cultures. Most Americans are unaware that the sheer number of African people shipped as slaves to Brazil was seven times greater than the total number ever enslaved in the United States as both colonies and nation. In most places and times, slavery has been considered normal. In many places it still is. The slave-owner mindset is alive and well in the presumptive subsistence-level context of all those moderns who pay minimum wage. The road to slavery is only a matter of degree. Looking around today: In all modern nations we find pimps perpetrating slavery over the prostitutes whose lives, movements and wellbeing they control, often with slave-owner brutality. These moderns willingly perpetrate enslavement without remorse—and collect their unearned profits. Criminal jackals charge enormous fees to “escort” refugees and others wanting to illegally cross national borders, leaving them vulnerable to the undependable protection of human smugglers who may abandon or kill them—or turn them over to ransom collectors. When jackals are caught and incarcerated, new jackals readily appear to continue the trade—and, without remorse, harvest its huge profits, their unearned pay. There is not an Arab or Middle Eastern nation today in which slavery is not commonplace and accepted, whether out of sight, barely out of sight, or openly in sight.
Terrorists in many countries routinely capture and enslave other humans, typically committing the men and boys to forced labor or forced combat and the women and young girls to sexual slavery. There are few nations that do not permit and engage in some degree of forced labor by convicted felons at no cost to the state which incarcerated them. As many Filipina women engaged off-the-record as housekeepers and nannies in private American homes have attested, the road to slavery is only a matter of degree. The point of this grievous recitation is that slavery is not just a thing of the past; it is alive and well all over the world today—moderns willingly enslaving other moderns. Did you think the American Civil War ended slavery? Did you actually think the human mindsets perfectly capable of enslaving other humans were all in the past, only a mere century and a half ago? There have always been humans who were willing to enslave other humans. There still are. A lot of them today are in charge of very large corporations, making the decisions on how little they can get by with paying their employees. Looking ahead: The slave-owner mindset has persisted from time immemorial, and some people whose names you know are still quite capable of renewing it. We should not be naively deceived that our society is without plenty of people perfectly capable of newly enslaving others and—as all across the Old South—willing to do so if opportunity should arise. Did you really think we had become so civilized it was impossible to restore it? Incipient slave-owner mindsets are very much with us still, perfectly willing to turn other people into slaves, to buy, own and sell slaves, perfectly willing to force enslaved human beings to work for nothing, with no trace of human caring for the health or wellbeing of the enslaved—and perfectly willing to pay less than the modern minimum wage if they’re pretty sure they can get by with it. Slavery is just a matter of degree. Look around and see the obvious: the slaveholder persuasion thrives in modern America. In order to enhance profits, people among us today—some perhaps your neighbors, or perhaps in the gated development up the road—are perfectly willing to employ other people at less than the minimum wage—even when they know full well that a minimum wage is grievously inadequate for subsistence in modern America. You know they know. Without remorse, they do not feel human caring for the low-paid employee’s health, stress, poverty or wellbeing. The slave-owner mentality easily rationalizes these wrongs: let them get on welfare, let them get food stamps, let them eat cake. In corporate culture cutting labor costs is thought acceptable—the smart thing to do. Capitalism is the system; corporations are the vehicles that sustain it. From the displays of non-caring poverty-level employee wages that everywhere surround us—from the Walmarts to the fast food joints to the low-pay fine restaurants that tell their servers to “make it in tips”—it is only a matter of degree as to how far the non-caring mind is willing to go. There is no reason to assume slave-owner mentalities will not be with us tomorrow and ever thereafter. It is only a matter of degree. It always has been. In July 1888 a schoolteacher-turned-reporter named Helen Cusack donned a shabby frock and brown veil and went looking for a job. “In factories and sweat shops, she stitched coats and shoe linings, interviewed her fellow workers in hot, unventilated spaces and did the math.
At the Excelsior Underwear Company, she was handed a stack of shirts to sew—80 cents a dozen—and then was charged 50 cents to rent the sewing machine and 35 cents for thread. Nearby, another woman was being yelled at for leaving oil stains on chemises. She’d have to pay to launder them. ‘But worse than broken shoes, ragged clothes, filthy closets, poor light, high temperature, and vitiated atmosphere was the cruel treatment by the people in authority,’ she wrote under the byline Nell Nelson. Her series, ‘City Slave Girls,’ ran for weeks.”
In our previous article in Industrial Heating (“Basics of Specialty Melting,” November 2012), we provided an overview of high-tech melting processes and how they improved aerospace materials. Ingots produced via primary melting of the raw materials are refined using electroslag remelting (ESR) and/or vacuum arc remelting (VAR). In both cases, remelting is typically on the scale of multiple tons, with the final ingot then being forged and machined to its final purpose. Some ESR furnaces can produce ingots in excess of 100 tons. On the other end of the scale, there is an equally interesting suite of applications in the form of various small to medium casting furnaces. These utilize similar high-performance materials that are typically prepared as “master alloy” in a vacuum induction melting furnace (VIM) to produce castings for the aerospace, power-generation and, to a lesser but growing degree, automotive industries. The casting of metals has been with humanity for as long as metals have been melted. Archaeological finds date back approximately 5,700 years for copper and bronze materials, and evidence of copper making (with or without tin to form bronze) dates back 7,500 years. Many improvements and variations in casting processes were developed over time as materials improved, but casting truly took off with the debut of the assembly line and the increasing demands that cars and World War II military equipment placed on cast components. These increasing demands, in quality and quantity, led to the further development of casting processes. Sand casting in particular made tremendous strides in applying automation to regulate the mixing of the sand used for molds, greatly improving batch-to-batch repeatability of both the sand and the castings made in those molds. However, the demand for precision near-net-shape components during World War II could not be met by the relatively slow process of sand casting plus machining. Instead, investment casting was pursued. Investment casting, also known as “lost-wax casting,” dates back thousands of years. It was primarily used for the casting of ornaments and jewelry due to the level of detail that could be cast into the pieces. This as-cast detail and ability to hold precise dimensions brought it to the attention of dentists in the late 19th and early 20th centuries, culminating in the invention of a new lost-wax casting machine by William H. Taggart of Chicago, who described his invention in a 1907 paper. It took the demands of the nascent jet-engine aerospace industry to further develop the dental investment-casting process. Initially, jet engines used turbine blades made by sand casting. However, these blades then had to be machined on every surface because the sand castings were not precise enough for the requirements of the rapidly rotating turbines. The resulting demand for machining swamped the machine-tool capacity of the time and brought the near-net-shape precision of investment casting to the attention of the aerospace industry. Post-WWII, investment casting expanded into many other industries and applications, though the driving force behind advances in the technology of investment casting remains the gas turbine industry (aerospace and power generation). Why high-performance materials? In order to increase the efficiency of a gas turbine (higher thrust for a jet engine, more energy per unit of fuel consumed for a power station), the combustion temperature must increase.
Long ago, this required blades, vanes and other components in the hottest parts of aircraft engines to be made from vacuum-cast superalloys. Jet engines utilize lightweight but high-temperature components, frequently made of titanium, in certain locations. Titanium reacts with typical refractory crucible materials when molten, but it can be melted in a “cold copper” (water-cooled) crucible, heated by an induction power supply and coil. On the ground, most parts for automotive applications are still air cast. Specialized casting applications, however, are increasing. Automotive turbocharger turbine wheels are perhaps the most obvious application, but many engine components are being examined in light of increasing fuel-efficiency requirements from national governments. Some parts, like engine valves (Fig. 1), are already produced in titanium or a titanium alloy but are not in widespread use. However, some engines for high-end cars further this trend and utilize even more high-performance materials to reduce weight while improving performance. Refractory Crucible Casting Furnaces Today’s typical vacuum precision investment casting (VPIC) furnace can come in a wide variety of sizes and furnace orientations (horizontal or vertical) to suit both the product being cast and the space available to install and operate the furnace and related equipment. The simplest furnaces are often batch units, so called because the melting chamber will be opened to atmosphere after every pour – a batch process. Both the induction melting system and the mold table will be held in the same chamber, and there may or may not be additional chambers for taking immersion thermocouple readings, molten-metal samples or making additions to the melt. The original vacuum furnaces were batch units, since it took time to figure out how to make effective isolation valves. Today, they meet simple process needs at a lower capital equipment cost, if also typically with a reduced yield due to a variety of factors. The alternative to a batch furnace is a semi-continuous furnace, so called because the melt chamber will remain under vacuum for multiple heats while the mold chamber, separated from the melt chamber by a vacuum isolation valve, will be opened during and after each heat to load and unload the mold. There will also be a materials charging chamber to load the master alloy barstock (or other charge material) into the melt chamber while the latter is under vacuum, and often an immersion thermocouple device with its own vacuum lock as well. Although more complex, these furnaces can be designed more specifically for the desired process, so they generally offer improved yields or are capable of making product that cannot be made in a batch furnace. Apart from batch versus semi-continuous furnaces, the main differentiation between furnaces that utilize refractory crucibles for melting is the microstructure of the castings they produce – equiaxed grains or directionally solidified (DS). Equiax Casting Furnaces The majority of castings made have an equiaxed solidification microstructure consisting primarily of small equiax grains. The grains will vary in size and number depending on the structure of the part and how the mold is prepared for the casting. Many equiax casting furnaces are built in a vertical configuration (Fig. 2), which saves shop floor space. Some are built horizontally.
These are typically larger furnaces and/or units that utilize a centrifugal casting process that spins the mold to help fill small cross-sectional thicknesses in the castings. Sometimes horizontal furnaces are used when headroom is restricted. Equiax castings are the workhorse of the casting industry. Technically, pretty much everything air cast would be considered equiaxed, given how the parts solidify. Usually, the term is specifically used in the vacuum casting industry to differentiate from DS castings. The parts themselves are often made as castings to take advantage of weight savings, cost reductions due to eliminating forging or machining operations, and similar reasons familiar to anyone who has looked at shifting parts from a fabricated component to a casting when particular alloys are not required. However, some components are more particular in alloy selection. The aerospace and power-generation industries demand this, although more industrial uses have emerged in corrosion-resistant applications and in the high-temperature environments of some oil and gas work. For example, some high-performance alloys specifically contain additional elements to act as grain-boundary strengtheners, to increase the temperature a part can withstand, or to impart additional oxidation or corrosion resistance. Parts requiring complex internal cooling can call for ceramic cores inside the initial wax part; the core survives the removal of the wax after the mold-shelling process and remains inside the shell. After casting, the core can be removed by a variety of processes, typically including an acid leach. However, development continues on cores that can maintain dimensional stability at pouring temperatures while being easier to remove from the casting. The casting of equiax components in a vacuum furnace is relatively similar to air casting, with the addition of a mold lock in the vacuum chamber to bring the mold to the induction melting furnace. Ceramic shell molds will be preheated in an oven prior to pouring to minimize the temperature shock on the shell and reduce the possibility of ceramic inclusions in the final part. They are then transferred to a mold table or mold plate in the mold chamber on the VPIC furnace. After closing the outer door, evacuating the air from the mold chamber and opening the vacuum isolation valve to the melt chamber, the mold is moved to just under the pour lip of the induction furnace and filled. The procedure is then reversed as the mold is removed from the furnace. Some equiax molds then have an exothermic material added to promote part filling, while others just rely on thicker shell or refractory cloth padding to retain heat in key areas. The molds are allowed to cool in air, and the shell is removed from the part. Further downstream, processing will depend on the exact demands of the specific part. Directional-Solidification Casting Furnaces The other type of microstructure is considered directionally solidified (DS), with a subset of single crystal (SC or sometimes SX) for a DS casting with no grain boundaries in the part. The vast majority of these parts are used in the hottest regions of aerospace and power-generation turbines. One of the principal reasons for the development of the DS process was to extend the creep life of these parts at their extreme operating-temperature conditions.
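As background to the withdrawal process described next, solidification theory commonly reasons about DS microstructure in terms of the thermal gradient G at the solidification front and the growth (withdrawal) rate R: their product sets the local cooling rate, while the ratio G/R indicates whether growth stays planar/columnar or breaks down toward equiaxed grains. This is a general framing with illustrative numbers, not a description of any particular furnace; the actual thresholds are alloy-specific.

```python
def solidification_params(gradient_k_per_m, withdrawal_m_per_s):
    """Return (cooling_rate, g_over_r) for a directional-solidification
    front. A higher G/R favors stable columnar (or single-crystal)
    growth; a lower G/R tends toward equiaxed breakdown. The exact
    thresholds depend on the alloy and are not given here."""
    cooling_rate = gradient_k_per_m * withdrawal_m_per_s  # K/s
    g_over_r = gradient_k_per_m / withdrawal_m_per_s      # K*s/m^2
    return cooling_rate, g_over_r

# Illustrative numbers only: G = 5000 K/m, R = 5 mm/min
print(solidification_params(5000.0, 5e-3 / 60))
```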
Parts that are made as DS castings typically require a certain number of columnar grains across the part to utilize the grain-boundary strengtheners in the alloy. In contrast, single-crystal alloys dispense entirely with the grain-boundary strengtheners since the point is to have no grain boundaries at all. Both types of parts are routinely cored, as previously described. This allows cooling air to flow through the part, which is typically an airfoil surface of a rotating blade or static vane in a turbine (though other static parts are also made by these methods). In the casting process, DS casting molds are typically preheated outside the furnace to dry them and reduce the time required for preheating inside the chamber. This is still performed at a lower temperature than for equiax casting molds, to reduce thermal cycling. Inside the melt chamber of the DS VPIC is a resistance or induction mold heater. Induction heaters (Fig. 3), while more expensive initially, are generally preferred over resistance heaters due to their efficiency and reduced maintenance and replacement-parts requirements. This heater is used to slowly heat the ceramic shell mold to a point above the melting point of the alloy being poured but below the melting point of the ceramic. Once the mold heating cycle is nearly complete, the metal to be poured is melted in the furnace above the mold heater, and the alloy is poured at the specified time. At this point, the slow withdrawal process begins. Unlike equiax furnaces, which typically utilize a plain-steel mold table to support the molds, DS/SC furnaces utilize a water-cooled copper plate under the mold. This plate acts as a heat sink to draw heat out of the alloy, initially freezing a multitude of individual equiaxed grains against the copper. As the mold withdraws from the mold heater, however, grains with a more favorable orientation relative to the heat sink (i.e., those that will grow directly away from the water-cooled copper plate) grow more quickly than less favored grains, eliminating many of the initial equiaxed grains. DS parts are meant to retain multiple columnar grains, which continue to grow through the withdrawal process, typically at a fairly steady withdrawal rate. In SC parts, however, an additional grain selector is used to eliminate all but one of the grains formed by the initial pour, and that single grain then grows throughout the part. As both types of molds are withdrawn from the mold heater, additional cooling methods can be used to increase the temperature differential between the mold heater and the region below. IH

For more information: Contact Aaron Teske, technical & sales manager, Asia; Consarc Corporation, 100 Indel Ave., P.O. Box 156, Rancocas, NJ 08073-0156; tel: 609-267-8000 x174; fax: 609-267-1366; e-mail: [email protected]; web: www.consarc.com

With the development of superchargers and turbochargers for aircraft, mainly to force more air into the combustion stage to bring up the pressure when flying at higher altitudes, engine designers realized that similar functionality could also be applied to other types of engines for efficiency improvements. Due to the need for advanced high-temperature metals in the turbocharger, this was initially done only on larger engines, such as ship and locomotive diesel engines. Today, turbochargers are common on virtually every diesel engine and even many gasoline engines. One reason they are more common is the ready availability of high-performance alloy turbocharger wheels for automotive engines.
The parts require a moderately complex wax injection die to produce a pattern for the lost-wax process, but they are nowhere near as complicated as aerospace castings because they have reasonably thick wall sections and no coring. Because of this, they can be produced in specifically developed “turbocharger” furnaces. This is a small-scale batch vacuum casting furnace that locates the induction coil in air, on the outside of a small vacuum bell jar. The vacuum chamber is made of a nonconductive material to allow the magnetic field to couple with the metal barstock inside. The barstock is held in a cup on top of the mold and inserted such that the metal is mostly within the induction field. The bottom end of the bar is tightly held in the mold’s refractory and acts as a plug, remaining solid while the bulk of the bar is melted. Once the bulk of the bar has melted, conductive heat from the molten metal melts the “plug,” and the metal self-pours into the mold. While the overall yield from this type of furnace is generally lower than can be attained from a dedicated semi-continuous equiax furnace, the small size of the equipment and small mold size mean that the overall capital investment cost – both for the furnace and for ancillary equipment (e.g., mold setup, handling, shell systems, etc.) – is reduced. As a result, this type of equipment is a decent entry point into the specialty casting industry.
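For a sense of scale of the melting step just described, here is a back-of-envelope estimate of the energy needed per charge. The charge mass and material properties below are generic, textbook-order assumptions for a nickel-base alloy, not values for any specific grade or furnace.

```python
# Back-of-envelope energy to melt one turbocharger-wheel charge.
# All property values are generic assumptions for a Ni-base alloy.

mass_kg = 2.0              # assumed barstock charge per wheel mold
cp_J_per_kgK = 600.0       # assumed average specific heat of the solid alloy
dT_K = 1330.0              # assumed heating from ~20 C to a ~1350 C melt range
latent_J_per_kg = 270e3    # assumed latent heat of fusion

energy_J = mass_kg * (cp_J_per_kgK * dT_K + latent_J_per_kg)
print(f"~{energy_J / 3.6e6:.2f} kWh per charge, before coil/coupling losses")
# ~0.6 kWh of melt energy: real induction supplies draw more because of
# efficiency losses, but the modest figure is consistent with the small
# equipment size and low capital cost noted above.
```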
How to be a Good Person Without God

by Benjamin Studebaker

In many western societies, religion seems to be losing influence, particularly among young people. Many religious people argue that this threatens society’s moral frameworks. Without God, on what basis do we distinguish the good from the bad? Secularists often scoff at this question, resenting the implication that only the religious can be moral. And yet, many secularists are also moral subjectivists, who claim not to believe in any absolute sense of right and wrong, arguing that morality is culturally relative or a matter of individual taste. This does seem to imply that as religion weakens, the intellectual foundation of many of our substantive moral beliefs is being eroded, and that to the extent that secularists remain good people, it is often due to socialization and intellectual inertia rather than some truly substantive alternative. But it doesn’t have to be this way–there are excellent secular moral theories that do offer compelling objective alternatives to religious morality.

It is undoubtedly true that religion is giving ground in western countries. In many European countries, less than half the population affirms belief in god. Even in the United States, religion is weakening: the number of young people who will explicitly admit that they have no religious affiliation has risen to a full quarter; those young people who do affiliate report less intense affiliations; service attendance has plummeted among the young, as has daily prayer; the importance people assign to religion is falling; certainty about the existence of god is at an all-time low among Millennials; and only a minority believe that the Bible is literally true, even among older generations. Taken together, the indication is that fewer people are religious and that those who are religious are less certain about their beliefs and less confident that they know God’s will.

Historically, people’s moral beliefs have had a lot to do with what they believe God commands. Based exclusively on religious beliefs, many people have historically been willing to kill others for having different religious beliefs or no religious beliefs at all. Others have condemned homosexuality or sacrificed animals. In some cultures, it has even been common practice to sacrifice children and babies to the gods. Many of these actions cause immense suffering and are thoroughly condemned by nearly all objective secular moral theories. How does religion practically motivate people to do these things? Most religions operate on a remarkably simple set of principles. It goes something like this:

- There is a God.
- This God has things he wants you to do (i.e. “good” things) and things he does not want you to do (i.e. “bad” things).
- If you do good things and do not do bad things (or repent for the bad things that you do), God will send you to some heaven or paradise after you die.
- If you don’t do good things and you do bad things (or fail to repent for the bad things that you do), God will send you to some hell or terrible place after you die.

In sum, religious moralities operate on a core principle of selfish egoism–you do the things that God commands to receive a reward and avoid a penalty. Abiding by religious morality is consequently no different from following the law. Indeed, religious people often speak of morality in legal terms, calling it “God’s law”. For these people, disbelieving in God means disbelieving in meaningful consequences for bad actions.
These people often claim that if there is no punishment for acting wrongly, we have no reason to avoid bad actions. This is because these people’s moral beliefs are entirely self-focused and egotist. One of the great observations of our secular moral theorists is that there is a sharp distinction between morality and law, between what it is good to do and what someone commands. Derek Parfit discusses this difference in his opus, On What Matters. Parfit claims that morality is irreducibly normative and that while natural facts can inform our decisions by making us more aware of the likely consequences of our actions, they do not in themselves have moral content. Parfit believes that we have fundamental object-given reasons to do good things and avoid doing bad things. For instance, cutting off your son’s arm for no reason is wrong, because this causes your son to suffer for no reason, and suffering for no reason is intrinsically bad. There is no possible universe in which it would be good to suffer for no reason. It is beyond even the capabilities of God to make it good to suffer for no reason. Even if God himself commanded everyone to cut off the arms of their sons, this would not make this action good, because unnecessary suffering is intrinsically harmful. If God sent people to hell for refusing to cut off the arms of their sons, this would not make refusing bad–this would make God bad. Those who resisted this command knowing the penalty is damnation would be brave, courageous, and selfless. They would be good. If God is good, his goodness cannot come merely from his power. The ability to punish does not bestow any moral authority whatsoever, because actions are good or bad irrespective of what any being, person, or organization believes. To offer any semblance of justification, those who would cut off the arms of their sons because God ordered them to do so would have to believe that God had some good reason for demanding this, that they were not being ordered to mutilate their sons for no reason. This would require an extraordinary level of faith not merely in God’s existence and power but in God’s benevolence. Once we accept the core secular moral principle that it is intrinsically and objectively bad to suffer for no reason regardless of what any being thinks or demands, this principle has all kinds of implications and places very large demands on us. There is a great deal of suffering in the world, and if we genuinely believe that suffering is bad, we are obliged to try to alleviate this suffering even at substantial cost to ourselves, with no hope of recompense. The secular moralist recognizes a duty to relieve suffering even though no God will reward this behavior. Indeed, secularists typically recognize a duty to relieve suffering even when many people will take advantage of them or even try to punish them for doing so, even when doing so may cost them their lives. The life of a secularist is especially precious because there is no belief in heaven. Dead is dead. Secular morality typically asks people to care about others even when those others will not reciprocate this care or will even actively seek to exploit the moralist, with no hope of reward. Consequently, many secularists fail to meet the demands of secular moral theory, which requires an extraordinary level of altruism and selflessness that goes far beyond anything required by any religious moral theory. 
This means that while many secular moral theories are objectively quite convincing on a theoretical level, they often fail to motivate people to undertake the massive sacrifices demanded in a practical context. The core problem is the separateness of persons. Intellectually, we can recognize that suffering for no reason is bad not just for us, but for all beings. But we do not experience the suffering that these other beings experience–we are separate from them. So while we have very strong theoretical reasons to care about this suffering, in practice we find that these reasons do not compel us to undertake massive personal sacrifices. In Isaac Asimov’s Foundation series, Asimov explored the possibility that humanity one day might create a hive mind where the experiences of suffering and happiness are shared. Asimov rightly recognized that this would dissolve the separateness of persons and make it much easier for people to act morally. When they relieved another’s suffering, they would relieve their own suffering. The conflict between the selfish interest and the social interest would be eliminated. In this way, Asimov tried to imagine a new set of conditions where we would believe that we had strong egocentric reasons to do good things and avoid doing bad things. It is immensely regrettable that human beings seem unable to do good things and avoid doing bad things simply because they are good or bad and for no other reason. Moral truth seems to be insufficiently powerful to motivate good action without some external factor that connects being good to being personally rewarded. Given that this is the case, we need practical secular moral systems that are more effective at making this connection. How do we do this? I have some ideas. To start, we need to recognize that even though we cannot directly experience other people’s happiness and suffering, we still benefit from their happiness and are harmed by their suffering in a great many ways when these people reciprocate with us and are part of our community. If we help our friends and family members to be happy and avoid suffering, they will be more able to help us in the same ways. If the other people in our society are happier and suffer less, they will be more economically productive and they will commit fewer crimes. We will see our living standards rise more rapidly, and we will be safer and happier ourselves. If people in foreign countries are happy and don’t suffer, we will benefit from trading with them, we are less likely to see influxes of refugees, and these foreign populations are less likely to dislike us or want to harm us. So we have some practical egotistic reasons to care about other people and to try to benefit them as much as they are willing and capable of benefiting us. We should recognize that when we exploit others, we encourage them to resent us and oppose us, damaging our relationship and making cooperation unsustainable in the long-term. Even if we are able to consistently exploit them, we will damage their psychological well-being, reducing their productivity and effectiveness. Even if we are pure egotists, we should recognize that we always have duties to reciprocate good behavior and to not exploit others. If we abide by those duties, we will have better lives. We will live in more productive and safer societies. More radically, while it is not yet possible to join an Asimovian hive mind, it is currently possible to get cryonically preserved when you die. 
Places like Alcor are currently willing to freeze your body when you die and attempt to resurrect you in the future using undiscovered technology. We cannot be certain that this will work, because the technology is undiscovered. But if you can believe that it will, or even that it might, you can be a secularist and still believe in a kind of life after death. If you are a good person and you make the world a better place, your future society will be better and consequently your future life will be better as well. If you are a bad person and you make the world a worse place, human civilization may not last long enough for the necessary technologies to be discovered, and you may never be revived. This gives you egocentric reasons to care about what the world will be like in the distant future, even for people who do not yet exist and may never exist. If we do not take action to deal with long-term problems like climate change or economic inequality, we may never have the kind of society that can revive us. To get an afterlife, we have to ensure that human civilization is sustainable and continues to grow and develop in a healthy way. Unlike religious moral doctrines in which people are sent to an afterlife based on the whims of God, cryonics asks us to earn our afterlife by contributing positively to our world and by avoiding damaging it in ways that might cause our society to stagnate or collapse. Theoretically, the best kind of people would all abide by secular objective moral theories of the kind advanced by folks like Derek Parfit and Peter Singer, even at great cost to themselves. Unfortunately, real people seem unable to follow those doctrines without some confidence that they will be rewarded. But we do not need God to bestow rewards. We can bestow them on ourselves by creating a just legal system that helps people to reciprocate and cooperate freely without fear of being exploited. Perhaps we can even believe in cryonics and try to build the kind of society that can one day revive us and give us a great afterlife. In these ways, secularists can have moral beliefs that are every bit as robust, both in a theoretical and practical sense, as those of religious people.
The UK government has for 15 years persistently backed the need for new nuclear power. Given its many problems, most informed observers can’t understand why. The answer lies in its commitment to being a nuclear military force. Here’s how, and why, anyone opposing nuclear power also needs to oppose its military use.

“All of Britain’s household energy needs supplied by offshore wind by 2030,” proclaimed Prime Minister Boris Johnson at last week’s online Conservative Party conference. This means 40 per cent of total UK electricity. Johnson did not say how, but it is likely, if it happens, to be by capacity auctions, as it has been in the recent past. But this may have been a deliberate distraction: there were two further announcements on energy last week – both about nuclear power.

16 so-called “small nuclear reactors”

Downing Street told the Financial Times, which faithfully reported it, that it was “considering” £2 billion of taxpayers’ money to support “small nuclear reactors” – up to 16 of them – “to help UK meet carbon emissions targets”. It claimed the first SMR is expected to cost £2.2 billion and be online by 2029. The government could also commission the first mini power station, giving confidence to suppliers and investors. Any final decision will be subject to the Treasury’s multiyear spending review, due later this year. The consortium that would build it includes Rolls Royce and the National Nuclear Laboratory. Support for this SMR technology is expected to form part of Boris Johnson’s “10-point plan for a green industrial revolution” and new Energy White Paper, which are scheduled for release later in the autumn. Johnson will probably also frame it as his response to the English citizens assembly recommendations – a version of the one demanded by Extinction Rebellion in 2019 – which reported its conclusions last month. While the new energy plan will also include carbon capture and storage, and using hydrogen as vehicle fuel, it’s the small modular reactors that are eye-popping. They would be manufactured on production lines in central plants and transported to sites for assembly. Each would operate for up to 60 years, “providing 440MW of electricity per year — enough to power a city the size of Leeds”, Downing Street said, and the Financial Times copied. The SMR design is alleged to be ready by April next year. The business and energy department has already pledged £18 million (AU$32.49 million) towards the consortium’s early-stage plans.

They are not small

The first thing to know about these beasts is that they are not small. 440MW? The plant at Wylfa (Anglesey, north Wales) was 460MW (it’s closed now). 440MW is bigger than all the Magnox-type reactors except Wylfa and comparable to an Advanced Gas-cooled Reactor. Where will they be built? In the town of Derby – the home of Rolls Royce – where, as nuclear consultant Dr David Lowry points out, the government is already using the budget of the Housing and Communities Department to finance the construction of a new advanced manufacturing centre site. When asked why this site was not being financed by the business and energy department (BEIS), as you’d expect, a spokesperson responded that it was part of “levelling up regeneration money”. Or perhaps BEIS did not want its budget used in such a way.
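A quick check on the quoted “440MW of electricity per year” line: megawatts measure power, not annual energy, so the only sensible reading is 440 MW of capacity. The arithmetic below is a rough sanity check of the “city the size of Leeds” claim; the capacity factor and per-household consumption are illustrative assumptions, not official figures.

```python
# Sanity check of "440MW ... enough to power a city the size of Leeds".
# Capacity factor and household usage are assumptions for illustration.

capacity_mw = 440.0
capacity_factor = 0.90            # assumed; typical of baseload nuclear
kwh_per_household_year = 3600.0   # assumed UK average household usage

annual_gwh = capacity_mw * capacity_factor * 8760 / 1000
households = annual_gwh * 1e6 / kwh_per_household_year

print(f"~{annual_gwh:,.0f} GWh/yr, roughly {households / 1e6:.1f}M households")
# ~3,500 GWh/yr and ~1 million households -- comfortably more than Leeds'
# roughly 340,000 households, so the claim holds up even though
# "MW per year" remains a dimensional muddle.
```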
Throwing money at such a “risky prospect” betrays “an irrationally cavalier attitude”, according to Andrew Stirling, Professor of Science & Technology Policy at the University of Sussex Business School, because an “implausibly short time” is being allowed to produce an untested reactor design. Only if military needs are driving this decision is it explicable, Stirling says. “Even in a worst case scenario, where this massive Rolls Royce production line and supply chain investment is badly delayed (or even a complete failure) with respect to civil reactor production, what will nonetheless have been gained is a tooled-up facility and a national skills infrastructure for producing perhaps two further generations of submarine propulsion reactors, right into the second half of the century. “And the costs of this will have been borne not by the defence budget, but by consumers and citizens.”

Yes, military needs

UK defence policy is fully committed to military nuclear. The roots of civil nuclear power lay in the Cold War push to develop nuclear weapons. Thus has it ever been since the British public was told nuclear electricity would be “too cheap to meter”. The legacy of empire and the thrust for continued perceived world status are at the core of a post-Brexit mentality. It’s inconceivable to the English political elite that this status could exist without Great Britain being in the nuclear nations club, brandishing the totem of a nuclear deterrent. “The civil-military link is undisputable and should be openly discussed,” agrees Dr Paul Dorfman at the Energy Institute, University College London. Andrew Stirling talks of the “tragic relative popularity of (increasingly obsolescent) nuclear weapons”. The coincidental fact that civil nuclear installations are also crumbling provides a serendipitous opportunity for some. The stores of plutonium in the UK are already overflowing, and the military has its own dedicated uranium enrichment logistics. No nation’s defence budget in this day and age can afford a new generation of nuclear weapons. So it needs to pass the costs onto the energy sector. “Clearly, the military need to maintain both reactor construction and operation skills and access to fissile materials will remain. I can well see the temptation for Defence Ministers to try to transfer this cost to civilian budgets,” observes Tom Burke, Chairman of think tank E3G.

The threat of nuclear proliferation

The threat of nuclear proliferation is therefore linked to the spread of civil nuclear power worldwide, says Dr David Toke, Reader in Energy Politics, Department of Politics and International Relations at the University of Aberdeen. David Lowry agrees: “India, Pakistan and above all Israel are obvious examples, each of which certainly has built nuclear weapons.” It’s impossible to challenge civil nuclear power without also challenging military nuclear interests, Stirling strongly believes. “The massive expense of increasingly ineffective military nuclear systems extend beyond the declared huge budgets. They are also propped up by large hidden subsidies from consumer and taxpayer payments for costly nuclear power. “Huge hidden military interests will likely continue to keep the civil nuclear monster growing new arms. Until critics reach out and engage the entire thing, we’ll never prevail in either struggle.” How new plants would be paid for still remains a question. Nuclear power is prohibitively expensive.
The second option for new nuclear

While Downing Street is pushing SMRs, BEIS has been looking for a way to finance the £20 billion Sizewell C reactor which EDF has been lobbying to build in Suffolk. This could be why it did not want to bankroll Rolls Royce’s expansion. One idea being floated by BEIS is the government taking equity stakes in future nuclear plants such as Sizewell C, the energy minister has confirmed. French energy company EDF is unable to continue with its plans for a new UK nuclear power station without even more government support than it has already had. The CEO of EDF, Jean-Bernard Lévy, met the Chancellor of the Exchequer Rishi Sunak recently to beg for such support. The head of Greenpeace UK, John Sauven, wrote to the Chancellor saying that giving support may be in EDF’s interests, but it is not in the UK’s. Nevertheless, the government is considering taking a direct stake in the project, using a “Regulated Asset Base” (RAB) financing model, where costs are added to consumers’ bills during construction. This would still result in multibillion-pound liabilities showing on the government’s balance sheet. So the Treasury is studying whether the government should in return have equity stakes in EDF’s Sizewell plant. The government previously offered to take a one-third stake in Hitachi’s Wylfa plant on Anglesey, but the Japanese company still scrapped the project last month – even then it was too expensive. The RAB approach is being challenged anyway by the national nuclear regulator, the Office for Nuclear Regulation, because it could introduce a dual regulator for the industry, which it does not regard as sensible or workable.

Renewables can supply UK energy needs and net zero targets sooner and cheaper than nuclear

Renewables are safer, cheaper, quicker to install and genuinely low carbon, with no fuel supply chain. The Sizewell reactor could not realistically be supplying power until 2034 at the earliest, while wind and solar plants take less than two years to commission, on average. The ability of the national grid to absorb more fluctuating renewable electricity input is improving, helped by the collapsing cost of batteries and investment in hydrogen and other forms of storage. The National Infrastructure Commission has testified that the absorption of 65 per cent renewables on the grid by 2030 is cost-effective – and more is technically achievable. Implicitly recognising the truth of this, the Ministry of Defence’s Chief Scientific Adviser on nuclear science and technology matters, Robin Grimes, has just opened up another front against renewables. Grimes is advocating nuclear power’s potential for cogeneration – using its “waste” heat for all manner of things, from district heating and seawater desalination to synthetic fuel production and industrial process heat. This is not likely to make much of a dent in the cost-benefit equation. Alarm bells should be set ringing when you know that this same Grimes was also co-author of a once-secret report in 2014 for the Ministry of Defence which recommended that the UK nuclear submarine industry forge links with civil nuclear power in order to extricate itself from the dire situation it is in. This secret report discussed what to do about the radiation-leaking Vulcan Naval Reactor Test Establishment, a military submarine reactor testing facility built in 1950 at Dounreay in Scotland. Engineers with nuclear expertise are dying out with the reactors. New nuclear subs need a new supply chain and new expertise.
What better place to tackle all these issues?

Rolls Royce and Dominic Cummings

This sad, radioactive site is operated by – guess who – Rolls-Royce (under the Vulcan Trials Operation and Maintenance contract). And Rolls Royce is already benefiting from public money flowing into new nuclear. It has for years been lobbying the government to support its small nuclear reactors wheeze. Its 2017 pitch document contained phrases like “providing 440MW of electricity per year — enough to power a city the size of Leeds” – which Downing Street has literally copied and pasted into the article fed to the Financial Times. It doesn’t take much insight to see that Rolls Royce has won over Boris Johnson’s right-hand elf – the one who hates energy efficiency – Dominic Cummings. One can see his hand in the push for SMRs, while BEIS is pushing support for Sizewell C. Rolls Royce is axing up to 8,000 jobs because of the pandemic-related aviation crash. This troubled company is a huge symbol of Great Britain plc. Millions in public money for SMRs is just what it needs. But to back both Sizewell and the SMRs would be far too expensive for the public purse, already heavily in debt because of the coronavirus pandemic. Burke believes the SMR pitch is “Cummings fight back against the public pressure for Sizewell from EDF and (Tom) Greatrex”. Tom Greatrex is the Nuclear Industry Association’s chairman. In a Times article he recently called for “a strong and unambiguous statement of the need for new nuclear to be able to meet the net-zero target” with backing for Sizewell.

The PR battle for nuclear

There is a PR battle in the UK media over new nuclear – and now there are two sides to it. Editors seem to favour giving pro-nuclear writers a clear ride and rarely question their baseless claims that nuclear is zero carbon. This is misguided and not based on empirical data, says Dr Lowry. If the carbon footprint of the full uranium life cycle is considered – from uranium mining, milling, enrichment (which is highly energy intensive), fuel fabrication, irradiation, radioactive waste conditioning, storage and packaging to final disposal – nuclear power’s CO2 emissions are between 10 and 18 times greater than those from renewable energy technologies, according to a recent study by Mark Jacobson, professor of civil and environmental engineering at Stanford University, California. Another recent peer-reviewed article in Nature Energy shows that nations installing nuclear power don’t have lower carbon emissions, but those installing lots of renewables do. Moreover, investment in new nuclear “crowds out” investment in renewables. Renewables therefore offer a more rapid and cost-effective means to address net zero targets. The opportunity cost of nuclear is severely negative. The 2019 version of the World Nuclear Industry Status Report comprehensively demolishes any evidence-based arguments on the utility of nuclear to help address climate change. But that’s not the real argument. It’s military. At the very least, we deserve to be told. David Thorpe is author of books such as Solar Technology and One Planet Cities. He also runs online courses such as Post-Graduate Certificate in One Planet Governance. He is based in the UK.
Nuclear power advocates are increasingly emphasizing the value of existing but financially struggling U.S. nuclear plants in curbing carbon emissions and addressing climate change. Questions about nuclear power's costs and safety that kept it at 18% to 20.6% of U.S. electricity generation from 1990 to 2020 left little support for new plants. But extreme weather-driven disasters and predictions of much worse in the recent reports from the Intergovernmental Panel on Climate Change and National Oceanic and Atmospheric Administration are driving new thinking about existing plants. "The economic feasibility of existing nuclear is a very different question depending on whether the power market values clean energy," said Exelon Senior Vice President of Regulatory Policy and Analysis Mason Emnett. In a power market that compensates all clean resources, "our nuclear units could compete, operate safely and reliably, and be relicensed." "Financial incentives for zero-carbon generation are a no-brainer," said Analysis Group Senior Advisor Susan Tierney, a former nuclear skeptic, Department of Energy (DOE) official, and Massachusetts utilities regulator. Unsafe nuclear plants should not be preserved, but incentives for existing and safe nuclear are better than rising emissions from increased use of natural gas generation, she added. Growing support for new federal and state initiatives to support nuclear power shows clean energy advocates and power system analysts are confronting the possibility that the transition to net zero emissions may require investment in existing nuclear.

State policy efforts

The changing appreciation of existing nuclear, and its role in fighting climate change, is reflected in laws enacted from 2017 to 2019 to fund zero emissions credits (ZECs) in Connecticut, Illinois, New Jersey, New York and Ohio. While ZEC programs differ, existing nuclear plants generally receive above the electricity market price for the power they produce based on "an established social cost of carbon" that reflects the environmental cost of emissions, a 2019 Department of Energy report said. Exelon's recent fight for new ZECs for plants not included in Illinois's 2016 allocation began with a state Environmental Protection Agency-commissioned report by Synapse Energy Economics on nuclear economics. But it was the link between nuclear power and the climate crisis fight that led to the Illinois House and Senate passing comprehensive clean energy legislation this month with bipartisan support in both chambers. "Nuclear power is necessary for the state's transition to 100% clean energy," according to the recent legislation. To avoid losing "environmental benefits" from Exelon's nuclear generation, the bill instructs Illinois regulators to allocate approximately $700 million in carbon mitigation credits to three non-competitive Exelon plants — Byron, Dresden and Braidwood — from June 1, 2022 through May 31, 2027. SB 2408's new ZECs for nuclear had bipartisan support because responding to climate change requires "closing fossil resources first," Citizens Utility Board of Illinois Executive Director David Kolata said. "Closing nuclear plants too soon could prevent reducing emissions in the most cost-effective way." This kind of support may also be needed in other states. Nuclear provides 85% of the clean energy in Maryland and Illinois, 91% in Pennsylvania, and over 50% in New York, Exelon Executive Vice President and Chief Generation Officer Bryan Hanson told a July U.S. Chamber of Commerce webinar.
But nuclear plants are being closed because "outdated markets don't distinguish between dirty coal electrons and carbon-free electrons." Pennsylvania, for instance, wants to keep its nuclear generation at current levels to prevent increasing the 66% of its electricity generation that comes from fossil fuels until it can cost-effectively develop alternative zero emissions resources, according to the state's 2021 Climate Action Plan, released September 22. But more cost-effective natural gas has been driving the state's nuclear generation out of the PJM market that supplies the state's grid, according to June data from the Energy Information Administration. And renewables remain only 3% of the state's electricity supply. To keep its nuclear generation competitive long enough to develop renewables, Pennsylvania's legislature can pass legislation approving ZECs, its action plan said. At present, however, it appears legislators may be reluctant to take on the costs and challenges from renewables advocates, the plan and state leaders acknowledged. Longstanding debates about a price on carbon continue while federal lawmakers presently consider a clean energy standard and tax credits that would support existing nuclear plants. A national policy that is "technology neutral" will drive cost-effective decarbonization, Exelon's Emnett said. "A price on carbon will do that very efficiently. A well-designed clean energy standard can largely achieve the same thing, or it could be done through tax policy" and "we'll meet policymakers where they are" to protect the existing fleet. "Nuclear power can't be dismissed as a potential part of the long-term climate solution," a 2018 report by the Union of Concerned Scientists (UCS), a long-time nuclear skeptic, agreed. But ZECs should be market based, available to other clean energy sources, and only go to plants that are verified as safe, need financial support, and would avoid increased emissions, UCS stipulated. Without incentives, "cheap natural gas will increasingly drive the retirement of zero-emitting nuclear plants, canceling out gains in emission reductions," a March 2021 independent analysis by consulting firm Rhodium Group confirmed. Nuclear power's value in the climate fight has won bipartisan support in federal planning and the Biden administration's infrastructure bill, Department of Energy Acting Assistant Secretary and Principal Deputy Assistant Secretary for Nuclear Energy Kathryn Huff told the U.S. Chamber webinar. The Senate's $1.2 trillion Infrastructure Investment and Jobs Act (H.R. 3684) was passed August 10 by a 69-30 vote. It incorporated the Senate's bipartisan American Nuclear Infrastructure Act (S. 2373) and includes $6 billion to fund ZECs from 2022 to 2026 where they are demonstrated to be economically necessary to limit emissions. It is expected to be reconciled with the House's more ambitious but unfinished infrastructure bill, scheduled for a vote on Sept. 30. The Senate bill's ZECs would be allocated through competitive bidding by plant operators. They would go to financially threatened plants certified as likely to lead to increased carbon emissions if closed and certified as operationally safe by the Nuclear Regulatory Commission (NRC). The final infrastructure bill could also include a clean energy standard, which would resemble state requirements that electricity providers procure progressively larger amounts of emissions-free energy.
As a national mandate, it could face rejection by conservative lawmakers and evolve into a more politically palatable clean electricity payment plan that offers incentives to procure zero emissions energy, according to University of California at Davis Economist James Bushnell. But a Clean Electricity Performance Program, like the one approved this month by the House Energy and Commerce committee as part of the House version of the budget reconciliation bill, has unrecognized potential pitfalls, Bushnell said. It could lead to double-counting of clean energy or duplicitous shifting of clean energy procurements between providers, which would threaten customer costs. An alternative to the infrastructure bill initiatives is the Zero-Emission Nuclear Power Production Credit Act of 2021 (S. 2291). It would provide a $0.015/kWh production tax credit (PTC) to existing nuclear and was recently endorsed by both the NRC and the Nuclear Energy Institute (NEI). It has attracted little attention and has not emerged from committee while lawmakers work through versions of the infrastructure bill. That PTC "will make an enormous difference for climate and for our nuclear plants," Exelon President/CEO Chris Crane said Aug. 4 in the company's Q2 earnings release. There are also provisions in existing legislation and proposed legislation for advanced nuclear technologies. Another bill includes incentives for research, development and demonstration of advanced nuclear technologies and small modular reactors. Interest in these technologies as part of the climate fight was reflected in PacifiCorp's August 2021 long term resource plan, which proposed a 500 MW TerraPower advanced nuclear reactor, to be online in 2028, to replace the company's retiring coal plants. But some advanced nuclear designs "pose even more safety, proliferation and environmental risks than the current fleet," a March 2021 UCS analysis concluded. And advanced nuclear technologies "require rethinking incentives" because "other clean energy resources are more immediately available," added Center for Energy Efficiency and Renewable Technologies (CEERT) Executive Director V. John White. Beyond doubts about advanced nuclear technologies are bigger unresolved questions about nuclear power's safety and costs.

Keys to closure

It may be appropriate to provide financial supports for an existing nuclear facility if it can operate safely and if there are inadequate cost-effective alternatives to prevent it being replaced with emissions-creating generation, analysts said. But this is not always the case. "Diablo was one incident away from a disaster, its maintenance is expensive, and California has affordable alternative zero emissions resources available," said CEERT's White, who helped convince Pacific Gas and Electric (PG&E) to close Diablo Canyon, California's last nuclear plant, in 2025. "Indian Point's proximity to New York City made its questionable operational history a factor." "Indian Point was shut down because of its threat to New York City," the Analysis Group's Tierney agreed. "But the upstate New York nuclear plants got ZECs because they had good operating histories and delivered economic benefits." Exelon's claims that its nuclear plants' safety and reliability are good and getting better are supported by the NRC, but UCS remains dubious. "Safety issues are economic issues," UCS Director of Nuclear Power Safety Ed Lyman said.
"Industry pressure on the NRC to reduce costs for maintenance and refueling have weakened safety standards" set by the commission's Reactor Oversight Process (ROP) "over the past few years," he told Utility Dive. New Biden administration appointments to the NRC, including Commission Chair Christopher Hanson, led to NRC Staff's August 5 retraction of two papers that would have decreased ROP inspection frequency and intensity, Lyman acknowledged. "But every operational incident reviewed this year has involved inadequate maintenance and the impact of the new leadership is not yet clear. Safety might be better, or they might have changed standards," he said. "It is good that stakeholders raise safety concerns, but the NRC has extended the licenses of the bulk of existing fleet, some to 80 years," and "their owners will use them as a backbone to reach their climate goals," said former NRC Commissioner and nuclear advocate Jeffrey Merrifield, now a Pillsbury-Winthrop law firm partner. The NRC "has consistently and rigorously assessed the ROP" and NRC safety inspections show "almost every U.S. nuclear power plant is operating at the highest level of performance," NRC spokesperson Scott Burnell added in an email. But nuclear plant owners may not continue to take on the costs of inspections and maintenance necessary, Lyman cautioned. There is an "absolutely necessary" but elusive difference for the NRC to identify "between barely acceptable and really acceptable." The linkage between safety and economics raises even harder-to-answer questions about the cost of having nuclear power or other firm capacity to achieve deep decarbonization. The need for firm power There is growing, though not unanimous, consensus that nuclear or another form of firm power will be needed to cost-effectively reach net zero emissions, scientists and analysts said. Many policymakers and nuclear advocates see financial support to existing nuclear as the most immediate and cost-effective of the firm power choices. But there is no consensus on the most cost-effective way to achieve 100% renewable energy because cost analyses have "many nonlinearities and unknown unknowns," according to the authors of a National Renewable Energy Laboratory (NREL) report in May on the challenges of identifying the lowest cost power mix. "Firm capacity" that can be called on at any time like nuclear energy, thermal generation with carbon capture and storage (CCS), geothermal, green hydrogen, or other clean fuels will be needed to assure the best system reliability at very high renewables penetrations, according to studies using multiple state-of-the-art modeling tools by NREL, UCS, Princeton University, Evolved Energy Research and Energy and Environmental Economics (E3). That need for firm capacity is only at or above 95% renewable energy penetrations, and even at 100% renewables, the added costs of overbuilding renewables and storage instead of using firm capacity "were still less than $0.04/kWh," Stanford University Professor of Civil and Environmental Engineering and Director of the Atmosphere and Energy Program Mark Z. Jacobson disagreed. But other studies do not validate the greater cost-effectiveness of overbuilding renewables and storage to supporting firm capacity in reaching net zero emissions. "Nuclear or another firm capacity is part of the picture if the goal is zero carbon" because "getting to zero carbon with only wind and solar and batteries and even hydrogen is very expensive," E3 Senior Partner Arne Olson insisted. 
Power sector emissions reductions of 80% to 100% below 1990 levels in the Pacific Northwest are "achievable at manageable cost, provided that firm capacity is available," E3's March 2020 assessment concluded. The same is true for New England, its November 2020 modeling of the region found. Other research agreed that the absence of firm power resources increases the cost of reaching 100% decarbonization. Models commissioned by environmental groups Environmental Defense Fund and Clean Air Task Force found renewables and batteries alone would increase wholesale electricity rates "65% over today." The most cost-effective approach likely includes clean firm resources like CCS, nuclear and clean fuels, and "we need to invest heavily in them so they are live options in the 2030s," E3's Olson said. "And we need to build the heck out of renewables now because they are cheap and scalable, but they will not get us to the finish line." But even with firm capacity like nuclear energy, the costs from adding new generation and transmission infrastructure will be significant. Getting to 100% renewables by 2050, even with firm capacity, raises system costs by 29% because "system costs increase nonlinearly for the last few percent approaching 100%," an NREL June 2021 study found. "It is not clear how much electricity prices and price profiles would be impacted" by these system costs, NREL Senior Energy Analyst and co-author of the May and June papers Trieu Mai told Utility Dive. "There might be high prices at some hours, but not at others, which might increase demand for flexibility and drive electrification." It is also not certain how research might lower the costs of distributed generation, which means the technologies for the last 5% to 10% of a 100% renewable energy system "are highly uncertain," he added. But "the U.S. is still at 20% renewables and the challenge of deploying the technologies that are commercially available now is important before the challenge of the last 5% to 10% emerges." Traditionally, technology development follows a walk-trot-run trajectory, he acknowledged. But to meet the Biden administration's 100% decarbonization by 2035 goal, "we have to sprint for the next 14 years." Making the necessary and appropriate expenditures to keep existing nuclear generation in place may be the most cost-effective way to get across that net zero emissions finish line. But despite nuclear advocates' aggressive push, uncertainties about nuclear's cost and safety keep it among the firm power options rather than the preferred one, scientists and analysts agree.
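To put the proposed $0.015/kWh production tax credit discussed above in perspective, here is a back-of-envelope calculation of what it would be worth to a single large reactor. The plant size and capacity factor are illustrative assumptions; only the credit rate comes from the bill.

```python
# Illustrative annual value of a $0.015/kWh production tax credit (PTC)
# for one existing reactor. Size and capacity factor are assumptions.

capacity_mw = 1000.0     # assumed net capacity of a single large unit
capacity_factor = 0.92   # assumed; the US nuclear fleet runs near this level
ptc_usd_per_kwh = 0.015  # rate proposed in S. 2291

annual_kwh = capacity_mw * 1000 * capacity_factor * 8760
print(f"PTC worth ~${annual_kwh * ptc_usd_per_kwh / 1e6:.0f}M per year")
# ~ $121M/yr per reactor -- on the same order as state ZEC payments, which
# helps explain why operators call such a credit an "enormous difference."
```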
Famous Artists Messy Play Kit Guide

Ready to dig into your Famous Artists Messy Play Kit? On this page you'll find detailed step-by-step instructions, ideas to extend the learning, and some links to other resources. Feel free to reach out if you have any questions about your kit. Now go get messy!

When working on art projects with young children, it's important to focus on the process instead of the product. In this Messy Play Kit we practice making art like famous artists, but with the understanding that a three-year-old child is not going to make a masterpiece like Michelangelo or Rodin. We don't expect them to! We do expect them to explore the materials fully, learning about the process of making art, what the materials feel like and how they interact. What the final result looks like isn't as important as what happens in the process. Learn more from this article from NAEYC.

Learn more about Michelangelo here.

1. Find a table in your house that is low enough to the ground that you can comfortably lie on the ground under it and reach the underside. If you don't have one that low, try finding a table you can sit under.
2. Use the tape to attach the large paper backdrop to the underside of the table, making sure it is securely attached. Do the same to attach a piece of the art paper in the middle.
3. Lay the plastic dropcloth on the ground under the table, and secure it to the ground if necessary.
4. Add one tablespoon of water to each of the four containers of powdered color, replace the lids, and shake until mixed.
5. You are now ready to paint! Lie or sit down under the table, and place the paints next to you on the ground (or on a tray). Use the paintbrush to paint on the small paper you taped to the underside of the table, being careful not to drip on yourself! A pillow will make this more comfortable.
6. There is enough paper to do this activity three times. Just replace the art paper with a new sheet, and continue the painting!

TIP: This paint is washable, but will be easier to clean when it's wet. If you drip on the floor or get any on the table, wipe it up before it dries to make cleanup easier!

What are they learning?
- Fine Motor Control: Holding a paintbrush, peeling and sticking tape. These help build the small muscles in their hands and help them learn to control their movements.
- Creativity: Creating their own design gives them a sense of power and ownership. Remember to comment on what they draw rather than offer them empty praise. Read more about that here.
- Science: Color mixing and blending.
- Gross Motor: Arm movement above their head; crossing the midline of their body helps build coordination.

Picasso and Mondrian: Crayon Resist

1. Take one sheet of art paper and lay it on a flat surface with an edge, like a baking tray or large plate.
2. Use the crayon to make thick lines on the paper. Picasso made both straight and curved lines, while Mondrian often made straight lines that were perpendicular like a grid. (A sample Mondrian painting appeared here in the original guide.)
3. When you're done with your lines, cut the tips off the larger pipettes of liquid watercolors and drip them slowly on the paper. Watch the colors bleed and spread, but what happens when they reach the crayon lines you made?
4. Drip colors until you are satisfied, using the paintbrush to spread the colors around more if you want. Use all the colors, or only one. It's your artwork, so you get to decide!
5. There is enough art paper to do this three times. Remember to start with the crayon before dripping the watercolors on top!
Why does the crayon stop the liquid watercolors from spreading? The crayon is made of wax, which is a material that water cannot pass through. When you color with the crayon, you are covering that part of the paper with wax, so the liquid is unable to pass through it. What happens if you use the liquid first and then the crayon?

What are they learning?
- Fine Motor: Squeezing pipettes and controlling the amount of liquid released take a lot of control!
- Science: Material properties and how they interact (crayon wax resists the liquid and blocks it from spreading – why?)
- Creativity and Problem Solving: They choose what design to make and how, which gives them a sense of empowerment and pride.
- Science: Color mixing

Learn more about Alexander Calder here.

1. Empty the packet of gelatin onto the plastic plate.
2. Carefully, with an adult's help, add 4 teaspoons of hot water and mix until the gelatin has dissolved.
3. Cut the tips off the small pipettes of liquid watercolors and squeeze a few drops onto the gelatin. Use a spoon to swirl them around. You can add a few different colors to make a marble effect, or mix one color partly, or mix them entirely.
4. Let the plate sit in a corner until it dries. This may happen overnight, or it may take a few days. Leaving it alone in a well-ventilated area will help.
5. When the gelatin has completely hardened, remove it from the plate. Carefully cut it into a few pieces of any shape.
6. With an adult's help, use the pushpin to poke one hole in each piece you made, thread with string, and hang from the dowel.

TIP: Use caution with the scissors and the pushpin.

7. Hang the dowel near a window to show off the colors in the gelatin. You made a mobile suncatcher!

What are they learning?
- Science: They learn about material properties and how they change (gelatin dries to become hard).
- Science: Color mixing, again!
- Fine Motor Control: Squeezing the pipettes, mixing the colors
- Problem-solving: As they choose what size to cut the gelatin pieces and how to hang them, they learn about balance and design, and will have to work through the issues that may arise.

Learn more about Auguste Rodin here.

1. Take the playdough out of the container. If needed, protect your workspace with the plastic dropcloth.
2. Squish and shape the playdough into various shapes, sculpting it. Can you make an object you see in front of you? Can you make an object you can play with? Can you sculpt the playdough into the shape of something you eat (but don't eat it!)?

Rodin is famous for making many different sculptures. Some photos are shown below. Can you shape the playdough into objects resembling some of his works? Look at the shapes you see in his pieces – is the sculpture tall and thin? Short and round? Flat with markings carved in? Try to make shapes like his, using different tools if you need (a spoon to carve, a fork to make lines, and so on).

La Porte de l'Enfer (The Gates of Hell)

What are they learning?
- Fine Motor: Squeezing and manipulating the playdough is one of the best ways to strengthen the muscles in the hand.
- Creativity: They can create whatever designs from the dough they want.
- Abstract Thought: It's hard to create a design of something that's not directly in front of you! This takes a lot of cognitive skills, such as abstract thought and memory.
- Emotion Regulation and Positive Stress: Children may get frustrated if they can't make the playdough look the way they want it to.
However, small amounts of frustration are manageable and help them learn to deal with negative emotions and cope. Let them take a break and try again later, or suggest strategies that help you feel better when you're frustrated.

More Famous Artists!

There are so many ways to continue playing with and learning about artists and styles of creating art. Here are some of my favorites.

- Jackson Pollock: Drip Painting. Pollock is famous for dripping paint onto canvasses to create an intricate mixture of colors, lines and textures. Try dripping paint onto paper by standing (carefully!) on a chair above the paper and dripping paint below you, or by splatter painting onto paper (the bathtub is a great place to do splatter painting – everything washes down the drain for easy cleanup!). Line your workspace with a dropcloth first!
- Georges-Pierre Seurat: Pointillism. Seurat used a series of small dots to make up his artwork, using a technique called "pointillism." Try doing this as well by using the back of a paintbrush, a cotton swab, or another rounded object. Dip it in various colored paints and see if you can make a picture appear through the dots!
- Go to a local art museum to see artwork up close. Be sure to look at the paintings as well as sculptures to learn about various mediums. Remember that music is a form of art as well, so listen to a variety of music with your children, from classical to folk to blues to pop.
- Check out this web article about talking with children about art. "Information is not important. What's important is helping children find ways to describe what they see."
- This article from NAEYC describes what "process art" is and why it's important for young children. They also have links to other resources and ideas as well!
Fans of the science fiction series Star Trek celebrate First Contact Day on April 5 to mark the day in 2063 when, in the show's fictional timeline, humans make first contact with the Vulcans. First Contact Day is observed as a minor holiday within the Star Trek universe; the event itself is depicted in the film Star Trek: First Contact, and the holiday first appeared in "Homestead," an episode of Star Trek: Voyager. Within the series, the first celebration of the day didn't happen until 2378, about 315 years after first contact.

While the fictional date of 2063 is only a few decades away, many of us contemplate the possibilities as we push further into space. Modern pioneers keep our interest piqued, and the sci-fi genre keeps pouring out stories that excite and thrill us. One of sci-fi's enduring themes is the first meeting between humans and extraterrestrials. Conspiracy theories surround Area 51, and a famous one speculates that first contact was made there. UFOs also frequently stir up theories about first contact. The earliest reported sighting of a UFO occurred in 1639 and was recorded by the governor of the Massachusetts Bay Colony, John Winthrop. While celebrating first contact could mean many things, this celebration focuses on one kind: the Vulcan/humankind meeting. It's hard to believe, but it's true. It's a "final frontier" sort of celebration.

Luckily, the film that originally presented First Contact Day can provide inspiration. Directed by Jonathan Frakes, Star Trek: First Contact can be enjoyed as a straightforward science fiction adventure. The film sends the Next Generation crew back to April 4, 2063 to protect Zefram Cochrane (James Cromwell) from the time-traveling Borg. It features Picard and Worf gunning down Borg drones, Riker and Geordi joining Cochrane on his debut flight (after some assistance from a comically intoxicated Troi), and Data being tempted by the Borg Queen (Alice Krige).

First Contact is also about the horrors of assimilation. Like those who respond to a pandemic with prejudice or refuse to change their behavior, the Borg crush those who are different and impose their own way of life, despite the damage it causes others. More than a simple zombie menace, the Borg represent security through sameness. Among the drones who attack the Enterprise are members of races that have fought the Federation, such as Cardassians and Klingons. Within the Collective, they work alongside assimilated Starfleet crew members. The drones feel secure with each other because they have erased all distinctions. Individual humans, Cardassians, and Bolians all blend together, their features muted by sickly gray skin and nondescript black uniforms.

"Star Trek," one of the great canons of science fiction, is held in high regard by legions of devoted fans who treasure the series's philosophy, outlook, and optimism. First Contact Day celebrates one of the most significant moments in the show's history: April 5 marks the day in Star Trek when the Vulcan race first came to Earth and made contact with humans.
This day marks perhaps the most crucial moment in all of Star Trek, and the future that came shooting forward from it into the final frontier.

First Contact Day History

First Contact Day is an unofficial holiday, and it isn't clear exactly when it began. Trek fans picked it up at some point, perhaps after the release of the 1996 film "Star Trek: First Contact," and have carried it on ever since. Within the setting of the show, First Contact Day is a grand historical event, much like a genuine national holiday. For us, though, First Contact Day is a reason to return to our favorite pieces of "Star Trek" mythos, whether on television or film. Canonically within "Star Trek," First Contact Day is April 5, 2063. A little behind-the-scenes trivia: the birthday of the child of "Star Trek: First Contact" co-writer Ronald D. Moore is April 5, hence the choice of date.

First Contact Day is a tribute to the day when humankind first meets the extraterrestrial beings known as Vulcans. Dr. Zefram Cochrane, played by James Cromwell in the '96 film, is the inventor of the warp drive, the device that allows faster-than-light space travel, and the pilot of the Phoenix, the spacecraft that makes Earth's first successful warp flight. A Vulcan crew, passing through Earth's solar system on a survey mission, picks up the Phoenix's warp signature and detours to Earth. The Vulcans make landfall, offer the signature Vulcan salute, and make the acquaintance of Dr. Cochrane. First contact is made, and the course of history changes.

In 1996, Star Trek: First Contact was released in theaters across the country. In it, the Starfleet crew travels through time and encounters the historical figure Dr. Zefram Cochrane. His spacecraft, the Phoenix, launches and becomes the first craft in human history to reach warp speed. Moments after the launch, Vulcans arrive at the base near Bozeman, Montana. The date is April 5, 2063, and throughout 300 years of future history, First Contact Day celebrates the accomplishments of Dr. Cochrane and the historic first communications between humans and Vulcans. Present-day science fiction fans celebrate the event as well.

The first recorded celebration of First Contact Day, at least in the series, happens in 2378 aboard the USS Voyager. On this day, Dr. Cochrane's most loved music is played and his favorite foods are served. Some of his favorite songs include Roy Orbison's "Ooby Dooby" and Steppenwolf's "Magic Carpet Ride." One of his beloved foods is cheese pierogi.

Star Trek is an American TV and film franchise created in 1966 by Gene Roddenberry. The first TV series in the franchise debuted in 1966 and is now known as The Original Series. It followed the adventures of the crew of the starship USS Enterprise under the leadership of Captain James T. Kirk. The Enterprise is a 23rd-century spaceship of the United Federation of Planets, an organization of over 150 governments from various planets. The Federation, as it is popularly known, was founded in 2161. The Vulcans are a humanoid race from the planet Vulcan, which is around 16 light-years from Earth. Vulcans are viewed as masters of logic who have found ways to suppress their violent emotions. Commander Spock is one of the most notable Vulcans in the Star Trek universe.

How to celebrate Star Trek First Contact Day

Review your Star Trek history before you board the U.S.S. Enterprise.
In this case, we need to focus on a somewhat more recent history of sorts. Regardless, here are a few ideas for your celebration.

The first step to celebrating is getting out Star Trek: First Contact and giving it a fresh watch. But that can be just the start: you can hold a First Contact Day celebration the way they do in Star Trek! Simply get together with your companions, ideally in uniform, bring out Zefram Cochrane's favorite cheeses, serve cheese pierogi, and play some classic rock and roll! That is the start of a great celebration, and since you're already together, you might as well do a complete marathon of all the films!

Observe First Contact Day by watching the film "Star Trek: First Contact." It features the crew of the Next Generation series, with Captain Picard (Patrick Stewart) in command of the USS Enterprise, and is regarded as one of the best Star Trek movies ever made. The series "Star Trek: Voyager" includes the episode "Homestead," which touches on the significance of First Contact Day. Give it a watch if you're keen to learn more.

Dress for the part! Wear your favorite "Star Trek" gear as you watch or rewatch your favorite Trek moments.

Cheese pierogies are the favorite food of Dr. Zefram Cochrane. Do as the great doctor does! Cook up a batch and feast as you watch some Trek, perhaps over a glass of whiskey. The great Doc wasn't above getting pleasantly sloshed from time to time.

If nothing else, binge some "Star Trek." Any "Star Trek"! Movies, TV shows, classics, recent releases, whatever your inclination. Travel to the great unknown and beyond this April 5 and honor the legacy of "Star Trek" creator Gene Roddenberry's groundbreaking work.

Celebrating this holiday is certainly a lot simpler than inventing the first warp drive, but it can be just as much fun. A good way to celebrate is to throw a Star Trek-themed party where everybody shows up as their favorite characters in uniform. Then you can serve an assortment of traditional Star Trek dishes, including Andorian cabbage soup, Bajoran ale, Cardassian yamok sauce, Ferengi slug steak, Klingon bregit lung, and pipius claw, and of course Vulcan plomeek broth and spice tea. Just be sure to also make a plate of cheese pierogis for the good Dr. Cochrane, as well as some whiskey.

Another great way to celebrate this holiday is with a Star Trek episode marathon. You can choose to watch the 79 episodes of the Original Series, the 178 episodes of Star Trek: The Next Generation, the 172 episodes of Star Trek: Voyager, the 176 episodes of Star Trek: Deep Space Nine, or the 22 episodes of Star Trek: The Animated Series. You can also have a Star Trek film marathon and include movies such as Star Trek: The Motion Picture, Star Trek Generations, or the more recent Star Trek movies. As of 2017, there are 13 Star Trek films available.

But this year, First Contact Day must be more than that. First Contact Day 2020 may be the most significant observance of the holiday yet, since we're marking it amid a global COVID-19 pandemic. As COVID-19 spreads across the planet, some Americans have reacted with fear, hoarding supplies and spreading misinformation.
Too many, including those who refer to the virus as "the Chinese virus," have used the outbreak as an excuse to hurt people of Asian descent. Too many Americans refuse to change their habits, still socializing even though doing so increases the danger of infection for the elderly and the immunocompromised.

Even though the necessities of social distancing mean that April 5, 2020 ought to be a no-contact day, we have a chance to honor the ideas behind the holiday with our actions. This First Contact Day, none of us will meet a crew of alien time-travelers, yet we will all be dealing with the coronavirus. Lily, Cochrane's companion in the film, reminds us to be brave during the pandemic. Rather than returning to our old ways, we can do the hard thing and stay at home as much as possible. Rather than hoarding supplies, we can decline to take part in factionalism and leave goods for other people. At the very least, we can respond to difference by refusing fear and rejecting racism wherever we see it.

There's no need to wait until 2063 to usher in a new era of human advancement. We can do it in 2020 if we just remember the lesson of First Contact: that difference makes us better, and that assimilation is destruction. Let's use this First Contact Day to overcome our pandemic fears and improve our world now.
This article was first published in Science News magazine, February 23rd, 2008; Vol.173 #8. Occasionally, scientists stumble upon what seems to be a free lunch. But they’re not concerned about possibly violating the laws of economics. It would be much more shocking to break the laws of physics. To physicists, the no-free-lunch rule is precious. One form of it is the first law of thermodynamics, which says that energy cannot be created from nothing. The second law of thermodynamics goes even further, declaring not only that lunches are never free but also that they come at some minimum price. Nonetheless, some natural phenomena seem, at first glance, to violate the spirit, if not the letter, of those laws. Take living cells. In recent years, scientists have found that some molecular machines—proteins that perform crucial tasks of life, from shuttling molecules through membranes to reading information off of DNA—seem to move spontaneously. These machines are likely powered by the random motion of water molecules in their environment, the “thermal noise” that thermodynamics insists is not available for doing work. While some researchers debate how such machines work without breaking physical laws, other scientists have begun to exploit similar phenomena to create artificial molecular motors—nanomachines that imitate nature by putting randomness to work. “The idea is, let’s take advantage of thermal noise, rather than fight against it,” says Dean Astumian, a theoretical chemist at the University of Maine in Orono. Researchers have just begun to build artificial nanomachines that perform simple tasks, such as moving molecules, by steering random motion in one direction rather than another. In the Feb. 13 Journal of the American Chemical Society, a team led by David Leigh, a chemist at the University of Edinburgh in Scotland, describes the first molecule designed to use chemical energy to open or close a gate and allow one of its parts to randomly cross the gate in one direction, but not the other. It’s very much like the task assigned to a hypothetical “demon” by the 19th-century Scottish physicist James Clerk Maxwell. His thought experiment was an early attempt to show how the second law defines group behavior and thus applies only to large numbers of particles. The second law requires that in any given activity, some of the expended energy will end up as waste heat. For example, even an efficient power plant can lose half or more of its fuel’s energy to waste heat. This waste heat cannot be recovered without expending more energy—and producing more waste heat—in the attempt. Ultimately, waste heat manifests as random molecular motion, like the incessant hailstorm of water molecules buffeting proteins in a cell’s watery guts. “It’s sort of like you’re riding a bicycle and there’s a Richter-12 earthquake going on all the time,” says George Oster, a molecular biology theorist at the University of California, Berkeley. It’s hard to see how the molecular movements (called Brownian motion) produced by such violence could accomplish anything useful. Every second, a typical molecular motor will exchange millions of times as much energy with the environment through these random collisions as it will in the performance of its actual task, Astumian explains. But beginning in the early 1990s, scientists began to suspect that certain protein motors can perform their tasks not despite Brownian motion, but thanks to it. 
One example is RNA polymerase (RNAP), an enzyme responsible for reading genetic information from DNA. RNAP latches on to a DNA double strand at the beginning of a gene, cleaves the two strands apart, and clamps around one of them. It then moves along DNA’s bases—the A’s, C’s, G’s, and T’s that constitute the genetic code’s alphabet—and assembles a corresponding molecular chain of RNA. The RNA molecule then acts as a template for producing proteins. RNAP, however, does not always move forward. Brownian motion can push it either way. “It’s like a zipper—it slides back and forth,” says Evgeny Nudler, a biochemist at New York University. Roger Kornberg, a structural biologist at Stanford University, and his collaborators first decoded the structure of RNAP in 2001, earning him the 2006 Nobel Prize in Chemistry. In the same award-winning papers, the team suggested that RNAP may be able to select the Brownian fluctuations that propel it forward and discard those that would set it back. That sounds suspiciously like a free lunch, but in fact, the laws of physics do not prevent it. RNAP’s secret lies in the fact that the second law is statistical in nature. At the scales of molecules, random fluctuations can temporarily create small amounts of seemingly “free” energy. Cells can take energy out of Brownian motion by selecting the favorable fluctuations and rejecting the others—very much in the spirit of Maxwell’s demon. Maxwell asked whether the random differences among the energies of particles could somehow be harnessed. He imagined a box filled with a gas and divided into two parts by a wall that didn’t conduct heat. The wall had a tiny door, and standing by it, “a being whose faculties are so sharpened that he can follow every molecule in its course,” Maxwell wrote in Theory of Heat (1871). This “demon” could open or close the door whenever a gas molecule approached, in such a way as to let the faster molecules cross in one direction only, and the slower ones in the opposite direction. After a while, the faster molecules would make one side of the box hotter than the other. Heat would flow in the “wrong” direction. For decades, physicists argued whether such a demonic being could actually violate the second law. Ultimately, modern thinking goes, the energy that the demon’s brain spends on processing (and erasing) information about the particles would offset any recovery of waste heat, and thereby preserve the second law’s validity. So, molecular motors such as RNAP could work like microscopic Maxwell demons, using energy to select favorable fluctuations of energy when opportunities arise. In fact, RNA polymerase is so far the best-established example of a biological Maxwell demon, says Steven Block, a biophysicist at Stanford University. But that doesn’t mean it gets a free lunch. When they decoded RNAP’s structure, Kornberg and his team discovered that RNAP includes a system of two moving parts, located next to the site within RNAP where new RNA bases bind to the DNA template. When this two-part system folds, it falls onto the binding site like a trigger onto a bullet casing. Perhaps, some researchers thought, such a trigger pushes the newly formed DNA-RNA double strand forward by one step. Indeed, in 2005, Nudler and his collaborators showed that mutations altering the trigger structure rendered the RNAP unable to move preferentially forward. However, Kornberg suggests, the trigger may not be what pushes the zipper forward. 
Instead, the trigger’s role could be to test the strength of the binding in the latest DNA-RNA base pair. If the wrong, noncomplementary RNA base had gotten there by mistake, it would not be bonded as strongly as a complementary base would be, and the trigger would dislodge it, correcting the transcription error. The trigger’s “principal role would not be in motion, but in recognition,” he says. Here is where the Maxwell-demon analogy could be useful, Kornberg adds. Once a correct complementary base pair has formed, Brownian motion would allow the zipper to move forward. The trigger would prevent a backward step. Block’s team measured the pull exerted by single RNAP molecules during the transcription process. Those measurements seem consistent with this picture, Kornberg says. So, Brownian motion would provide the energy for RNAP to crawl along DNA. The higher chemical affinity of complementary pairs—and the larger amounts of energy they release when they bind—would do the demon’s work. And pay for lunch. Geography as destiny No matter what the details of its machinery are, RNAP is an example of how evolution has invented ways of doing complex tasks in the forbidding environment of Brownian motion. Researchers who are trying to build artificial machines at the molecular scale—one of the promises of nanotechnology—would very much like to do the same, says Astumian. The molecule described by Leigh’s team at Edinburgh is a step in that direction, operating just like a Maxwell demon by opening and closing its gate to let molecules through. “We made a molecule that works with the process that Maxwell envisaged,” says Leigh, who proudly remarks that his house is just around the corner from the place where Maxwell once lived. Leigh’s molecule is really three molecules. Two form a type of rotaxane, which is a dumbbell-shaped molecule plus a ring molecule around the dumbbell’s axle. Because of Brownian motion, this ring is generally free to bounce between the dumbbell’s ends, where it can loosely bind. Left alone, the ring will keep randomly jumping between the two sides. The researchers put their rotaxanes in water and added to the solution the third molecule, which is designed to bind to the middle of the axle. This third molecule would act as a gate, blocking the ring to one side and holding it there. The ring’s two sides have different shapes. When the ring is on one side of the dumbbell, the gate can bind to the axle. When the ring is on the other side, its shape will prevent the gate from binding. The researchers demonstrated that in 70 percent of the molecules, the rings ended up sticking to the preferred side of the dumbbell, trapped into position by the gate. The team described a similar molecule for the first time a year ago in Nature—although in that case, the gates were controlled by shining ultraviolet light on the solution rather than by the presence of molecules in the solution itself. In both cases, the energy moving the ring comes from Brownian motion, but the molecules determine where the ring ends up. “It’s a chemical way of implementing Maxwell’s demon,” says Astumian, who in 1998 envisaged a similar working principle with Imre Derényi, now at Eötvös University in Budapest. Leigh says that one could imagine stringing together many rotaxanes. The rings would still move mostly at random, but on average the gates would tend to push them in a specific direction, from one rotaxane to the next. 
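To make the gate mechanism concrete, here is a toy Monte Carlo sketch of a Brownian ratchet in Python. It is my own illustration, not code from the Leigh group: the hop and binding probabilities are invented, and the step count is tuned so the trapped fraction lands near the roughly 70 percent asymmetry the team reported.

```python
import random

def simulate_rotaxane(steps=8, p_hop=0.5, p_gate_bind=0.3, rng=random):
    """Toy Brownian-ratchet model of a gated rotaxane.

    The ring hops at random between side 0 and side 1 of the dumbbell
    (thermal noise). While the ring sits on side 1, its shape lets the
    gate molecule bind to the axle; once bound, the gate blocks the
    return hop. Nothing pushes the ring directionally -- the gate only
    makes some random excursions irreversible, Maxwell-demon style.
    """
    side, gate_closed = 0, False
    for _ in range(steps):
        if rng.random() < p_hop:
            side = 1 - side          # random thermal hop
        if side == 1 and rng.random() < p_gate_bind:
            gate_closed = True       # gate binds and traps the ring
            break
    return gate_closed

random.seed(42)
trials = 100_000
trapped = sum(simulate_rotaxane() for _ in range(trials))
print(f"ring trapped on the preferred side in {trapped / trials:.0%} of molecules")
```

The point of the model is that no step pushes the ring in a preferred direction; the gate merely makes some of the random hops irreversible, which is exactly the demon's trick.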
Meanwhile, physicists, inspired in part by the discoveries about protein motors, have found renewed interest in the small fluctuations that characterize thermodynamics at microscopic scales. "On average, the second law will never be violated," says Christopher Jarzynski, a theoretical physicist at the University of Maryland in College Park. But, as Maxwell suggested, the second law applies most clearly to macroscopic thermodynamics. It is thus not always helpful for understanding phenomena such as the spontaneous folding of newly minted proteins, which takes place in the cell's thermal bath. In the 1990s, Jarzynski and others developed new theoretical tools to predict how much energy the Brownian bath can spontaneously make available, for example, to help out a molecular motor. In 2002, Berkeley biochemist Carlos Bustamante and his collaborators tested Jarzynski's hypothesis for the first time on a biological molecule. They took single RNA molecules in a folded state and repeatedly pulled them apart to unfold them, while measuring the force exerted during the process. In accordance with Jarzynski's predictions, Brownian fluctuations would sometimes impede the process, and sometimes help it by providing a bit of free energy. In such cases, says Bustamante, "the work is being done by the bath, in a sense." Last year, another team performed similar measurements by unfolding proteins (Science News: 7/14/07, p. 22). Experiments such as these can help researchers understand why biological molecules fold in one way rather than another; that knowledge may help them understand diseases caused by protein folding gone wrong. In any case, it seems that the free lunches of molecular motors do always carry some sort of cost. Consequently, most scientists today would still agree with the sentiment Einstein expressed about thermodynamics in 1949: "It is the only physical theory of universal content concerning which I am convinced that, within the framework of the applicability of its basic concepts, it will never be overthrown."
Colitis is inflammation of your colon, also known as your large intestine. If you have colitis, you'll feel discomfort and pain in your abdomen. This discomfort may be mild and recurring over a long period of time, or severe and sudden in onset. There are different types of colitis, and treatment varies depending on what type you have. The types of colitis are categorized by what causes them.

1. Ulcerative colitis

UC is a lifelong disease that causes inflammation and bleeding ulcers within the inner lining of your large intestine. It generally begins in the rectum and spreads to the colon. UC is the most commonly diagnosed type of colitis. It occurs when the immune system overreacts to bacteria and other substances in the digestive tract, but experts don't know why this happens. Common types of UC include:
- proctosigmoiditis, which affects the rectum and lower portion of the colon
- left-sided ulcerative colitis, which affects the left side of the colon beginning at the rectum
- pancolitis, which affects the entire large intestine

2. Pseudomembranous colitis

Pseudomembranous colitis (PC) occurs from overgrowth of the bacterium Clostridium difficile (C. diff). This kind of bacteria normally lives in the intestine, but it doesn't cause problems because it's balanced by the presence of "good" bacteria. Certain medications, especially antibiotics, may destroy healthy bacteria. This allows C. diff to take over, releasing toxins that cause inflammation.

3. Ischemic colitis

Ischemic colitis (IC) occurs when blood flow to the colon is suddenly cut off or restricted. Blood clots can be a reason for sudden blockage. Atherosclerosis, or buildup of fatty deposits in the blood vessels that supply the colon, is usually the reason for recurring IC. This type of colitis is often the result of underlying conditions. These may include:
- vasculitis, an inflammatory disease of the blood vessels
- colon cancer
- blood loss
- heart failure
- obstruction or blockage
- trauma or injury

Although it's rare, IC may occur as a side effect of taking certain medications.

4. Microscopic colitis

Microscopic colitis is a medical condition that a doctor can only identify by looking at a tissue sample of the colon under a microscope. A doctor will look for signs of inflammation, such as lymphocytes, which are a kind of white blood cell. Doctors sometimes classify microscopic colitis into two categories: lymphocytic and collagenous colitis. Lymphocytic colitis is when a doctor identifies a significant number of lymphocytes, but the colon tissues and lining are not abnormally thickened. Collagenous colitis occurs when the colon's lining becomes thicker than usual due to a buildup of collagen under the outermost layer of tissue.

Doctors do not know exactly what causes microscopic colitis. However, they do know some people are more at risk for the condition. People at a higher risk include:
- current smokers
- those assigned female at birth
- those with a history of an autoimmune disorder
- people older than age 50
- people taking certain types of medications

The most common symptoms of microscopic colitis are:
- chronic watery diarrhea
- abdominal bloating
- abdominal pain

5. Allergic colitis in infants

Allergic colitis is a condition that can occur in infants, usually within the first months after birth. The condition can cause symptoms in infants including:
- excessive spitting up
- possible flecks of blood in a baby's stool

Doctors don't know exactly what causes allergic colitis.
One of the most popular theories is that infants with allergic colitis have an allergic or hypersensitive reaction to certain components in breast milk. A 2020 review of studies indicated that a protein allergy, whether through breast milk, cow's milk, or formula, could contribute. Eosinophilic colitis is a type of allergic colitis that can also show up in infants with these symptoms. Its causes are similarly unclear.

Doctors will often recommend an elimination diet for the birthing parent, which involves slowly cutting out certain foods known to contribute to allergic colitis. Examples include cow's milk, eggs, and wheat. If the baby stops having symptoms of allergic colitis, these foods were likely causing the problem. In severe cases, monoclonal antibodies may also be considered.

Other causes of colitis include infection from parasites, viruses, and food poisoning from bacteria. You may also develop the condition if your large intestine has been treated with radiation.

Different risk factors are associated with each type of colitis.

You're more at risk for UC if you:
- are between the ages of 15 and 30 (most common) or 60 and 80
- are white or of Ashkenazi Jewish descent
- have a family member with UC

You're more at risk for PC if you:
- are taking long-term antibiotics
- are hospitalized
- are receiving chemotherapy
- are taking immunosuppressant drugs
- are older
- have had PC before

You're more at risk for IC if you have one of the underlying conditions listed above, such as heart failure or vasculitis.

Depending on your condition, you may experience one or more of the following symptoms:
- abdominal pain or cramping
- bloating in your abdomen
- unexpected weight loss
- diarrhea with or without blood
- blood in your stool
- urgent need to move your bowels
- chills or fever

A doctor may ask about the frequency of your symptoms and when they first started. The doctor will perform a thorough physical exam and use diagnostic tests such as:
- colonoscopy, which involves threading a camera on a flexible tube through the anus to view the rectum and colon
- sigmoidoscopy, which is similar to a colonoscopy but shows only the rectum and lower colon
- stool samples
- abdominal imaging such as MRI or CT scans
- ultrasound, which can be useful depending on the area being scanned
- barium enema, an X-ray of the colon after it's injected with barium, which helps make images more visible

Treatments aim to reduce symptoms and can vary by factors such as:
- type of colitis
- overall physical condition

Limiting what you consume by mouth can be useful, especially if you have IC. Taking fluids and other nutrition intravenously may be necessary during this time.

Your doctor may prescribe various medications to help manage colitis symptoms. These medications may include:
- anti-inflammatory medication such as 5-aminosalicylates or corticosteroids to treat swelling and pain
- immune system suppressors such as tofacitinib (Xeljanz), azathioprine (Azasan, Imuran), or cyclosporine (Gengraf, Neoral, Sandimmune)
- biologics such as infliximab (Remicade), adalimumab (Humira), and ustekinumab (Stelara)
- antibiotics to treat infection
- pain medications
- antidiarrheal medications
- antispasmodic drugs
- supplements for nutritional deficiencies

Surgery for colitis could include removing part or all of your colon or rectum. This may be necessary if other treatments don't work.
These surgeries could include:
- ileal pouch-anal anastomosis (IPAA), in which the ileum (the end of the small intestine) is turned into a pouch that then connects to the anal canal
- proctocolectomy, in which the colon (and sometimes the rectum) are removed
- ileostomy, in which the ileum is connected to the abdominal wall, and a stoma (an opening in the abdomen) is created to allow waste to leave the body
- continent ileostomy, in which the end of the ileum is secured inside the interior of the abdomen; this is a possible but uncommon surgical procedure for colitis

The only definitive way to prevent a colitis flare-up is to have surgery. If you're looking to prevent flare-ups without surgery, there are ways to decrease their likelihood:
- Keep a food log to track which foods may cause an increase in symptoms.
- Ask your doctor if you should change your fiber intake and how much to eat.
- Ask your doctor if eating smaller meals more frequently will help you.
- Increase your activity levels if you can.
- Learn ways to help manage stress, such as meditation, yoga, and mindfulness exercises.
- Always take medications as prescribed, and tell your doctor if you have not.
- Make sure your doctor knows about all of your other medications and supplements, including vitamins.

Always check with your doctor before changing your diet or adding any new supplements.

While every person may experience diarrhea and abdominal cramps from time to time, speak with a doctor if you have diarrhea that does not seem to be related to an infection, fever, or any known contaminated foods. Other symptoms that indicate it's time to see a doctor include:
- joint pain
- rashes that have no known cause
- small amounts of blood in your stool, such as slightly red-streaked stool
- stomach pain that keeps coming back
- unexplained weight loss

Seek immediate medical attention if you see a significant amount of blood in your stool. In all cases, early detection is critical to recovery and may help prevent other serious complications. If you feel that something is not right with your stomach, it's best to talk with a doctor. Listening to your body is important to staying well.
China has a massive push to electrify its buses, cars, trucks and taxis, and to eliminate the air pollution from them, as fast as possible. Vehicle exhaust emissions contributed between 13.5 and 52.1 percent of all major pollutants in 15 heavily contaminated cities such as Beijing, Tianjin and Shanghai. They cause several environmental problems, including dust haze, acid rain and photochemical smog.

China Has the Most Cars and Buses in the World and Will Have Three Times as Many Trucks as the USA

In 2017, China had about 310 million motor vehicles, up 5.1 percent from 2016, which has resulted in an increase in combined air pollution from coal burning and vehicle exhaust emissions. China is adding about 25 million cars, trucks, buses and taxis in 2018. China has been the world's largest motor vehicle market, in both production and sales, for nine consecutive years. China has three times the heavy trucks of the USA. China now has more cars than the USA. China is adding over half of the world's electric cars and 99% of the electric buses.

In 2016, coal-generated power accounted for 72% of China's electricity. Electric cars and even buses must use the electrical grid to charge, so with the grid at 72% coal, coal is effectively charging the buses. However, a large number of electric buses means batteries at utility scale for absorbing more renewable power from solar and wind. Solar charges during the day, and you need a lot of batteries to use that power overnight. Buses charge throughout the day as they run their routes, so they are not the best for smoothing out the electricity that is available.

Coal-powered electricity with electric buses is still an improvement over diesel buses. The coal plants in China are now far away from the cities. Electric buses are more efficient in overall energy use: over 70% less energy is used, and overall CO2 emissions are halved. All of the air pollution is removed from the cities. There is zero soot and other pollutants in the city. Any air pollution is at the power plant, which can have 99% efficient air pollution control systems. China has turned on the air pollution control systems despite the 20% cost impact.

There is a statement that there are 3 million buses in the world. I have not found a country-by-country breakdown of buses. I also assume that more buses are being added, at least in China, as China continues to urbanize and is doubling the public transportation systems in its major cities. The public transportation doubling is new and expanded subway systems. However, this would go hand in hand with an expansion of the bus networks. Therefore, it seems reasonable to believe that the bus network in China will double from 2017 levels by 2025 or 2030.

Shenzhen did achieve 100% electric buses. There were over 16,000 electric buses in Shenzhen alone in 2017, nearly double London's total bus count. So the new electric buses are not just part of the bus network expansion but are displacing old buses. Old buses would be the most polluting.

Every 1,000 electric buses displaces about 500 barrels of oil per day, so 680,000 buses would displace about 340,000 barrels of oil per day. This is not a large part of China's roughly 13 million barrels per day of oil usage. However, it is about 10% of China's 3.5 million barrels per day of oil production. It makes a dent in China's oil dependence. The displaced volume is comparable to the output of roughly the 20th-largest oil-producing country. China has about 25% of all cars.
I believe they have a larger share of the buses. I think it is about 35% of the world's buses and could conceivably rise to 50%. Therefore, about 1 million buses in China in 2017 is my estimate, and I estimate that China is adding on the order of 50,000-80,000 buses every year. China will have in the range of 1.5 million to 2 million buses by 2030. I can see that part of the national plan would be to get more public transportation density. Singapore's future city transportation plan is a mix of self-driving cars, self-driving buses and public subway transportation. I believe this would be followed in the close to one hundred Chinese cities that will be of comparable size to or larger than Singapore by 2025.

Forecast of Buses to 2030

China is adding in the range of 100,000 electric buses every year. They had 385,000 electric buses in 2017, should have about 480,000 in 2018, and will easily have 600,000 by 2020. A widely reported statement is that China is adding about 9,500 electric buses every five weeks, or more electric buses than all the buses in London every five weeks. This would mean about 98,800 buses per year. Assuming the trend of the last 3-4 years continues, China will have 580,000 electric buses in 2019 and 680,000 in 2020. I have not been able to find a report on the total number of buses in China and how many buses are being added. There is an 89-page report on China's transportation plans with regard to addressing pollution.

The world is on track for 1.2 million electric buses by 2025. This is 99% China. The 3 million bus count for the world in 2017 will not stay static. A city, even a city in China, cannot build subways fast enough for urban expansion. Buses have to be used as part of the network, either as a fast stopgap or to shuttle the last few miles of urban sprawl. China has urban sprawl. China is heading to 70% urban, and a lot of that urban population is in megacities. I estimate 2 million electric buses in China by 2030. This would offset 1 million barrels of oil per day (this fleet arithmetic is sketched in code at the end of this article).

Air Pollution Fix With Cities Prioritized

China's priority is fixing air pollution in cities with transportation improvements. China is also building 1-2% more city every year. You might think 1-2% is a small number. In China's case, it means adding the equivalent of a Los Angeles of city every year or two. Fixing air pollution in cities will give back the 7% of GDP lost to air pollution damage. It will improve health and reduce health care costs.

In 2017, China's automobiles emitted around 436 million tonnes of pollutants, including 333 million tonnes of carbon monoxide (CO), 57.4 million tonnes of nitrogen oxide (NOx), 40.7 million tonnes of hydrocarbon (HC) and 5 million tonnes of particulate matter (PM). The total amount of emissions of the four pollutants from motor vehicles was down 2.5 percent from the previous year. So despite adding 5% more vehicles every year, China has been able to reduce air pollution from vehicles.

China's electricity is still mostly produced by coal power, and coal is by far the most polluting source. However, China has activated modern air pollution mitigation systems on the coal plants. Most of the coal plants are new and have all modern air pollution controls. Modern pollution control can reduce air pollutants by about 99%. Previously, operators at the coal plants did not use the pollution controls because they increase costs by about 20%.
Now China's regulators are forcing the controls to be on, as they need to address the emissions. They always needed to address the emissions, but the complaint was that the middle class in the cities was being sacrificed for development.

New energy vehicles include pure electric vehicles, extended-range electric vehicles, hybrid vehicles, and fuel cell electric vehicles, among others. Companies are offering different types of electric buses to meet market demand, and the industry has developed rapidly, with the production and sales volume of new energy vehicles increasing 101-fold in the last five years.

In 2015, the Ministry of Transport, Ministry of Finance, and Ministry of Industry and Information Technology jointly issued a stipulation that the proportion of new energy buses among those added and replaced in Chinese cities must be divided proportionally: 80% in Beijing, Shanghai, Tianjin, Hebei, Shanxi, Jiangsu, Zhejiang, Shandong, Guangdong and Hainan; 65% in Anhui, Jiangxi, Henan, Hubei, Hunan and Fujian; and 30% in other provinces (regions and cities). At the same time, national and local governments have introduced a series of subsidy incentives and tax reductions to encourage and promote the development of new energy vehicles. The subsidy for large- and medium-sized fuel cell buses is 500,000 yuan, about US$70,000 per bus.

China Has a Two-Phased Approach on Trucks

China has the most trucks in the world, and since they add three times as many trucks as the USA, they must have in the range of 50-65% of the world's trucks. If they don't have that yet, they will. The two-phased approach is to clean up all emissions on the diesel trucks and then focus on short-range battery trucks. The priority for batteries now is the buses and cars. China will also work on the overall energy efficiency of trucks. China is already enforcing the strictest European (Euro 6) air pollution emission standards on trucks.

China has most of the world's trucks. China is adding three times as many trucks as the USA every year. China is the factory for the world. All of the fridges, electronics and furniture that they are building have to be moved from factories to ships for internal or external customers. Only 20% is exported on average. China has a lot of small truck operators, and they overload their trucks to maximize what they can carry. The trucks have been cheaper, domestically produced models, and they were more polluting. China will have all large trucks modified for emission controls. The trucks are a massive source of air pollution. Cleaning up the trucks completely with diesel pollution controls and some electrification by 2030 will positively impact China's air pollution, Chinese city air quality, world air pollution and global warming.

Particulates, soot and black carbon are similar and overlapping categories. They are the bits of leftover material from incompletely burned fossil fuels, and they also come from the unburnt residue of slash-and-burn agriculture. Any fire that is not properly managed makes black carbon, soot and particulates. It takes a lot of work to burn cleanly, or to filter what is left and prevent it from becoming air pollution.

Soot and black carbon are twenty times cheaper and faster to manage than CO2 fixes, and there is overlap between fixing black carbon and fixing CO2. Around half a degree of warming could be prevented and rolled back by cleaning up transportation. Transportation is 20-30% of the soot and black carbon problem globally.
China's massive move on buses, trucks, electric cars and taxis could address close to half of the world's transportation pollution, which is about 5-12% of world particulate pollution. This could improve global warming by 0.1 degrees. Fixing all soot and black carbon globally would be a 0.5 to 0.8-degree reduction in warming. It is the fastest and most impactful way to actually bend the temperature trend. The other major parts are stopping slash-and-burn agriculture in favor of slash-and-char, and getting soot-free cooking and heating in the developing world. Soot-free cooking is the biggest. Plus, fixing soot and black carbon first saves up to 7 million lives per year. China's part, addressing air pollution in cities with transportation and providing soot-free cooking in rural areas and heating in northern cities, will save on the order of 1 million lives per year.

There is also the electrification of regular passenger cars, ride-sharing and self-driving taxis. China is on pace for 25% of all new cars in 2025 being electric cars. It is conceivable that nearly all new cars in China could be electric self-driving cars by 2030. There are limiting factors: the supply chain for batteries and other components, and the factories to put the batteries and cars together. It is not just batteries for all new electric cars; it is batteries for cars, buses, trucks, utility-scale storage, planes, drones and more. There is the nuance of where batteries have the most or best impact compared to alternatives. Batteries are scaling for a 17X increase by 2030. This is still not nearly enough. Therefore, prioritization must be performed.

Millions of lives per year saved within 20 years, for 20 times less cost than CO2 fixes. More temperature effect, delivered cheaper and 40 years faster than CO2 measures. This is the actual trillion-dollar environmental fix in action, not the insane and impractical call to get off fossil fuels, which would leave 4 billion in poverty and slow the saving of millions of lives per year. This is the true scale and level of action now.

If the world wants this to go five or ten times faster with real impact, then this needs to be replicated across India and Asia and into Africa, and there needs to be factory-based or adapted-shipyard mass production of molten salt or other nuclear reactors. Replace all coal burners with high-temperature (700 degrees Celsius) nuclear, at temperatures that match the coal plants being replaced. The grid is already there for the coal plant. You would not be upending and building new grid and infrastructure out in remote areas for massive solar. The supply chains stay in place. Moving or building new grid will take you a hundred years and would be a stupid and uninformed plan. What I am suggesting is faster, cheaper, more effective and would actually work.

Overall air pollution and climate control in China is also happening. That is another article. I have an article from October which needs to be updated and expanded.
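For readers who want the fleet arithmetic in one place, here is a minimal sketch in Python. It is my own illustration using only the round numbers quoted in this article (500 barrels per day displaced per 1,000 buses, the fleet estimates by year, and 3.5 million barrels per day of domestic oil production); none of it comes from an official dataset.

```python
# All figures are the article's own round estimates, not official data.
BARRELS_PER_BUS_PER_DAY = 500 / 1000   # "every 1,000 electric buses displaces 500 barrels/day"
CHINA_OIL_PRODUCTION_BPD = 3_500_000   # domestic oil production, barrels per day

# Electric-bus fleet estimates for China by year, as quoted above.
fleet_estimates = {
    2017: 385_000,
    2018: 480_000,
    2019: 580_000,
    2020: 680_000,
    2030: 2_000_000,   # author's estimate
}

for year, buses in sorted(fleet_estimates.items()):
    displaced = buses * BARRELS_PER_BUS_PER_DAY
    share = displaced / CHINA_OIL_PRODUCTION_BPD
    print(f"{year}: {buses:>9,} buses -> {displaced:>9,.0f} barrels/day displaced "
          f"({share:.0%} of domestic oil production)")
```

Running it reproduces the figures quoted above: roughly 340,000 barrels per day displaced in 2020 (about 10% of domestic production) and about 1 million barrels per day at the 2 million-bus estimate for 2030.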
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology. Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels. A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and a guest in numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
A stormwater utility is similar to the water, sewer and other utilities that you are familiar with. These utilities charge a fee for services provided. In this case, the service is the control of stormwater runoff through construction, operation and maintenance of the stormwater system within the Town's municipal limits.

There are two main reasons. First, additional revenue is needed for stormwater operations. The U.S. Environmental Protection Agency is requiring the Town of Fort Mill, and numerous other municipalities, to improve stormwater operations to prevent pollution and improve stormwater quality as part of the National Pollutant Discharge Elimination System. Second, dedicated revenue is needed to maintain and improve the storm sewer system.

Yes. There are many stormwater utilities in large and small communities throughout the nation, with many more in the planning stages. Locally, 32 municipalities in South Carolina have implemented stormwater utilities, based on a survey conducted in 2012.

Federal laws regulating stormwater runoff require the Town of Fort Mill to manage the stormwater that runs off impervious surfaces such as concrete, asphalt, or rooftops. Stormwater runoff carries pollutants directly to our streams and rivers, creating flooding issues and contaminating our local waterways. To learn more about the impacts of stormwater runoff, please visit the Stormwater Department website.

A major stormwater quality concern is "non-point source pollution." As the name implies, non-point source pollution comes from numerous locations and is carried through runoff. These pollutants directly impact water quality and now represent the number one pollution source for our waterways. Activities such as street sweeping, elimination of leaking sanitary sewers, and increased cleaning of storm drains can control these pollutants.

Drainage problems may include roadway or structural flooding, clogged or failing underground pipes and culverts, stream bank erosion and stormwater pollution affecting a stream. Historically, the allocation of funds has not been sufficient to address all of the Town's stormwater service needs. State and federal laws also require that municipalities address the environmental impacts of stormwater pollution, but do not provide the funds to do it. Consequently, the Town must investigate alternative means of raising revenue.

No. Only sewage is collected and transported to the Town's wastewater treatment plant by the sanitary sewer system. Stormwater flows through the storm sewer systems, ditches, and channels. It empties into our streams, ponds, and lakes. It would be too expensive to size the sanitary sewers to convey and treat stormwater in the same manner as sanitary sewage. The volume of sewage generated by our homes and businesses each day is insignificant compared to the volume of stormwater runoff generated during a rainstorm. The better solution is to prevent the entry of pollutants into the stormwater system in the first place. The Town is also responsible for the water quality of natural streams within its jurisdiction, as defined by the State and the Environmental Protection Agency. The Town does not maintain facilities that are located on private property or that fall under other governmental jurisdictions.
The stormwater utility will provide the funds necessary for the administration, maintenance, and improvement of the Town's stormwater systems.

The stormwater utility fee charges properties in Fort Mill based on each property's contribution to the need for stormwater management. The utility uses the square footage of impervious surface, or surface that water is unable to soak into, on a property as the primary basis for the fee. The vast majority of utilities across the country have found this to be the most equitable way to charge and collect revenues for this program.

A stormwater utility fee is similar to a water or sewer fee. In essence, customers pay a fee related to the amount of runoff generated from their site, which is directly related to the amount of impervious surface on the site. Impervious surface means a surface composed of any material that significantly impedes or prevents natural infiltration of water into the soil. Impervious surfaces include, but are not limited to, roofs, buildings, streets, parking areas, and any concrete, asphalt, or compacted gravel surface.

The Town measures the amount of impervious surface (roofs, sidewalks, driveways, parking lots, etc.) using the number of Equivalent Residential Units (ERUs) per property. One ERU is equal to 3,473 square feet, which is the median amount of impervious area found on a typical single-family residential property in Fort Mill. All single-family residential properties are charged one ERU. Other properties are charged in proportion to this billing unit, based on the calculated number of ERUs for the existing impervious area multiplied by the ERU rate. For example, if your property has four times the impervious area of one ERU, you will be charged four times the base rate of $72 per year (i.e., 4 ERUs, or $288 per year). This billing methodology is typical of other stormwater funding programs in the United States; a short worked example is sketched below. The impervious surface is calculated using the County's 2009 aerial photography. If a property owner believes that the area of impervious surface has changed since the 2009 aerial photography was produced, the property owner can apply for an evaluation.

The Town is responsible for compliance with new federal and state regulations on water quality, as well as providing stormwater management facilities and services. These services are provided to protect personal and public property and to maintain a healthy environment. Funding is not provided by the federal or state government for these services. These requirements are unfunded federal mandates that have been imposed upon all cities similar in size to Fort Mill.

All developed properties are charged a stormwater fee. Properties paying the fee include residential, commercial and industrial properties, non-profit organizations, federal, state and Town owned properties, and schools. The only exceptions are public streets, which are designed to collect and carry stormwater runoff.

Yes, because it is a user fee, just like water and sewer fees, which are based upon the cost of services provided. Property taxes are based on the assessed value of the property; the stormwater utility fee is based on the amount of impervious surface area a property has. Because this is not a tax, it is collected from all customers who receive service. Tax-exempt properties contribute a significant amount of runoff to the Town because of their size and amount of hard surface. They will be treated like all other customers under the rate structure.
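The fee formula reduces to a one-line calculation. Here is a short sketch (my own illustration, using only the figures quoted above: one ERU = 3,473 square feet of impervious surface and a $72 annual base rate; any rounding policy the Town may apply is not modeled).

```python
ERU_SQFT = 3_473          # one ERU: median impervious area of a single-family lot
RATE_PER_ERU = 72.0       # dollars per ERU per year

def annual_stormwater_fee(impervious_sqft, single_family=False):
    """Annual fee in dollars under the ERU method described above.

    Single-family residential properties pay a flat one ERU; all other
    properties pay in proportion to their measured impervious area.
    """
    erus = 1.0 if single_family else impervious_sqft / ERU_SQFT
    return erus * RATE_PER_ERU

print(annual_stormwater_fee(0, single_family=True))  # 72.0  (one ERU)
print(annual_stormwater_fee(4 * ERU_SQFT))           # 288.0 (the 4-ERU example above)
```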
Yes, these include:

Your property may not be physically connected to the drainage system in the same manner as water or sewer, but you are still provided service. How? The Town's stormwater program improves and maintains stormwater facilities throughout the Town. It establishes design criteria and regulates development, which helps control off-site stormwater problems. The program is taking steps to reduce the stormwater pollutants that degrade our water quality and the Town's environment. Every property owner in Fort Mill is served by these activities.

Everyone in the Town benefits from the Stormwater Management Program. The fees collected through the stormwater utility are dedicated solely to managing the Town's stormwater program. This program brings us into compliance with state and federal regulations and safeguards our community through improved drainage and protection of our local waterways.

Yes, as long as the property contains impervious area.

Under most conditions, the bill will go to whoever pays the tax bill for the property. The stormwater fees are subject to the same payment deadlines and penalties as your property taxes. The stormwater fee will appear as a line item on your annual tax bill.

Property taxes are based upon the assessed valuation of land and its improvements. These values have little relationship to an individual property's use of the storm drainage system. A service fee, applied to all parcels, is a more equitable method of funding the program. Many tax-exempt properties, such as schools, churches and government agencies, are large contributors to the stormwater runoff problem. They will pay their share of the utility fee.

No. Stormwater Utility crews can only maintain ditches or other drainage facilities on private property if the facility is within a drainage easement granted to the Town. Without an easement, the responsibility for maintaining the ditch, pipe or channel falls on the property owner.

The Stormwater Department is planning to hire a two-man crew who will regularly inspect and maintain the Town-maintained stormwater system. If you have noticed a stormwater drainage problem, please call the Stormwater Manager at 803-396-9730. Residents are encouraged to help by keeping storm drains near their homes and businesses clear of debris.

Yes. Property owners who reduce the amount of runoff leaving their property may have their fee reduced by a percentage. This can be accomplished by using specially designed systems, such as detention ponds or rain gardens, that manage stormwater runoff and clean pollutants from the stormwater. To learn more about the credit system, please read the Credit Policy Manual (PDF).

Under law, stormwater fees may not exceed the cost of providing stormwater improvements and services. Your fees will go into an "enterprise" or special fund that will be used only for the stormwater program.

Users who do not agree with the amount of the stormwater charge they receive should contact the Stormwater Manager at 803-396-9730. If, after contacting the Stormwater Manager, you still feel your stormwater fee is incorrect, you may provide a written request for reduction to the Stormwater Advisory Board.
Your request should detail any information that supports your position about property ownership, the amount of impervious surface area on your property, or your stormwater class. Any questions regarding the Town of Fort Mill Stormwater Utility should be directed to the Stormwater Manager at 803-396-9730. You can also obtain information by calling the Stormwater Hotline at 803-548-9098.

Information on how to apply for a land disturbance permit can be found on the Land Disturbance Page. For specific questions about your project, please use the contact information on our home page. The permit fee is $250.00 per disturbed acre, rounded up to the nearest acre. Everyone needs a permit: a project requiring any type of land disturbance will require a permit application. Stormwater issues can be reported through our hotline at 803-548-9098 or our online form.

We currently have two separate Adopt-A-Stream programs! We are also open to coordinating specific service projects with groups, and we can sign off on volunteer hours for school clubs. Please contact our Stormwater Management Coordinator at [email protected] with specific questions. Please email our Stormwater Management Coordinator for all inquiries about Drippy the Friendly Water Drop! To find out who owns a particular road, you can contact the South Carolina Department of Transportation or use the York County GIS. The Town can only assist with Town-owned roads, but we are happy to answer any questions.

An illicit discharge is defined as any discharge to the municipal separate storm sewer system that is not composed entirely of stormwater, except for discharges allowed under an NPDES permit or waters used for firefighting operations. Illicit discharges occur through either direct connections, such as piping mistakenly or deliberately connected to the storm drains, or indirect connections, such as infiltration from cracked sanitary sewer pipes and degradation of older manholes. As a result of these illicit connections, contaminated wastewater enters storm drains, or flows directly into local waters, without receiving treatment from a wastewater treatment plant. Illicit connections may be intentional or may be unknown to the property owner. Additional sources of illicit discharges include failing septic systems, illegal dumping practices, and the improper disposal of sewage from recreational practices such as boating or camping. Pollutants from sources like these are harmful to the environment and degrade the beauty and health of our community. We need to keep them out of our lakes, rivers, and wetlands. The following discharges are allowed in the Town of Fort Mill:
The authors define anticipatory thinking as the ability to prepare in time for problems and opportunities. There are three types of anticipatory thinking that the authors recognize at this point in their research, though they predict more will emerge:

- Pattern matching for individual cues
- Trajectory tracking for trends
- What they call a conditional form, where people react to the implications of combinations of events

In this paper they discuss what problems can arise that prevent anticipatory thinking, and also some ways to improve it. "Anticipatory thinking is the process of recognizing and preparing for difficult challenges, many of which may not be clearly understood until they are encountered." This seems like a great skill to have in our field of software, dealing with complex systems and large-scale incidents.

The authors point out that in most cases, the ability to use anticipatory thinking is actually a mark of expertise. They cite research from the 1940s showing that chess Grand Masters, while determining what move to make next, would react almost instantly, positively or negatively, to a move they had in mind. The Grand Masters didn't have to go through a long process of reasoning out the move; they knew almost immediately whether its effects were good or bad.

It's important to note that anticipatory thinking is not prediction. There is an overlap, but it is a mistake to look at anticipatory thinking as a way of predicting what's going to happen. They have a quote that I like, and maybe I'm biased since I live in Las Vegas: "we are gambling with our attention to monitor certain kinds of events and ignore/downplay others". To demonstrate this, they look at an eye-tracking study of new and experienced drivers. Experienced drivers actively look for hazards: they scan around for things that might cause trouble. New drivers don't do that; they are strongly focused on just staying in their lane and keeping the car on the road. It's not that the more experienced drivers are predicting that there will be a problem; they're managing their attention. The experience they've developed has created the ability to notice the patterns that could signal trouble. These signals are present, of course, for the new drivers too, but they don't yet have the sensitivity to these weaker inputs to make sense of them. We do the anticipatory thinking in our heads, but we express it through how we pay attention, what we pay attention to, and at what level. Anticipatory thinking depends on our capability to prepare, not just our ability to predict future states of the world.

Forms of anticipatory thinking

The first form is pattern matching, which is also a hallmark of expertise: the longer you've been working at something and the more experienced you are at it, the larger the database of patterns in your head that you can match against. This helps you recognize certain things almost immediately. Anticipatory thinking doesn't only involve the detection of problems, but that is one of its best uses: providing an early warning system before you run into trouble.
This is often why experts may say that something doesn't feel right and then begin to redirect their attention, even if at that moment they cannot quite put their finger on why. The downside to pattern matching is that it may give us overconfidence: we become overly sure of a diagnosis and, as a result, fail to notice something different that might have been seen by someone else.

The next type of anticipatory thinking is trajectory tracking. Just like it sounds, this is how people get ahead of the curve on unfolding events and prepare for how those events might develop. Trajectory tracking is not the same thing as pattern matching; it is a more difficult task. Instead of matching an input against some recognized template, we need to compare what we're observing with what is expected to happen. The authors point out something interesting that I think has implications for us as software practitioners, especially in an organization that does post-incident review: in this area of anticipatory thinking, people use narrative. By looking at what others have experienced and listening to their stories, we gain more patterns that we can then use to shape our response to future events as we do this trajectory tracking; more data around which we can calibrate our expectations. As we look at the trajectory of events, we're able to see more possible places where different people or situations could end up.

The last type is convergence, which is seeing connections between events. Instead of having a cue that triggers us to either match a pattern or start predicting a trajectory, this is where we think about the interdependencies of events and how they connect together. This is where "we notice an ominous intersection of conditions, facts, and events." The authors use an example of where this didn't happen: a friendly-fire incident between some F-15s and a Black Hawk in the 90s. Essentially, no one along the way had anticipated a number of events affecting the code aircraft transmit to show whether or not they are an enemy. They hadn't anticipated the consequences of those changes in combination with changes to their handoff procedures, after which the disaster happened. There were a number of contributing factors, each representing a failure of convergent anticipatory thinking: a failure to appreciate how problems might arise in the future. The authors suggest that convergent thinking might require some degree of mindfulness, but regardless of whether it is a deliberate action or not, convergent anticipatory thinking requires noticing some amount of inconsistency. And for the convergent form, in addition to noticing these inconsistencies, we need to notice connections as well.

Components of anticipatory thinking

Anticipatory thinking is made up of sensemaking and explanation/diagnosing. This is absolutely critical to coordination, which is something I've discussed in a previous issue. Effective teams need to be able to predict each other's reactions and have some ability to anticipate how they'll respond to different events. Anticipatory thinking is more than just gathering a list of inconsistencies or discrepancies and attending to it when it passes some threshold; it is a form of problem detection.
Instead, this often requires us to reframe our understanding of the system in order for the evidence we're noticing to become significant. This ability to compare what we expect to happen with unfolding events is the basis of common ground. If we didn't have it, we also wouldn't have the ability to be surprised by others and then repair common ground as a result.

Obstacles to anticipatory thinking

Unhappily, the set of barriers that interfere with anticipatory thinking is fairly long. It includes things like fixation, explaining away inconsistencies instead of seeing their significance, or simply being overconfident in our diagnosis or understanding of the situation. And on top of this there are organizational factors, as there always are: policies that keep information from reaching you, a big gap between the people who use the data and those who collected it, or difficulty in directing someone's attention. Directability is another component of common ground.

Given this, the authors did a study to see if they could improve anticipatory thinking in small teams, specifically by fighting fixation. In practice, fixation can look like dismissing alarms or other signals; the goal is to increase the chance of someone noticing and responding to a weak signal. They set up the study using different military and intelligence scenarios and gave them to seven teams of about four to five people each. In each scenario, they planted weak signals so they could observe whether the team noticed them, and whether that altered team behavior. At pre-established points, each team member was to stop and write notes about what was going on, and team discussions were monitored as well.

They found that there was always at least one person in a group who noticed the planted signals and even noticed what the implications might be. About half the group would at least notice the weak signals, but perhaps miss the implications. But no one took these signals seriously: if they were mentioned, they were dismissed, so the group never acted on them. This helps show that the common admonition to "pay more attention" doesn't work. These people saw the signals, but didn't always know what to make of them, or couldn't get the group to act on them. The challenge, then, is not just to help people recognize weak signals, but to help organizations and teams do something about them when individuals perceive them.

Improving anticipatory thinking

Given the results of the study, the authors then examine what can help. One approach we've discussed before is the value of having someone from outside the team: that diversity of perspective, as well as bringing in fresh eyes. Another strategy is to try to overcome weak mental models; one solution they give is to develop expertise and to have an organization that values it. They also looked at different forms of "ritualized dissent": for example, having those people with fresh perspectives bring actual, authentic disagreement, or having an organization that expects members to voice unpopular but sincere beliefs. These methods seem to have helped teams perform better, but it's important to note that these are not contrived beliefs offered merely to play devil's advocate.

Anticipatory thinking isn't just another new term. It is a form of sensemaking, looking forward rather than retrospectively.
… We believe that anticipatory thinking is critical to effective performance for individuals and for teams.

- Anticipatory thinking is a key component of team coordination.
- Advice to "pay more attention" doesn't help anticipatory thinking.
- The use of narrative can help others improve their anticipatory thinking skills.
- "Devil's advocates" don't seem to help team performance, though expressed, authentic differing viewpoints do.
- Even when weak signals are noticed by individuals and the potential consequences understood, it can be difficult to get a team or organization to act on them.
- Being able to anticipate the behavior and actions of others, and to sometimes be surprised, is the basis of common ground.
The Hair Retail Industry Before 1989:

Before 1989, the hair industry was relatively small. It primarily served the American and European wig markets. Clients included cancer survivors, women with alopecia, and Orthodox Jewish women. Hair was sourced from Western European nunneries and convents as well as from smaller local collectors in rural areas. Collectors also sourced hair from Eastern Europe and the Soviet Union. European Remy hair extensions and wigs came in a variety of colors, and manufacturers were able to source the natural colors to match their clients' needs. After the collapse of the Soviet Union and the modernization of Europe, hair became scarcer, and manufacturers in Europe and the U.S. began searching for alternate sources.

During the late 1980s and early 1990s, the hub of manufacturing for Remy hair extensions and wigs began migrating from Western Europe and the U.S. to South Korea. The Korean diaspora in the U.S. began rooting itself in the distribution and retail segments of the hair extensions and wigs industry by progressively opening retail stores throughout the U.S. In Western Europe, the South Asian diaspora also entered the Remy hair industry, and its success mirrored the expansion of the Korean-American hair extensions industry in the U.S. Distributors and retail stores began to spring up, with Indians and Pakistanis owning 95 percent of the retail industry, but sourcing from Korean brands in the U.S.

The Hair Retail Industry, 1990 to 2000:

Remy virgin hair and recycled human hair have long been collected in India and used in various industries, the most dominant use being the extraction of amino acids from the virgin hair for various retail consumption products. However, after 1990, Western demand increased, and India became the world's largest producer and exporter of both recycled human hair and Remy human hair (also known as virgin hair and virgin Indian hair) for wigs and the growing Remy hair extensions market. In the early 1990s, manufacturers did not distinguish between Remy hair (virgin hair) and recycled human hair. All hair they received was processed, colored, and made into wigs and hair extensions in the same way. The processing and coloring treatments resulted in hair of a uniform quality, with cuticles stripped and pigment/color added to the hair.

In the late 1990s, a few new Remy hair extensions companies entered the market. They began to use Remy Indian hair in its unprocessed form, which they sourced exclusively from Indian temples. These companies represented the first significant shift in the industry. They determined that the quality of Remy virgin hair, in its natural state, was much higher than that of processed recycled hair. Because Remy virgin hair featured cuticles that flowed in the correct direction from root to tip, the Remy hair extensions would perform like natural hair and could be reused by the consumer. This was the genesis of Remy hair extensions, or virgin hair extensions, as the industry calls them today.

The retail sales and distribution of Remy hair extensions and wigs in the U.S. also expanded during this period. By the year 2000, there were approximately 5,000 Korean-owned beauty supply stores in the U.S., selling brands that were developed and marketed by Korean-American hair companies. African-American women formed their primary customer base. All of the hair sold in these stores was processed hair, with over 90 percent of it being recycled hair.
Remy Virgin Hair and Processed Hair

The Hair Retail Industry, 2001 to 2011:

Between 2001 and 2011, the demand for Remy hair and virgin hair extensions began to increase as discerning consumers began to grasp and appreciate the difference between Remy virgin hair and recycled processed hair. Due to the growing demand for virgin hair extensions and the fact that the supply of Remy hair came exclusively from the temples of India, the price of Remy Indian hair began to increase. Only a handful of companies in the U.S. were selling authentic Remy hair extensions. But because there were no regulations governing the human hair industry, recycled processed hair sold at beauty supply stores was also being labeled Remy hair extensions. Many consumers could not understand why the price of "Remy hair extensions" at the beauty supply stores (actually recycled processed hair) was dramatically lower than that of the Remy hair extensions (virgin hair) sold by the handful of authentic Remy hair companies. This confusion arose as a direct result of the lack of industry regulations: brands selling recycled processed hair and claiming it to be Remy hair faced no punitive consequences.

Facing little competition, no regulatory authority, and complete control of the supply chain, beauty supply stores were able to keep consumers in the dark. Most customers had no idea that the hair extensions they purchased and believed to be Remy hair were actually recycled processed hair. The deceptive nature of positioning recycled hair as Remy hair was evident in the marketing of certain brands, which would promote "100% Remy hair – good for four washes without tangling." True Remy hair will last through an endless number of washes without tangling, as long as the consumer cares for their Remy hair extensions as they would their own hair.

By 2005, Remy natural hair was also being offered through a retail channel previously unknown to the hair industry: the internet. The advent of hair sales via the internet was the second significant shift in the industry, because clients were now able to purchase hair from a retailer that wasn't a beauty supply store. It was a slow and arduous journey for the nascent Remy natural hair (virgin hair) companies establishing themselves and trying to retail products in a competitive industry.

In 2010, Malaysia Hair Imports began using the label "virgin hair" to distinguish its Remy hair extensions from the standard recycled processed hair offered at the ubiquitous beauty supply stores. Because the label "virgin hair" wasn't being used by any established brands, it allowed the company to clearly inform clients of the unique attributes of its virgin Indian hair extensions. Demand for authentic Remy virgin hair extensions started to increase, and consequently the pricing from the temples in India increased. In the U.S., Malaysia Hair Imports began to realize that clients needed to be educated on how to use and maintain virgin hair extensions, as opposed to recycled beauty supply store hair. The most important thing for consumers to know was that virgin Remy hair extensions are more durable and can be washed, colored, and reused for long periods of time. In other words, Remy virgin hair is a longer-term investment. Malaysia Hair Imports retail locations provided virgin Remy hair extensions and offered an educational experience for clients for the first time in the history of the industry.
Clients were able to touch and feel the hair, and to understand why the hair had different textures even within the same product and why the hair color was not as uniform as the hair sold at the beauty supply store. In contrast, at beauty supply stores, hair was kept behind counters where it could not be touched, and sales associates were not trained or educated to sell hair. The opening of Malaysia Hair Imports stores signaled the third major shift in the industry. Now consumers could purchase virgin hair extensions from Malaysia Hair Imports retail locations focused on offering high-quality Remy virgin hair extensions, staffed by educated and knowledgeable salespeople who also wore the virgin hair extensions and understood the needs of the consumer.

Another industry development during this time was the export of virgin Indian Remy hair, primarily to Brazil, but also to other countries in South America. Indians who had emigrated to Brazil in the 70s had discovered the tremendous demand for virgin hair extensions in Brazil and began importing Remy hair extensions from India. Brazil became the largest importer of Remy human hair from India, with Europe (Italy and Spain) second. As for recycled hair, China purchased 99 percent of all recycled hair from India, which it processed into recycled hair extensions and sold to the beauty supply stores in the U.S. and Europe. As the volume of imports from India into Brazil grew, so did word that Brazil was itself a source of high-quality Remy Brazilian hair extensions. Brazilian companies then began exporting the Indian hair as virgin Brazilian hair extensions to many parts of the world, including Europe, Africa, and the U.S. This was the advent of Brazilian hair extensions in the human hair industry as we know it today.

Hair Wigs and Extensions Market: Geography

The availability of high-quality wigs and toupees is encouraging consumers, especially males, across the world to try them as an alternative to surgical hair transplant procedures. The global hair extensions market is expanding at an incredible rate due to the growing trend among consumers to imitate celebrity hairstyles. North America dominates the global hair wigs and extensions market because adoption of wigs and hair extensions is highest in the U.S. Increased disposable incomes and high spending have increased the demand for high-end hair extensions. The U.S. accounted for the largest share of the global haircare market in 2018. The APAC region is the largest supplier of human hair for manufacturing wigs and extensions. The increasing demand for hair products in the APAC market is influencing manufacturers to establish production facilities in Asian countries such as China and India. The European hair wigs and extensions market is largely made up of human hair; the high per capita income of European consumers allows them to spend on hair extensions and wigs produced from human hair. The penetration and adoption of hair wigs and extensions is relatively higher in Africa than in the Middle East. As demand increases in Africa, the number of start-ups manufacturing these hair substitutes has also increased in the region.

Key Hair Vendor Analysis

Customer demand has steered the global hair wigs and extensions market, and manufacturers have been introducing new products according to prevailing fashion trends. However, this alone has not increased the demand for wigs and hair extensions among customers over time.
The market demand is primarily driven by new trends showcased by celebrities on social media. India is the largest supplier of human hair, and the business is highly concentrated on the supply side. As of now, the global hair wigs and extensions market is dominated by domestic players, including small-scale proprietorship firms. Several local players are expected to expand their presence worldwide during the forecast period, especially in APAC and Latin America, which are fast-developing economies. Besides, improving global economic conditions are likely to fuel the growth of the market, making it an attractive time for the launch of new products. An intensely competitive environment is expected to emerge during the forecast period because of the immensely growing popularity of hair wigs and extensions across the globe, driving demand in the global hair wigs and extensions market.

Key Hair Market Insights

- Provides market sizing and growth prospects of the hair wigs and extensions market for the forecast period 2019–2024.
- Provides comprehensive insights on the latest industry trends, market forecasts, and growth drivers in the hair wigs and extensions market.
- Includes a detailed analysis of market growth drivers, challenges, and investment opportunities.
- Delivers a complete overview of market segments and the regional outlook of the hair wigs and extensions market.
- Offers an exhaustive summary of the vendor landscape, competitive analysis, and key market strategies for gaining a competitive advantage in the hair wigs and extensions market.

OUR RAW HAIR

RAW CAMBODIAN HAIR

Cambodian hair extensions are among the best 100% Remy hair extensions on the market today. Malaysia Hair Imports offers raw virgin hair that is unprocessed, 100% human hair with all cuticles intact and aligned in the same direction. Our raw Cambodian hair bundles have a naturally textured, curved volume and a nice thickness. The hair textures are easy to maintain and therefore last a very long time with proper care and conditioners. Our extensions are free of tangling and matting, which means you can spend more time living your life without the hassles that come with many other hair extensions. You can style the hair extensions any way you like, because our hair is 100% Remy hair from a single donor. Your choice of hairstyle is not limited at all when you use these gorgeous hair extensions.

OUR CAMBODIAN HAIR EXTENSIONS

Our raw Cambodian hair extensions stand out for their coarseness, silkiness, and lightweight texture, which blend well with all hair types. The raw coarseness of this hair makes it perfect for blending with natural African American hair. Each of our bundles is collected from a single donor, which guarantees the longevity of our raw Cambodian hair by preventing tangling and matting. The cuticles are aligned and flow in one direction, and the hair is imported ethically, directly from Cambodian hair donors. Our textures are not manipulated in any way to create uniform patterns and textures; each bundle remains in its natural, unprocessed state.

RAW MALAYSIAN HAIR

Malaysian hair is, hands down, some of the most natural-looking weft hair on the market today. Our Malaysian hair bundles with closure have a full texture from the root to the tip of the extensions, with tapered ends that mimic natural hair growth.
We do not offer double-drawn or fallen hair, due to our strict quality control, hygiene, and manufacturing procedures. We only use natural, straight, unprocessed hair to manufacture our hair extensions. We guarantee that we make some of the best human hair extensions you will ever try!

OUR MALAYSIAN HAIR EXTENSIONS

Weft extensions can be used to make thick, full, and beautifully natural-looking sew-ins and wigs. The hair will always revert to its natural state and pattern when washed using natural shampoos and conditioners. We only use 100% raw human hair collected from single donors; the hair will therefore vary from bundle to bundle, but when styled, each raw hair weft will blend seamlessly with your natural hair and natural color. Our 100% human natural Malaysian hair extensions are not chemically treated in any way. We take pride in our ethically donated hair and hope all our clients do so as well. Unlike most large companies, we control our own hair quality.

RAW BURMESE HAIR

Our 100% raw Burmese hair extensions are among the rarest Remy hair extensions on the market today. Malaysia Hair Imports sources the best quality Burmese hair, which is raw, unprocessed, 100% human hair with all cuticles intact and aligned in the same direction. Our Burmese straight, wavy, and curly hair has a natural, gorgeous volume and thickness to it. The Burmese hair textures we offer are very easy to maintain and therefore last a very long time with proper care using natural shampoos and conditioners. Our human hair extensions are free of tangling and matting, which means you can spend more time living your life without the hassles that come with many hair extensions. You can style our Burmese hair extensions any way you like, because our hair is 100% Remy from one donor, single drawn. You can design and be very creative with these gorgeous hair extensions.

The Hair Retail Industry, 2012 to 2016:

Starting in 2011, the Remy hair extensions industry faced a massive shock in the supply chain: the Indian temples, still the main source of Remy hair, stopped selling hair for 18 months, from January 2011 to August 2012. Manufacturers began to look for creative ways to sell recycled hair as Remy hair extensions, and this was by far the most significant outcome of the event. A few Indian manufacturers discovered that if they simply stripped the cuticles off recycled hair and finished the hair in a silicone bath, the hair would, at the outset, look and feel like Remy hair or virgin hair. How was this possible? Recycled Indian hair is simply inverted Indian hair with many impurities, such as dirt, lice, nits, oil, color, henna, grey hair, and water. Once this hair is washed in acid, most of the impurities are removed (except for the grey hair, which must be removed separately), and the hair no longer has cuticles. This process is exactly what the Chinese used in their production of processed recycled hair for the beauty supply store market in the U.S. However, the Indian manufacturers realized that if they did not color the hair after the acid wash and only rinsed it in a silicone bath to help reduce tangling, then the hair would look and feel just like Remy virgin hair to the consumer. The uneducated and untrained consumer would be unable to tell the difference between the two products, at least at first. The recycled natural hair was lower-quality hair, characterized by dryness, tangling, and matting after a few washes (remember how the beauty supply store brands promoted their products?).
This was the advent of recycled natural hair falsely sold as Remy virgin hair: the fourth major development in the Remy hair extensions industry. From the manufacturing perspective, purchases of Remy hair from the temples in India fell 90 percent due to the exponential price increase, and demand for recycled hair increased immensely after Indian manufacturers found a substitute for pure virgin Remy hair; the temples, however, maintained their high prices, as they are not dependent on demand. Exports of this Indian recycled hair to Brazil grew rapidly (still labeled as Brazilian hair extensions), and so did the number of manufacturers in India making recycled natural hair. Because importers in the U.S., Brazil, and Europe unknowingly purchased recycled natural hair as virgin Remy hair, the proliferation of this hair was tremendous. Chinese manufacturers caught on and began manufacturing the same product at very low prices, mixing the hair with plastic and animal hairs to reduce the price further and be even more competitive. By 2016, manufacturers in China and India were latching on to any competitive advantage as the industry became dependent on cut-throat pricing with very little regard for quality. These companies would therefore promote this recycled natural hair from India as Brazilian hair, Mongolian hair, Peruvian hair, and so on. The number one hair extension product exported from India is Brazilian hair extensions.

From the retail perspective, in the U.S. and worldwide, an entire market was suddenly thrown open to entrepreneurs. Many new companies found a niche to exploit: selling so-called virgin hair, which is actually recycled natural hair, at very low prices via websites and retail stores. Unfortunately, most of these companies do not know what they are purchasing, the source of the hair, or how the hair has been processed. If the companies retailing this hair have no understanding, knowledge, or experience of what they are selling, how are they supposed to truly inform their clients? They cannot. That is why Malaysia Hair Imports stands apart from the competition. Malaysia Hair Imports only sells authentic, ethically sourced Remy virgin hair, and our staff is trained to educate our customers on how to take care of their virgin hair extensions.
Over the past decade, we have had an increasingly vocal debate about the responsibility of platforms and the future of freedom of expression. It may be worth pausing to reflect on the main lines of that debate, especially as it is likely to intensify following the various decisions taken on President Trump's online presence. It is not unlikely that in 2021 we will see the debate increasingly revolve around legislation, especially as the EU has proposed legal rules on what are called Very Large Platforms and Gatekeepers. It is also worth spending a few minutes thinking about what the basic problem here really is. Why do we have this discussion? Part of the answer is about how technology has come to affect freedom of expression in general, and what role freedom of expression actually plays in our society.

If we look at the functions of freedom of expression, we can quickly see that it has at least two very important ones. One is to help us jointly discover new ideas and opinions that can lead society forward, help us think critically about various problems, and draw our attention to lines of conflict that need to be resolved. This function, freedom of expression as a marketplace of ideas, can loosely be said to be the one that has gained the most ground in the United States, where the First Amendment and its jurisprudence largely assume that this marketplace of ideas must be safeguarded. The second function is more complex: freedom of expression is required for reason and debate, followed by decisions on how we should proceed as a political community. Freedom of expression is a prerequisite for our joint analysis and our common considerations. In addition to being a pure discovery mechanism, it also has an important deliberative function to play. This view of freedom of expression, as the regulatory framework that maintains and enables public discourse and the public sphere in a kind of Habermasian sense, is more European. When we say that Europeans have a more restrictive view of freedom of expression and look more negatively at hatred and threats, it is actually an observation of how Europe places a deeper emphasis on the deliberative function than on the pure discovery mechanism that freedom of expression also allows.

This is of course a simplification. There are other differences as well, but I still claim that it is an interesting simplification to think about. The reason is that we, if we accept this model, can ask how technology affects the two different functions of freedom of expression. The answer is interesting: technology strengthens the ability to express oneself, and thus the discovery mechanism, but with the abundance of opinions and ideas that follows, our common capacity for deliberation and political conversation in the public sphere is eroded.

Much of the new freedom of expression is oriented around what we today call platforms: large technical systems that enable individuals to interact in different ways. The platforms have greatly expanded freedom of expression, and today we enjoy the opportunity to express ourselves in a way that can reach the whole world. This was hailed early on as a "democratization" (and I, too, hoped it would be), but it was a conclusion that skipped a couple of stages. Above all, we assumed that there was a linear relationship between freedom of expression and the quality and content of a democracy.
In addition, it was assumed that freedom of expression was an institutionless social function that could be reduced to the opinion itself, without thinking about the institutions in which it was embedded. It was a surprisingly naive view, rooted mainly in a kind of historical institutional blindness: we did not see that earlier societies' freedom of expression existed within various very complex institutions. John Stuart Mill, who is generally considered to have formulated the sharpest and most interesting argument for total freedom of speech, in fact writes about freedom of the press, and therefore assumes that we are discussing what can be printed in a newspaper with a certain circulation, a readership with certain social properties, a certain education, and so on. The chapter clearly begins with a discussion of freedom of the press:

"THE TIME, it is to be hoped, is gone by, when any defence would be necessary of the "liberty of the press" as one of the securities against corrupt or tyrannical government. No argument, we may suppose, can now be needed, against permitting a legislature or an executive, not identified in interest with the people, to prescribe opinions to them, and determine what doctrines or what arguments they shall be allowed to hear. This aspect of the question, besides, has been so often and so triumphantly enforced by preceding writers, that it need not be specially insisted on in this place. Though the law of England, on the subject of the press, is as servile to this day as it was in the time of the Tudors, there is little danger of its being actually put in force against political discussion, except during some temporary panic, when fear of insurrection drives ministers and judges from their propriety;" – J.S. Mill

There is no argument in Mill for a general, unlimited, and institutionless freedom of expression. The problem of freedom of expression is also discussed in this context by Simone Weil, who, in her book on how to construct a just society, advocates the creation of a sphere where anything can be said:

"That is why it would be desirable to create an absolutely free reserve in the field of publication, but in such a way as for it to be understood that the works found therein did not pledge their authors in any way and contained no direct advice for readers. There it would be possible to find, set out in their full force, all the arguments in favour of bad causes. It would be an excellent and salutary thing for them to be so displayed. Anybody could there sing the praises of what he most condemns. It would be publicly recognized that the object of such works was not to define their authors' attitudes vis-à-vis the problems of life, but to contribute, by preliminary researches, towards a complete and correct tabulation of data concerning each problem. The law would see to it that their publication did not involve any risk of whatever kind for the author." – Simone Weil

But she does not stop there. She realizes, with World War II an open wound in French society, that her position is impossible if it is not supplemented with responsibility:

"On the other hand, publications destined to influence what is called opinion, that is to say, in effect, the conduct of life, constitute acts and ought to be subjected to the same restrictions as are all acts.
In other words, they should not cause unlawful harm of any kind to any human being, and above all, should never contain any denial, explicit or implicit, of the eternal obligations towards the human being, once these obligations have been solemnly recognized by law." – Simone Weil

She then notes that this position is in fact impossible to formulate in legal terms, even though it is quite clear to her, and that the problem rather lies in institutionally drawing the line between the opinions we really mean and should take responsibility for and those we are merely trying out:

"The distinction between the two fields, the one which is outside action and the one which forms part of action, is impossible to express on paper in juridical terminology. But that doesn't prevent it from being a perfectly clear one. The separate existence of these two fields is not difficult to establish in fact, if only the will to do so is sufficiently strong. It is obvious, for example, that the entire daily and weekly press comes within the second field; reviews also, for they all constitute, individually, a focus of radiation in regard to a particular way of thinking; only those which were to renounce this function would be able to lay claim to total liberty." – Simone Weil

Weil's position here seems to open up the obvious argument that the web could be the place where we mean nothing, while the press retains the part of the public sphere where we stand behind and take responsibility for what we write: a kind of dividing line in society between utterance and opinion formation, as Weil suggests. However, we do not have such an institutional order, and it is difficult to imagine how it would even be possible. The public sphere is a kind of common property in society that we cannot divide arbitrarily into different zones.

Freedom of expression, however, must always be situated in an institutional context. Freedom of expression without institutions, in a time when information and communication technology is advancing rapidly, reaches a point where we feel that something must be done. But this something is not well defined, and when it comes to freedom of expression, it falls into two different, conflicting positions. The first is that the platforms are essential in the modern information society and have a moral responsibility to ensure that the content made available through them is not only legal, but also morally justifiable; the platforms must work to remove content that is offensive, harmful, hateful, or otherwise contrary to various moral principles. The second is that the platforms are essential in the modern information society and therefore must not place any moral judgment on the content made available through them at all, but instead work under a contractual obligation: the only legitimate basis for taking down content and making it unavailable is that it is illegal, and then one must do it urgently. Within the EU, there are representatives of both these views. The European Commission represents the first, and a recent bill in Poland represents the second.

The question then naturally becomes which of these positions should be given priority. When it comes to illegal content, there is no real debate: everyone agrees that such content should be removed when a platform becomes aware of it. What the discussion is about there is instead how this knowledge should be considered to have arisen.
For example, should there be a general obligation for a platform to monitor everything made available, by pre-filtering, or should the platform act only when someone notifies it of illegal content? As in all debates, there are nuances: an active operation to detect child pornography can, for example, be combined with a "notice and take-down" principle for pirated material. But the basic principle is clear: illegal content must be removed, promptly. Another nuance that is not unimportant is who decides that content is illegal. Opinions vary from requiring a court decision to letting the platforms themselves make this assessment, and that question is about balancing legal certainty, efficiency, and urgency.

But back to our question about the platforms' responsibilities: which of the two positions should apply? Should platforms only remove illegal content, or do they, in addition to this indisputable responsibility, also have a moral responsibility for the content on their own platform? There is a trap here, and it is to assume that this is a logically necessary dichotomy, that it must be one or the other. Instead, it is entirely possible to imagine that we limit moral responsibility to a certain number of issues and allow a number of different principles to be developed within the framework of this responsibility, through repeated and in-depth application of these principles.

Many have pointed to the journalistic model as a way forward, where newspapers make a moral assessment alongside the purely legal assessment. By virtue of the slowly accumulating body of assessments, one can then argue that good practice emerges and can guide the entire industry. It is a model that took the press a long time to arrive at, and violations of this custom are committed at regular intervals. The early press in the United States was anything but responsible, as is well described in the book Infamous Scribblers: The Founding Fathers and the Rowdy Beginnings of American Journalism by Eric Burns, who concludes by saying that America has turned away from its founding fathers in an important respect:

"But we have not adopted their style of journalism. We do not, in most of our print and broadcast news sources, impugn character as they did. We do not, except in extraordinary cases, use the kind of language they did. We do not, except on well-publicized and well-punished occasions, make up the news to suit our ideology. It is a rare example of our turning our backs on the Founding Fathers, finding them unworthy, rejecting their legacy. We are to be commended." – Eric Burns

The press developed and grew into its role and built its institutions over time, over a long, long time. Is not the same thing possible for the platforms? Certainly! And the frameworks are already being built in the European Commission's recently proposed law package for platforms: the Digital Services Act and the Digital Markets Act. These legal packages, especially the Services Act, lay down rules on how the platforms should handle the content made available through them, and set limits on moral responsibility by requiring opportunities for appeal, review, and transparency in the process. When this proposal becomes a reality, the major platforms will be more regulated than today's press, while continuing to develop their own departments to moderate content.
They will have more responsibility for how they make decisions, with clear audit rules and openings for outside assessors to evaluate how they handle their own rules. Their practice will be more accessible and transparent than the journalistic assessments made in the major newspapers. The platforms will probably reach a point where they are better suited for public discourse than today's media, and will probably reach this point faster than the press reached its current ethical position and quality. An open question is whether they become a lesser threat to the economy of the press in connection with that development, or whether the calls for responsibility and order that are now being raised actually pave the way for an institutionally robust public sphere that rests on the foundations of the platforms.

Yes, but. How do we think about Trump and terror? About hatred and threats? About deplatforming? All of these questions require institutional answers. The big problem is that we have focused on individual pieces of content instead of the context and institutional reality in which the platforms operate. And here it is worth saying a word about the idea that the platforms are only possible on the basis of business models that cannot survive this institutional maturity, that they are based on us hating and threatening and being sucked into a general public anger that drags us down into polarization and violence: a thought that could just as easily have been entertained at the beginning of the press. In fact, the business model of the media became better because it was embedded in ethics and an institutional framework; it became stronger and more sustainable over time. Sure, the press still lives on sensationalism and "if it bleeds, it leads", and the click economy sometimes drives modern newspapers so hard that they seem to regress to the early stages of the press's evolution, but overall the press only gained by being provided with the regulations under which it currently lives. The long arc of history bends towards institutions, regulations, and stability for the platforms, and thus also towards a slow reconstruction of the public sphere.

But there is an outstanding problem, and that is how we handle public discourse when it grows too large. How big can the public conversation be? How many people can make decisions together? If we look at parliamentary sizes, we see that they are very limited: on the order of 100-1000 people deciding for an entire country. How many people can have a meaningful public conversation? Do we have any reason to believe that there are more than that? Maybe we are limited by the Dunbar number (100-230 social contacts is what our brain size suggests we can handle, if we compare with other primates), or maybe we can be many more than that, but it does not seem obvious to assume that democracy scales especially well. Representative democracy emerged for a reason, but it is not currently matched by a representative public sphere. Instead, today we stand in a square whose end we cannot see, shouting with others. The natural consequence is that we unite according to other, more basic principles. We look to others who think like us, because historically that is how we handled knowledge. As the author Julia Galef notes, the brain has not evolved to find out what is true; it has evolved to find out what is new, as a scout, or to find out what unites us, as a soldier.
When the square expands towards infinity and there are no institutions for us to lean on, we stagger back into the mechanisms of evolution. What we really need are new technologies and new institutions that not only distribute information, but also allow us to discuss the world. It's not simple, but it's hardly impossible.

Do platforms have a responsibility for what they make available? Yes, absolutely, both legally and morally. Moral responsibility is increasingly captured in customs that will emerge in the coming decades, as fast as or faster than the customs of the press emerged. What does this mean in the long run? That platforms may slowly turn into better bearers of the public sphere than the media that exist today; more probably, that they merge with them and that powerful hybrids develop. What are the remaining problems? We must try to understand the biological and social constraints that govern how the public sphere is constructed and maintained: how big it can be, and how it can accommodate diversity and differences of opinion. Is there reason to be optimistic? Absolutely. History gives us reason to believe that we can develop institutions to handle information and communication, and that the long line of development leads to a society where we learn faster and reach ever better solutions to the eternal political problem of how we live together.

This article was originally published on Unpredictable Patterns.
Today's history lesson is about the Seattle-based labor activists of the 1970s--Gene Viernes and Silme Domingo. Gene Viernes was born in Washington to a family that had immigrated to the U.S. from the Philippines. After joining Local 37 of the ILWU, Gene became close with Silme Domingo, another son of Filipino immigrants, who had grown up in Texas and gone to the University of Washington. With socially active parents, Domingo grew up involved in efforts to advocate for social and civil rights. Both men became officers in the Cannery Workers Local 37. On June 1, 1981, both of them were brutally murdered.

These two ambitious young activists had been very vocal in their stance against the Alaskan canneries, whom they accused of discrimination and prejudice. Furthermore, they protested the unfair labor conditions and wages the workers faced, as the union president, Tony Baruso, was greedy and indifferent to the conditions the seafood cannery workers endured. Because their union had a corrupt leader, Viernes and Domingo worked hard to advocate for better working conditions and labor rights. They also investigated discrimination and segregation against Filipino-American workers at the canneries.

At first, it was thought that the attacks were retribution for the reform efforts the two activists were making. Viernes died at the crime scene, but Domingo survived long enough to identify the attackers as members of the Tulisan gang, which ran a large gambling scheme in the Cannery Workers Union and with which Tony Baruso was accused of being affiliated. However, after the two men passed away, it became obvious that the plan had been orchestrated by someone much more powerful. Someone who was pulling the strings. Officials found that it was the Marcos dictatorship that had something to gain from the murders, and a few years later, the Committee for Justice for Domingo and Viernes sued Ferdinand and Imelda Marcos in federal court. For those of you who don't know, Marcos was the President of the Philippines at the time, ruling as a dictator under martial law from 1972 to 1981. During this time, he kept a strong hold on the media and political socialization, and clamped down on any opposition by all means necessary, including violence and oppression. It was said that Viernes and Domingo both opposed him, and thus the matter became a political one.

The stands that Gene Viernes and Silme Domingo took were important not only for labor reform but also for exposing discrimination against Filipino-Americans in the workplace. It's important to remember the courageous actions of these young activists, and to take a stand for what you believe in.

Author: Carina Sun

As the child of immigrants, I am extremely thankful for the advancements the United States has made to make this possible. However, these advancements weren't without a fight. In the 1800s, the U.S. was gripped by "yellow peril" fears. As more and more Chinese immigrants entered the U.S. at the prospect of striking a fortune on the West Coast, the people of the United States began to worry about the economy and job stability. In 1882, President Arthur signed the Chinese Exclusion Act, forbidding Chinese laborers from immigrating to the United States. It also excluded all Chinese nationals from applying for U.S. citizenship. In many ways, this Act was tremendously flawed.
It not only perpetuated racism towards the Chinese-Americans who were already in America, it also proved a huge problem for the children of immigrants, like Wong Kim Ark. Born in San Francisco in 1873, Wong Kim Ark was by all definitions a U.S. citizen born on American soil (under the citizenship clause of the 14th Amendment). Both his parents were Chinese immigrants who were not U.S. citizens, and they eventually moved back to China. Wong continued to live in California and would often travel between the two countries to visit his family while building a life in America. His first reentry was smooth, but upon his second reentry a few years later, he was held at the border. The United States proclaimed that because Wong was the child of Chinese citizens, he was a Chinese citizen too, no matter where he was born. Furious, Wong gathered legal support and fought his way to the U.S. Supreme Court to defend his case. The question was whether Wong's birth in the U.S. was enough to make him a citizen of the United States, and while government lawyers claimed he was a Chinese national, the Supreme Court disagreed by a 6-2 vote and held that a child born in the United States is a citizen of the U.S. As a result, Wong's triumphant case became a major legal precedent for future cases involving immigrants and citizenship eligibility. It protects the rights of groups who may be excluded and targeted over questions of citizenship. In an interview with NPR, Wong's granddaughter revealed that Wong still faced discrimination in the U.S. even after he won the case, prompting him to return to China, where he passed away. Wong's fight for the civil rights of immigrants and for birthright citizenship should not be forgotten, especially as our current administration tightens controls on immigration. The United States was built on freedom and opportunity for all; what would we be to disregard these values?

Author: Carina Sun

Like many of us, Patsy Mink was born to an ordinary family. She was a third-generation Japanese-American raised on the island of Maui in Hawaii. However, her early life wasn't easy at all. She entered high school right before the attack on Pearl Harbor, which made her high school years a living hell as she struggled with xenophobia and alienation from her community. Despite these hardships, she persevered and graduated as valedictorian of her class.

Her start in activism was prompted by her experience at the University of Nebraska, where she saw segregation between people of color and whites. This infuriated her, and she organized a coalition to protest and lobby to end the long-standing segregation policies. A few months later, Mink moved back to Hawaii to treat an illness and pursue a medical degree. Unfortunately, she was not accepted to medical school because of her gender and race. She then switched her attention to law, applying to law schools on the mainland. She enrolled in the University of Chicago Law School and finished with a Juris Doctor degree. Even then, she found it difficult to find a job as a married Japanese-American attorney; many law firms were hesitant to hire her even after she passed the bar exam in 1953. To support herself, she moved back to Hawaii and opened her own law firm, becoming the first Japanese-American woman in Hawaii to practice law. From there, she was on the track to politics. Mink quickly became involved in local politics and in 1956 became the first Asian-American woman elected to the Hawaii House. From there, she ran for U.S.
Congress when Hawaii attained statehood, but was defeated. A few years later, she ran again for a seat in the U.S. House of Representatives and won. She was the first woman of color ever elected to the House. And her time there did not disappoint. Mink poured her energy into issues that had always been important to her, such as education and gender equality. In 1970, she became the first Democratic woman to deliver a State of the Union response. In 1971, she became the first Asian-American woman to run for president. Although she did not win, her contributions in advocacy and her focus on social issues continue to play a prominent role in U.S. politics. Patsy Mink worked her entire life to eradicate the gender and racial discrimination she had experienced in her early years. Although she faced countless obstacles, Patsy Takemoto Mink never stopped until she was able to make her voice heard and make a change in the world. Even then, she stayed true to her morals and what she believed in. Author: Carina Sun

Continuously advocating for revolution until her passing, Grace Lee Boggs was a feminist, climate advocate, human rights activist, and fighter for labor and civil rights. Her story demonstrates the resiliency of an Asian American female activist fighting through economic recessions, racial injustices, and social change. Grace Lee Boggs was born in 1915 to two Chinese immigrants. She grew up in New York and, by 1940, had earned a doctorate in philosophy. Living through the Great Depression and its aftermath, Boggs found herself in an environment in need of change. Turned away from numerous jobs as an Asian American, she was forced to live in a rat-infested home. Fighting for housing improvements was her entry point into the world of activism. She began marching and fighting for women and people of color while discovering her own political stance. Drawn to different branches of socialist organizations, she became a part of the Johnson-Forest Tendency. As a Johnsonite, she held the belief that power and liberation should lie with the working class. To join the growing Johnsonite community in Detroit, Boggs moved there in 1953. In Detroit, she found not only intellectuals who shared her beliefs but also her partner in activism and future husband, James Boggs, a black autoworker and fellow Johnsonite. Together, they fought for black communities' rights and continued advocating for women and other marginalized groups. At this point the two slowly departed from socialism, but their fight for change did not stop. Grace Lee Boggs joined the Great Walk to Freedom in Detroit, hosted Malcolm X, and attended Save Our Sons and Daughters meetings. She continued working in her Detroit community and, with her husband, created the program Detroit Summer to bring together a community facing an influx of crime. Inspired by Martin Luther King Jr., Detroit Summer worked to empower youth. Through the organization, the people of Detroit created community gardens, repurposed and renovated the city, and built leaders who carried the vision forward in their own lives. Even after her husband's passing in 1993, Grace Lee Boggs did not slow her fight. She continued to be involved in the push for change in Detroit. She wrote for newspapers, talked to civic groups and college students, and even wrote her own books.
Later, she turned the second floor of her home into the Boggs Center and founded the James and Grace Lee Boggs School, where the curriculum focused on empowering Detroit youth by teaching community-oriented skills. In 2015, Grace Lee Boggs passed away at age 100. In the words of past President Barack Obama, “As the child of Chinese immigrants and as a woman, Grace learned early on that the world needed changing, and she overcame barriers to do just that. She understood the power of community organizing at its core – the importance of bringing about change and getting people involved to shape their own destiny.” Grace Lee Boggs left a legacy as a grassroots activist, philosopher, and author. She spent her life fighting for Asian American communities, black communities, women, and the working class. Let us take this Asian American Heritage Month to recognize the remarkable work Grace Lee Boggs did for marginalized communities, Detroit, and America as a whole. Author: Audrey Zhou

Remembering "The Forgotten"--the Chinese migrants who built America's first Transcontinental railroad

Over 20,000 Chinese migrants underwent back-breaking labor for over six years, from 1863 to 1869, building a railroad that would connect the existing U.S. rail network in Iowa to the Pacific Coast. The project started out with a group of just 21 Chinese migrant workers, as Chinese laborers were originally deemed "too weak" to do the job. Soon after construction started, however, the demand for labor increased, and white workers didn't want to do such hard manual labor. And so more Chinese immigrants were hired, desperate for jobs as they arrived in the U.S. By 1865, over 90% of the workers were Chinese migrants. They became a crucial part of the construction of the railroad, and yet they are forgotten by history books today. At the peak of construction, over 12,000 Chinese workers were employed on the railroad, paid an average of just $26 a month for six-day weeks. While their salary did eventually increase to $31-$35 per month, it fell short of the roughly $40 the white workers were receiving. Furthermore, they were forced to toil under more dangerous conditions, causing between 50 and 1,200 Chinese migrant deaths (no records were kept by Central Pacific, so the exact number is unclear). Finally, on June 25, 1867, they had had enough. Over 5,000 Chinese workers put down their tools, went to their camp, and sat. Management was amazed at the sheer number of strikers but didn't want to give in to the Chinese. So Charles Crocker, superintendent of the Central Pacific railroad, came up with an idea: he would cut off all food supplies and starve them until the strike let up. And it worked. With nothing changed, the strike was written out of history. And as the number of Chinese immigrants in the West increased, so too did the levels of anti-Chinese sentiment. Just a few years later, the infamous Chinese Exclusion Act was passed, preventing Chinese laborers from immigrating to the U.S. at all. When the railroad was finished, the famous photograph depicting the connection of East and West notably lacked Chinese workers. News reports, too, failed to acknowledge the Chinese workers who had played such a crucial role in the construction of the railroad.
151 years after the first transcontinental railroad was completed, it's as important as ever to remember the Chinese migrant workers whose back-breaking labor moved the United States forward. As May brings the peak of Covid-19, it's important to fight against the influx of xenophobic hate crimes against the Asian-American community. Few in the U.S. know of the toils of the Chinese migrants during the construction of the railroad, or of the importance of Asian immigrants during wartime. That's what we aim to correct. Author: Carina Sun

Ayub Ommaya (1930-2008)

In celebration of Asian Heritage Month, I thought it would be fitting to share the story of the "Singing Neurosurgeon": one man who not only revolutionized studies on brain injury in the United States, but shaped the future of brain cancer treatment worldwide. I wish to share the story of Dr. Ayub Ommaya, a Pakistani-American neurosurgeon who is best known for his invention of the Ommaya reservoir. He studied medicine in Lahore, Pakistan before securing a scholarship to Balliol College at Oxford University. Dr. Ommaya then immigrated to the United States and became a U.S. citizen in 1967. From his days at Oxford, Dr. Ommaya developed an interest in studying traumatic brain injury, and his work ultimately led to the creation of the National Center for Injury Prevention and Control. His groundbreaking Ommaya reservoir, a catheter system that administers chemotherapy directly to tumor sites in the brain, is used to this day. Prior to his work, there was no effective way to deliver such treatment. In fact, the reservoir was also the prototype for all modern medical ports, and it is just one of Dr. Ommaya's contributions to the scientific world among the more than 150 articles, chapters, and books he published. Trained in opera, Dr. Ommaya also became fondly known for singing (much to the joy of his patients) before and after surgery. I can only imagine the comfort and delight he brought to what otherwise must have been a bleak and sterile environment. Dr. Ommaya's story is a testament to the fact that one can pursue a variety of interests and mustn't limit oneself to any one thing--he even excelled in debate and rowing (oh, and he was also a champion boxer and swimmer). This month, I hope we can all remember the "Singing Neurosurgeon," just one of the many Asian-Americans who have dedicated their lives to progress in their fields in the hope of a better future. Author: Sara Rizwan

In Honor of Asian Pacific American Heritage Month, we are bringing you the stories of inspirational Asian Americans from history:
5/30/20 - The Exceptional Example Ronald Takaki Set
5/27/20 - The Incredible Legacy of Kalpana Chawla
5/26/20 - When Marrying a Non-American Meant Losing Your Citizenship
5/25/20 - Honoring the 442nd Infantry Regiment
5/24/20 - A Glimpse at Asian-Americans in Hollywood -- Miyoshi Umeki
5/22/20 - The Oriental Schools of San Francisco
5/21/20 - Equality For All Colors - Yick Wo v. Hopkins
5/20/20 - An End To Police Brutality: Peter Yew's Stand
5/19/20 - Finding His Form: Linsanity in 2012
5/18/20 - Internment and Injustice: Fred T. Korematsu
5/17/20 - The Courageous Stand of Gene Viernes and Silme Domingo
5/16/20 - The Unbreakable Spirit of Wong Kim Ark
5/15/20 - The Admirable Perseverance of Patsy Takemoto Mink
5/13/20 - The Lasting Legacy of Grace Lee Boggs
5/12/20 - Remembering "The Forgotten" -- The Chinese migrants who built America's first Transcontinental railroad
5/11/20 - The Singing Neurosurgeon: Dr. Ayub Ommaya
Zambia is one of the world’s greatest wildlife sanctuaries. An incredible myriad of species have made their home across the country, from the coveted Big Five to tiny rodents and reptiles. At least 237 mammal species and over 700 bird species have been recorded in Zambia. Here you can spot all the safari classics, including elephants, lions, leopards, hippos and crocodiles, but you’ll also have the chance to find some of Africa’s rarest animals. There are fascinating creatures found only in Zambia, and thanks to the country’s fantastic conservation efforts, you can observe some of the world’s endangered species. From endemic lechwe, zebra and giraffe to endangered rhinos and wild dogs, a safari in Zambia takes you deeper into the magical animal kingdom.

The Thornicroft’s giraffe, also known as the Rhodesian giraffe, is a beautiful subspecies of giraffe found only in the Luangwa Valley of Zambia. It can be identified by its slightly shorter height and unique coat patterns. This animal is ecologically unique as it is geographically isolated from other giraffe, although it has recently been discovered that it's genetically similar to the Masai giraffe found in Kenya and Tanzania. With an estimated 500 Thornicroft’s giraffes living in the wild and no captive populations, the giraffe is a vulnerable species, and spotting these rare creatures is a special experience. Similar to human fingerprints, all giraffes have different markings on their coats, used to distinguish between species and individual giraffes.

African wild dog
Also known as painted wolves, the African wild dog is an elusive and endangered species found in only six countries in Africa. Zambia has growing populations in South Luangwa and Kafue National Parks, and there’s a small captive breeding program in Lusaka. The best season to spot these incredible animals is the ‘green season’ from November to May; however, sightings are still rare. Over the last hundred years, African wild dog populations have declined from half a million to an estimated 6,000, largely due to human hunting. African wild dogs are neither wolves nor dogs, but a separate evolutionary canine species. They’re also one of Africa’s best hunters - with 80% of their hunts ending in a kill, they’re even better than lions, leopards and cheetahs.

Found only in the Luangwa Valley of Zambia, the Cookson’s wildebeest is a subspecies of the blue wildebeest. They’re more commonly found in North Luangwa National Park, although you can still spot them in South Luangwa National Park. Their population is estimated to be around 5,000 to 10,000. During the dry season, the wildebeest make long journeys to the major waterholes to drink, and you can see large herds moving together through the mopane woodland in search of water. Cookson’s wildebeest are distinguishable from other species by their lighter colour and the clear markings on their neck and sides.

Zambia had the third largest black rhino population in Africa in the 1960s, but just two decades later, rampant poaching had wiped out these beautiful creatures. They were officially declared extinct in 1998, but their story didn’t end there. Conservationists reintroduced 25 black rhinos to North Luangwa in four phases between 2003 and 2010, and the population is now considered viable. The efforts included community education programs, ranger training and security operations to ensure the longevity of the black rhino population.
While you may not spot a black rhino in the North Luangwa National Park, a visit to this region is a great opportunity to learn about such an incredible conservation story. The passionate team at Mwaleshi Camp were involved in the anti-poaching and rhino reintroduction efforts and provide guests with a wonderful insight into the conservation story. Mwaleshi is a true bushcamp, set in the remote wilderness with just four seasonal reed-and-thatch chalets. They focus solely on authentic walking safaris throughout the park. The camp is located on the edge of the Mwaleshi river, near the area where the black rhino have been reintroduced, although the team have spotted the shy rhinos just a couple of times in the 2018 season. Black rhinos are solitary creatures and known to be very aggressive; however, they share a mutualistic relationship with oxpecker birds. The birds feast on the ticks and flies that irritate the rhino’s skin, and they also screech loudly to warn the poor-sighted rhino when danger approaches.

The white rhino has also been poached to near extinction for its magnificent horn. Once found all over southern Africa in two subspecies, the Southern white rhino population is now threatened yet still viable, while the Northern white rhino is extinct in the wild. You can see this gentle giant in the Mosi-Oa-Tunya National Park, near the Victoria Falls in Zambia. Four white rhinos were successfully relocated from South Africa to the park in 2008, and there are now eleven white rhinos roaming freely through the park. The rhinos are guarded round-the-clock by a passionate team of rangers. You can take a walking safari through the park to learn about their stories and spot these beautiful creatures. You can tell the difference between a black and a white rhino by reading their lips: the black rhino has a pointed or hooked upper lip, best for plucking foliage from trees and bushes, while the white rhino has square lips suited to its grass-grazing habits.

The graceful cheetah is one of the most difficult big cat species to spot. A shy creature, cheetahs roam all over Africa, with populations scattered across south and east Africa. Their numbers have declined alarmingly due to loss of habitat and prey (human developments have destroyed many open grasslands), illegal poaching and trading, and a high mortality rate - only 5% of cubs survive to adulthood. The Kafue National Park in Zambia is one of the best places to spot the cheetah, particularly in the northern Busanga Plains. Musekese Camp is an incredible bushcamp located in this truly remote corner of the Kafue. The passionate owners, Phil and Tyrone, ventured on foot around the vast national park to discover a completely untouched area they now call ‘Eden’. Here, they set up an authentic bushcamp, with just four reed-and-thatch chalets and a strong focus on sustainability and conservation. They also provide support and guiding for wildlife and documentary filmmakers and have worked with high-profile crews including the BBC Earth teams. The camp’s expert guides can take you on game drives and walking safaris to spot cheetah, popular big game, and a number of other rare and unusual species including pangolin, honey badger, mongoose, bushpig, side-striped jackal and the endangered wild dog. The cheetah is the fastest land animal in the world, reaching speeds of around 113 km per hour in just three seconds. They’re also the only big cat that cannot roar, although they do purr loudly.
Zambia has two endemic subspecies of the lechwe antelope - the Kafue lechwe and the Black lechwe. Both are vulnerable due to poaching and are difficult to spot, as they’re only found in specific parts of Zambia. Distinguishable by their larger stature and bigger horns, the Kafue lechwe is only found in the Kafue Flats, with the largest herds in Lochinvar and Blue Lagoon National Parks. They have a light brown coat with white undersides, with dark markings on their legs and shoulders. Found only in the Bangweulu plains of northern Zambia, the Black lechwe has been reintroduced to the Nashinga Swamps near Chinsali and to Kasanka and Lusaka National Parks in an effort to reverse the threat of extinction. They are black and tan in colour and have white undersides. The lechwe is a water-loving antelope. While they’re slower on land, they’re great swimmers and can move quickly in shallow waters.

Zambia is home to a number of vulnerable or endangered antelope species. The country’s fantastic conservation efforts have allowed antelope populations to remain stable, and it’s a great place to spot the rarer species. The roan antelope is common in the Luangwa Valley, although rare in other parks around Zambia. They are distinguishable by their large stature, light brown coat and ringed horns. The puku antelope, while scarce around Africa, is found in abundance in the Luangwa and Zambezi Valleys. You’ll find these furry orange creatures in thirty-strong herds along the floodplains near the Zambezi river. Tougher to spot, the oribi antelope is found mainly in the Kafue and Lochinvar National Parks and the Bangweulu Swamps, and sometimes in the Luangwa Valley. They have black patches below their large oval ears and are renowned for jumping into the air with stiff, straight legs when alarmed. The Lichtenstein’s hartebeest is also difficult to find, as they roam only through northern Zambia in small numbers. They are light fawn in colour and like to hide in the miombo woodlands, although they are sometimes drawn to the floodplains for grass at the end of the dry season. Herd antelopes have glands in their hooves that leave a scent, recording their movements in the earth. If an antelope gets separated, it can find its way back to the herd using this scent.

Native to eastern Zambia, Crawshay’s zebra is found in Zambia’s South Luangwa National Park. They can also be found in some neighbouring areas of Tanzania, Mozambique and Malawi; however, the Luangwa Valley remains the best place to see this rare animal. A subspecies of the plains zebra, they differ from other zebras in their narrower, denser black stripes that go all the way under the belly and down to their hooves. They also don’t have any light-brown shadow stripes, and they have different teeth. Baby zebras are born with brown stripes which gradually turn black as they grow. There are a number of theories about why zebras evolved with such a unique stripy coat, but it’s most likely for camouflage purposes. It is believed the stripes can distort the zebra’s distance from a predator, or create an optical illusion to confuse their stalkers.

Zambia is home to many endemic and endangered birds. The Shoebill stork, one of the rarest birds in Africa, is found only in northern Zambia and just a few other places around the continent. With an estimated population of less than 5,000 in the wild, they are a critically endangered species.
Distinguishable by their enormous goofy beaks, these birds are best sighted in the Bangweulu swamps, one of the few places to witness them. Other endangered bird species that can be spotted in Zambia include the Egyptian, Cape and Lappet-faced vultures, Ground hornbill, Bateleur eagle, Wattled crane, Grey Crowned-crane and the prized African skimmer. Zambia also has three endemic bird species: the Black-cheeked lovebird, White-chested tinkerbird and Chaplin’s barbet. A birding safari in Zambia around the wet ‘green’ season gives bird watchers the best chance of spotting these incredible and rare species. The Shoebill stork is renowned for its comically oversized beak (it can reach up to 24 cm in length and 20 cm in width); however, the beak is not so funny when used as a fishing weapon. It is razor sharp and can decapitate large fish and even baby crocodiles. If you’d like to go on safari in Zambia to spot these unique animals, get in touch with our Luxury Travel Specialists to chat about your ideas, or fill out our enquiry form with details of your dream safari holiday.
A Boiling Water Reactor (BWR) circulates water past nuclear fuel in a large pressure vessel at 6.9 MPa. The nuclear fuel is configured as cylindrical pellets contained within long metallic tubes, referred to as fuel rods. The fuel rods are combined in various arrangements within flow channels to create fuel assemblies. Several hundred such fuel assemblies are connected in parallel between inlet and outlet plena. These parallel fuel assemblies are referred to as the core of a BWR. During passage of water along the nuclear fuel, significant boiling and vapor formation occur — typically producing 10% to 20% vapor flow that exits into the outlet plenum. The resultant vapor is separated and directed to steam turbines for electrical generation. The remaining liquid is circulated back, along with condensate liquid returning from the turbines, to the inlet plenum. Key components of a jet-pump BWR steam supply system are illustrated in Figure 1. Normal operation of a BWR depends upon accurate prediction of several key thermal hydraulic parameters in the fuel assemblies. In particular, vapor void fractions, transition boiling limits and fuel assembly pressure drops must be accurately predicted to support BWR design and operation. Prediction methods must address the range of two-phase flow regimes from subcooled liquid at the fuel assembly inlet to annular flow at the outlet. (See Forced Convective Boiling.) The surrounding channels on BWR fuel assemblies effectively isolate flows within each fuel assembly. This allows the necessary two-phase flow characteristics to be evaluated by reproducing realistic thermal hydraulic conditions in isolated fuel assembly simulation tests. Such simulations require large experimental facilities that can reproduce the power and flows of full-scale BWR fuel assemblies, using electrically heated fuel rod simulators. Data from such full-scale experiments are used to provide very accurate empirical relationships for the two-phase flow parameters required for BWR design analyses. The vapor Void Fraction is the fraction of the flow area that is occupied by vapor. This is directly related to the average velocity of the vapor. In a BWR, the circulating water serves both as cooling medium and neutronic moderator for the nuclear fuel. This produces close coupling between power generation and two-phase flow conditions in the fuel assemblies. The primary coupling parameter is the density of the steam-water mixture surrounding the nuclear fuel. Predicting the two-phase density requires accurate prediction of the void fraction. Fuel assembly Pressure Drop is one of the most important parameters determining conditions in the BWR core. Fuel assembly power and flows can differ significantly due to mechanical and nuclear design differences, as well as locations within the core. To assure appropriate flow distribution to all of the fuel assemblies, it is important to have accurate predictions of pressure drops over the entire range of fuel assembly operating conditions. (See Pressure Drop, Two-Phase.) Boiling transition refers to conditions where typical boiling heat transfer starts to deteriorate. This has also been referred to as critical heat flux (see Burnout). The boiling transition limit for BWR operation is usually associated with deterioration of the liquid film flowing on a fuel rod under annular flow conditions. The fuel rods of a BWR fuel assembly utilize zirconium alloy, which minimizes parasitic capture of neutrons, for the outer metallic tubes. 
This material has good corrosion resistance for temperatures existing under typical boiling conditions. However, if boiling transition conditions are exceeded, the deteriorated heat transfer causes higher temperatures in the fuel. If such conditions persist for an extended period of time, the increased corrosion rate of the metallic tubes can cause fuel failures. Therefore, accurate predictions of boiling transition limits are very important for reliable BWR operation. The elimination of crossflows between BWR fuel assemblies allows for both realistic experimental simulations of fuel assembly thermal hydraulics and simplified analyses methods. BWR analyses methods typically utilize either one-dimensional representations of fuel assemblies or subchannel representations. One-dimensional analyses utilize correlations based on cross-sectional averaged flow quantities, with empirical parameters to account for radial variations where necessary. Nuclear coupling based upon one-dimensional values is usually sufficiently accurate for typical BWR fuel assemblies. Representative solution techniques and correlations are discussed by Collier (1980) and Lahey (1993). Subchannel analyses incorporate more detailed mechanistic descriptions of two-phase flow phenomena to predict crossflows and local parameters within fuel assemblies. Subchannel methods are summarized by Shiralkar (1992). One-dimensional void fraction predictions are often based on the Drift Flux concept of Zuber (1965). Average vapor velocity is characterized by the average of local vapor drift velocities and the volumetric flux adjusted by a distribution parameter. The distribution parameter incorporates radial phase and velocity distribution effects. These parameters are determined from experimental void fraction data. While the original drift flux work anticipated discrete values of these parameters for various flow regimes, considerable success has been achieved by Dix (1971) and Chexal (1991) with continuous correlation relations which implicitly reflect the effects of flow regimes. These correlations are usually assumed to be insensitive to power distribution changes across the fuel assemblies. This insensitivity has been confirmed by recent data from Yagi (1992), which demonstrated that average void fraction results were unchanged even when large variations were imposed in local void fractions within a typical BWR fuel assembly.
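To make the drift-flux formulation concrete, the short Python sketch below evaluates a cross-section-averaged void fraction from mass flux and flow quality. The constant values chosen for the distribution parameter and drift velocity, and the approximate water/steam densities, are illustrative assumptions only; actual design correlations such as those of Dix (1971) and Chexal (1991) express these parameters as continuous functions of the flow conditions.

```python
# Drift-flux estimate of the cross-section-averaged void fraction.
# Illustrative only: C0 and v_gj are taken as simple constants here,
# whereas design correlations make them functions of flow conditions.

def void_fraction(G, x, rho_l, rho_g, C0=1.13, v_gj=0.24):
    """Drift-flux relation  alpha = j_g / (C0 * j + v_gj), where
    j_g = G*x/rho_g is the vapor superficial velocity and
    j = j_g + G*(1-x)/rho_l is the total volumetric flux.

    G      mass flux [kg/m^2 s]
    x      flow quality [-]
    rho_l  liquid density [kg/m^3]
    rho_g  vapor density [kg/m^3]
    C0     distribution parameter [-] (assumed value)
    v_gj   vapor drift velocity [m/s] (assumed value)
    """
    j_g = G * x / rho_g            # vapor superficial velocity
    j_f = G * (1.0 - x) / rho_l    # liquid superficial velocity
    j = j_g + j_f                  # total volumetric flux
    return j_g / (C0 * j + v_gj)

# Approximate saturated water/steam densities near 6.9 MPa:
rho_l, rho_g = 740.0, 36.0
for x in (0.05, 0.10, 0.20):
    print(f"x = {x:.2f}: alpha = {void_fraction(1500.0, x, rho_l, rho_g):.2f}")
```

The example shows the characteristic behavior noted above: even modest flow qualities of 10% to 20% correspond to void fractions well above 50%, because of the large liquid-to-vapor density ratio.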
One-dimensional pressure drop predictions typically use average values for void fraction and flow quality in combination with standard two-phase flow formulations and correlations. Single-phase pressure drop data associated with wall friction and local losses are necessary to provide the reference bases for each specific fuel assembly design. Boiling transition limits reflect local liquid film disruptions on individual fuel rods. Those limits are dependent upon both local film flow characteristics and local power distributions across fuel assemblies. Mechanical spacers, which maintain fuel rod spacing along their axial lengths, have important effects on boiling transition limits. Detrimental thinning of the fuel rod liquid films can be caused just upstream of the spacers, while favorable deposition of droplets into the liquid films can be caused just downstream of the spacers. The detrimental upstream effects are usually sufficient to cause initial boiling transitions to occur just upstream of spacers. However, the favorable downstream effects usually dominate, such that boiling transition improves with closer axial pitch of the spacers. Boiling transition limits can vary by 10% due to mechanical features of the spacers. Predictions of boiling transition are often based on one-dimensional averaged parameters, such as critical quality and boiling length, with empirical treatments of local effects. Such correlations can provide very accurate predictions but require extensive calibration with full-scale test results for each specific fuel assembly design and radial power distribution range. This extensive testing requirement is one of the major motivations for more general subchannel analyses methods.
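Before turning to those subchannel methods, the structure of a one-dimensional boiling transition prediction can be illustrated with a generic critical quality versus boiling length relation of the saturating form used in CISE/GEXL-type correlations. The functional form below is a sketch and its coefficients are placeholders, not values from any calibrated design correlation.

```python
# Generic critical-quality vs. boiling-length boiling transition check.
# The saturating form x_c = A * L_B / (B + L_B) mimics CISE/GEXL-type
# correlations; A and B are illustrative placeholders that would, in
# practice, be calibrated against full-scale assembly test data.

def critical_quality(boiling_length_m, A=0.60, B=1.8):
    """Critical quality as a monotonically increasing, saturating
    function of boiling length (m)."""
    return A * boiling_length_m / (B + boiling_length_m)

def margin_to_boiling_transition(x_local, boiling_length_m):
    """Ratio of critical quality to local quality; values above 1
    indicate the local quality is below the boiling transition limit."""
    return critical_quality(boiling_length_m) / x_local

# Example: local quality 0.25 with 3 m of boiling length upstream.
print(critical_quality(3.0))                    # ~0.375
print(margin_to_boiling_transition(0.25, 3.0))  # ~1.5
```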
Subchannel analyses methods are based upon radial solution meshes within the fuel assemblies. Subchannel methods divide fuel assemblies into a number of interacting flow regions for analyses. The models then track liquid films, core vapor and core entrained droplets as separate fields for each of the subchannel regions. The goals for subchannel methods are to provide predictions which are mechanistically based and less dependent upon full-scale calibration experiments, particularly for the development of new fuel designs. Since BWR boiling transition limits are related to the deterioration of annular liquid films flowing on fuel rod surfaces, the subchannel methods incorporate mechanisms to predict and track the net liquid flows in those films, as well as criteria for the minimum film flows corresponding to film disruption and boiling transition. Key aspects for these film predictions are liquid evaporation, entrainment of liquid from the film to the vapor/droplet core and deposition of droplets from the core onto the surface films. Flow distributions among the subchannel regions usually require empirical mechanisms such as ‘void drift’ to achieve the overall flow distributions observed in BWR fuel assembly experiments. The effects of fuel rod spacers are very important in determining boiling transition limits of BWR fuel assemblies, as previously discussed. Current subchannel methods incorporate various modeling assumptions to describe these spacer effects, including empirical parameters which must be calibrated with full-scale boiling transition data for the specific spacer to be analyzed. General modeling of these spacer effects in subchannel analyses will require a further level of detail and mechanistic understanding which is not yet available. Current subchannel modeling methods provide excellent predictions of boiling transition for limited variations from their calibration bases. Void fraction distributions are also predicted quite well by these methods, except for highly heterogeneous conditions across fuel assemblies. In general, subchannel methods are appropriate tools for BWR analyses, but further developments are necessary to significantly reduce dependence on full-scale testing for final design applications.

Parallel flow channels and close coupling between neutronic power and two-phase flow conditions can cause instability conditions in a BWR. Accurate predictions and avoidance of potentially unstable conditions are important for reliable BWR operation. Two types of instability can occur. Density wave oscillations can be driven by the two-phase hydraulics in individual fuel assemblies, with only minor effects from neutronic coupling. Alternatively, core-wide neutronic coupling can cause instabilities in many or all of the fuel assemblies. The fuel assemblies can all have oscillations in-phase, or local regions can oscillate. Accurate predictions of these oscillation conditions require coupled solutions of the neutronics and two-phase flow equations. The same basic one-dimensional formulations for two-phase flow parameters are also applied in stability predictions. Both frequency domain and time domain solution techniques are used successfully for these analyses, as discussed by Lahey (1993). (See Instability, Two-phase.)

One of the most challenging and well-researched areas of BWR analyses is the prediction of safety system and fuel cooling responses to a wide range of accidents that might result in the loss of coolant water from the core. Postulated breaks of pipes connected to the pressure vessel are the primary bases for these accident evaluations. BWR designs include emergency systems to distribute coolant above the core, as well as systems to refill the pressure vessel and reflood the core if it does become uncovered during an accident. Modern BWR designs also include recirculation pumps within the pressure vessel (rotary or jet pumps) to minimize the size of potential breaks to pressure vessel connections. Predictions of fuel cooling under these postulated loss of coolant accident (LOCA) conditions require detailed system simulation methods to analyze the transient two-phase fluid conditions adjacent to the fuel as the postulated accident proceeds. Accurate predictions require modeling of flow and heat transfer with surface temperatures well above Leidenfrost conditions. Elements include falling liquid films, reflooding from below and countercurrent flows of the two phases. Modeling requirements for accurate BWR/LOCA predictions are summarized by Andersen (1985). Large scale experiments have demonstrated that condensation of vapor by subcooled liquid and three-dimensional distribution effects are important aspects determining the progression of emergency coolant to the fuel. A significant effect for BWR/LOCA is the accumulation and drainage of coolant water in the plenum region above the core. Countercurrent flow limitation (CCFL) conditions at the top of fuel assemblies cause accumulation of emergency coolant water in a pool above the core. Some fuel assemblies then experience downward flow of subcooled liquid, while others experience counter-current flows. The results of this complex pattern are rapid reflooding and cooling of the BWR core. These effects and other LOCA studies are summarized by Dix (1983). (See also Flooding and Flow Reversal.)

The Simplified Boiling Water Reactor (SBWR) is a new reactor design which incorporates passive safety features rather than the active safety systems used previously. This design includes a gravity drain pool inside the containment to provide liquid to the core in case of a LOCA. Heat exchangers submerged in large water pools outside the containment absorb energy from within the reactor containment and transport it to the atmosphere. The latter energy transport occurs without any activation in case of a LOCA. The water pools have sufficient capacity to transport decay heat for several days and can be replenished from outside the containment. Since the SBWR is designed to keep the core covered during a LOCA, the primary design limit is containment structure pressure capability. Predictions of containment pressure responses for such events require computer programs that address the entire coupled system.
Condensation on the containment concrete walls and structure, as well as interaction effects between steam and other noncondensable gases, are particularly important for accurate predictions of these slow transient responses.

Andersen, J. G. M., Chu, K. H., Cheung, Y. K., and Shaung, J. C. (1985) BWR Full Integral Simulation Test (FIST) Program, TRAC-BWR Model Development Volume 2 — Models, GEAP-30875-2, NUREG/CR-4127-2, EPRI NP-3987-2.
Chexal, B., Lellouche, G., Horowitz, J., Healzer, J., and Oh, S. (1991) The Chexal-Lellouche Void Fraction Correlation for Generalized Applications, NSAC Report-139.
Collier, J. G. (1980) Convective Boiling and Condensation, McGraw Hill.
Dix, G. E. (1971) Vapor Void Fractions for Forced Convection with Subcooled Boiling at Low Flow Rates, General Electric NEDO-10491 (PhD Thesis, University of California Berkeley).
Dix, G. E. (1983) BWR Loss of Coolant Technology Review, Proceedings ANS Symposium Thermal Hydraulics of Nuclear Reactors, Volume 1.
Lahey, R. T. and Moody, F. J. (1993) The Thermal Hydraulics of a Boiling Water Reactor, ANS Monograph.
Shiralkar, B. S. (1992) Recent Trends in Subchannel Analysis, Proceedings of the International Seminar on Subchannel Analysis, Tokyo.
Yagi, M., Mitsutake, T., Morooka, S., and Inoue, A. (1992) Void Fraction in BWR Fuel Assembly and Evaluation of Subchannel Code, Proceedings of the International Seminar on Subchannel Analysis, Tokyo.
Zuber, N. and Findlay, J. (1965) Average Volumetric Concentration in Two-Phase Flow Systems, Transactions of ASME, Volume 87, Series C.
Bond valuation is the determination of the fair price of a bond. As with any security or capital investment, the theoretical fair value of a bond is the present value of the stream of cash flows it is expected to generate. Hence, the value of a bond is obtained by discounting the bond's expected cash flows to the present using an appropriate discount rate. In practice, this discount rate is often determined by reference to similar instruments, provided that such instruments exist. Various related yield-measures are then calculated for the given price. Where the market price of a bond is less than its face value (par value), the bond is selling at a discount. Conversely, if the market price of a bond is greater than its face value, the bond is selling at a premium. For this and other relationships between price and yield, see below. If the bond includes embedded options, the valuation is more difficult and combines option pricing with discounting. Depending on the type of option, the option price as calculated is either added to or subtracted from the price of the "straight" portion. See further under Bond option. This total is then the value of the bond. As above, the fair price of a "straight bond" (a bond with no embedded options; see Bond (finance) § Features) is usually determined by discounting its expected cash flows at the appropriate discount rate. The formula commonly applied is discussed initially. Although this present value relationship reflects the theoretical approach to determining the value of a bond, in practice its price is (usually) determined with reference to other, more liquid instruments. The two main approaches here, Relative pricing and Arbitrage-free pricing, are discussed next. Finally, where it is important to recognise that future interest rates are uncertain and that the discount rate is not adequately represented by a single fixed number—for example when an option is written on the bond in question—stochastic calculus may be employed.

Present value approach

Below is the formula for calculating a bond's price, which uses the basic present value (PV) formula for a given discount rate. This formula assumes that a coupon payment has just been made; see below for adjustments on other dates.

$$P = \frac{C}{i}\left[1 - (1+i)^{-N}\right] + M(1+i)^{-N}$$

where:
- F = face value
- iF = contractual interest rate
- C = F * iF = coupon payment (periodic interest payment)
- N = number of payments
- i = market interest rate, or required yield, or observed / appropriate yield to maturity (see below)
- M = value at maturity, usually equals face value
- P = market price of bond.
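As a quick illustration of this formula, here is a minimal Python sketch that prices a straight bond just after a coupon date, using the variable names defined above. The 10-year, 5%-coupon example values are illustrative, not taken from the article.

```python
def bond_price(F, i_F, N, i):
    """Price a straight bond just after a coupon payment.

    F    face value
    i_F  contractual (coupon) interest rate per period
    N    number of remaining coupon payments
    i    market interest rate / required yield per period
    The value at maturity M is taken equal to F, as is usual.
    """
    C = F * i_F                        # periodic coupon payment
    M = F
    annuity = (1 - (1 + i) ** -N) / i  # PV factor for the coupon stream
    return C * annuity + M * (1 + i) ** -N

# 10-year annual-pay bond, 5% coupon, priced at a 6% required yield:
print(round(bond_price(F=1000, i_F=0.05, N=10, i=0.06), 2))  # ~926.40
```

Priced at a required yield above its coupon rate, the bond sells at a discount, consistent with the price and yield relationships discussed below.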
Relative price approach

Under this approach—an extension, or application, of the above—the bond will be priced relative to a benchmark, usually a government security; see Relative valuation. Here, the yield to maturity on the bond is determined based on the bond's Credit rating relative to a government security with similar maturity or duration; see Credit spread (bond). The better the quality of the bond, the smaller the spread between its required return and the YTM of the benchmark. This required return is then used to discount the bond cash flows, replacing $i$ in the formula above, to obtain the price.

Arbitrage-free pricing approach

As distinct from the two related approaches above, a bond may be thought of as a "package of cash flows"—coupon or face—with each cash flow viewed as a zero-coupon instrument maturing on the date it will be received. Thus, rather than using a single discount rate, one should use multiple discount rates, discounting each cash flow at its own rate. Here, each cash flow is separately discounted at the same rate as a zero-coupon bond corresponding to the coupon date, and of equivalent credit worthiness (if possible, from the same issuer as the bond being valued, or if not, with the appropriate credit spread). Under this approach, the bond price should reflect its "arbitrage-free" price, as any deviation from this price will be exploited and the bond will then quickly reprice to its correct level. Here, we apply the rational pricing logic relating to "Assets with identical cash flows". In detail: (1) the bond's coupon dates and coupon amounts are known with certainty. Therefore, (2) some multiple (or fraction) of zero-coupon bonds, each corresponding to the bond's coupon dates, can be specified so as to produce identical cash flows to the bond. Thus (3) the bond price today must be equal to the sum of each of its cash flows discounted at the discount rate implied by the value of the corresponding ZCB. Were this not the case, (4) the arbitrageur could finance his purchase of whichever of the bond or the sum of the various ZCBs was cheaper, by short selling the other, and meeting his cash flow commitments using the coupons or maturing zeroes as appropriate. Then (5) his "risk free", arbitrage profit would be the difference between the two values.
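A sketch of this cash-flow-by-cash-flow discounting follows, reusing the notation above. The upward-sloping zero-coupon curve in the example is a made-up illustration, not market data.

```python
def bond_price_arbitrage_free(F, i_F, zero_rates):
    """Price a bond as a package of cash flows, discounting each at the
    zero-coupon rate for its own maturity.

    F           face value
    i_F         contractual (coupon) rate per period
    zero_rates  zero_rates[t-1] is the zero-coupon yield for the cash
                flow paid at the end of period t
    """
    C = F * i_F
    N = len(zero_rates)
    price = 0.0
    for t, z in enumerate(zero_rates, start=1):
        cash_flow = C + (F if t == N else 0.0)  # add redemption at maturity
        price += cash_flow / (1 + z) ** t       # each flow priced as a ZCB
    return price

# Hypothetical upward-sloping zero curve for a 3-year annual-pay bond:
print(round(bond_price_arbitrage_free(1000, 0.05, [0.040, 0.045, 0.050]), 2))
```

Any difference between this sum and the bond's market price is exactly the arbitrage profit described in step (5) above.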
Stochastic calculus approach

When modelling a bond option, or other interest rate derivative (IRD), it is important to recognize that future interest rates are uncertain, and therefore, the discount rate(s) referred to above, under all three cases—i.e. whether for all coupons or for each individual coupon—is not adequately represented by a fixed (deterministic) number. In such cases, stochastic calculus is employed. The following is a partial differential equation (PDE) in stochastic calculus, which, by arbitrage arguments, is satisfied by any zero-coupon bond $P(t)$, over (instantaneous) time $t$, for corresponding changes in $r$, the short rate:

$$\frac{\partial P}{\partial t} + \frac{1}{2}\sigma(r)^2\frac{\partial^2 P}{\partial r^2} + \left[\mu(r) - \lambda(r)\sigma(r)\right]\frac{\partial P}{\partial r} - rP = 0$$

where $\mu$ and $\sigma$ are the drift and volatility of the short rate, and $\lambda$ is the market price of risk. The solution to the PDE (i.e. the corresponding formula for bond value) — given in Cox et al. — is:

$$P(t,T) = E_t^*\!\left[e^{-\int_t^T r(s)\,ds}\right]$$

where $E_t^*$ is the expectation with respect to risk-neutral probabilities, and $r(s)$ is a random variable representing the discount rate; see also Martingale pricing. To actually determine the bond price, the analyst must choose the specific short-rate model to be employed; commonly used approaches include the Cox-Ingersoll-Ross, Black-Derman-Toy and Hull-White models. Note that depending on the model selected, a closed-form ("Black like") solution may not be available, and a lattice- or simulation-based implementation of the model in question is then employed. See also Bond option § Valuation.

Clean and dirty price

When the bond is not valued precisely on a coupon date, the calculated price, using the methods above, will incorporate accrued interest: i.e. any interest due to the owner of the bond since the previous coupon date; see day count convention. The price of a bond which includes this accrued interest is known as the "dirty price" (or "full price" or "all in price" or "cash price"). The "clean price" is the price excluding any interest that has accrued. Clean prices are generally more stable over time than dirty prices. This is because the dirty price will drop suddenly when the bond goes "ex interest" and the purchaser is no longer entitled to receive the next coupon payment. In many markets, it is market practice to quote bonds on a clean-price basis. When a purchase is settled, the accrued interest is added to the quoted clean price to arrive at the actual amount to be paid.

Yield and price relationships

Once the price or value has been calculated, various yields relating the price of the bond to its coupons can then be determined.

Yield to maturity

The yield to maturity (YTM) is the discount rate which returns the market price of a bond without embedded optionality; it is identical to $i$ (required return) in the above equation. YTM is thus the internal rate of return of an investment in the bond made at the observed price. Since YTM can be used to price a bond, bond prices are often quoted in terms of YTM. To achieve a return equal to YTM, i.e. where it is the required return on the bond, the bond owner must:
- buy the bond at price $P$,
- hold the bond until maturity, and
- redeem the bond at par.

The coupon rate is simply the coupon payment $C$ as a percentage of the face value $F$. Coupon yield is also called nominal yield. The current yield is simply the coupon payment $C$ as a percentage of the (current) bond price $P$. The concept of current yield is closely related to other bond concepts, including yield to maturity and coupon yield. The relationship between yield to maturity and the coupon rate is as follows:
- When a bond sells at a discount, YTM > current yield > coupon yield.
- When a bond sells at a premium, coupon yield > current yield > YTM.
- When a bond sells at par, YTM = current yield = coupon yield.

Duration is a linear measure of how the price of a bond changes in response to interest rate changes. It is approximately equal to the percentage change in price for a given change in yield, and may be thought of as the elasticity of the bond's price with respect to discount rates. For example, for small interest rate changes, the duration is the approximate percentage by which the value of the bond will fall for a 1% per annum increase in market interest rate. So the market price of a 17-year bond with a duration of 7 would fall about 7% if the market interest rate (or more precisely the corresponding force of interest) increased by 1% per annum. Convexity is a measure of the "curvature" of price changes. It is needed because the price is not a linear function of the discount rate, but rather a convex function of the discount rate. Specifically, duration can be formulated as the first derivative of the price with respect to the interest rate, and convexity as the second derivative (see: Bond duration closed-form formula; Bond convexity closed-form formula; Taylor series). Continuing the above example, for a more accurate estimate of sensitivity, the convexity score would be multiplied by the square of the change in interest rate, and the result added to the value derived by the above linear formula.

In accounting for liabilities, any bond discount or premium must be amortized over the life of the bond. A number of methods may be used for this depending on applicable accounting rules. One possibility is that the amortization amount in each period is calculated from the following formula:

$$a_{n+1} = |iP - C|(1+i)^n$$

where $a_{n+1}$ is the amortization amount in period number $n+1$, and

$$\text{Bond Discount or Bond Premium} = |F - P| = a_1 + a_2 + \cdots + a_N$$
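Tying the yield and sensitivity measures together, the sketch below solves for YTM by bisection and estimates duration and convexity by finite differences; it reuses the `bond_price` function from the earlier sketch, and the inputs are again illustrative.

```python
def yield_to_maturity(price, F, i_F, N, lo=1e-9, hi=1.0, tol=1e-10):
    """Solve bond_price(F, i_F, N, i) = price for i by bisection;
    price is a strictly decreasing function of the yield."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(F, i_F, N, mid) > price:
            lo = mid  # model price too high -> yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

def duration_and_convexity(F, i_F, N, i, h=1e-5):
    """Finite-difference estimates: duration as the negated first
    derivative of price w.r.t. yield scaled by price, convexity as
    the second derivative scaled by price."""
    p0 = bond_price(F, i_F, N, i)
    p_up = bond_price(F, i_F, N, i + h)
    p_dn = bond_price(F, i_F, N, i - h)
    duration = -(p_up - p_dn) / (2 * h * p0)
    convexity = (p_up - 2 * p0 + p_dn) / (h * h * p0)
    return duration, convexity

ytm = yield_to_maturity(926.40, F=1000, i_F=0.05, N=10)
print(round(ytm, 4))                          # ~0.06, matching the earlier example
print(duration_and_convexity(1000, 0.05, 10, 0.06))
```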
See also
- Asset swap spread
- Bond convexity
- Bond duration
- Bond option
- Clean price
- Coupon yield
- Current yield
- Dirty price
- Option-adjusted spread
- Yield to maturity

References
- Staff, Investopedia (8 May 2008). "Amortizable Bond Premium".
- Fabozzi, 1998.
- "Advanced Bond Concepts: Bond Pricing". investopedia.com. 6 September 2016.
- For a derivation, analogous to Black-Scholes, see David Mandel (2015). Understanding Market Price of Risk, Florida State University.
- John C. Cox, Jonathan E. Ingersoll and Stephen A. Ross (1985). A Theory of the Term Structure of Interest Rates, Econometrica 53:2.

Selected bibliography
- Guillermo L. Dumrauf (2012). "Chapter 1: Pricing and Return". Bonds, a Step by Step Analysis with Excel. Kindle Edition.
- Frank Fabozzi (1998). Valuation of Fixed Income Securities and Derivatives (3rd ed.). John Wiley. ISBN 978-1-883249-25-0.
- Frank J. Fabozzi (2005). Fixed Income Mathematics: Analytical & Statistical Techniques (4th ed.). John Wiley. ISBN 978-0071460736.
- R. Stafford Johnson (2010). Bond Evaluation, Selection, and Management (2nd ed.). John Wiley. ISBN 0470478357.
- Mayle, Jan (1993). Standard Securities Calculation Methods: Fixed Income Securities Formulas for Price, Yield and Accrued Interest, vol. 1 (3rd ed.). Securities Industry and Financial Markets Association. ISBN 1-882936-01-9.
- Donald J. Smith (2011). Bond Math: The Theory Behind the Formulas. John Wiley. ISBN 1576603067.
- Bruce Tuckman (2011). Fixed Income Securities: Tools for Today's Markets (3rd ed.). John Wiley. ISBN 0470891696.
- Pietro Veronesi (2010). Fixed Income Securities: Valuation, Risk, and Risk Management. John Wiley. ISBN 978-0470109106.
- Malkiel, Burton Gordon (1962). "Expectations, Bond Prices, and the Term Structure of Interest Rates". The Quarterly Journal of Economics.
- Mark Mobius (2012). Bonds: An Introduction to the Core Concepts. John Wiley. ISBN 978-0470821473.

External links
- Bond Valuation, Prof. Campbell R. Harvey, Duke University
- A Primer on the Time Value of Money, Prof. Aswath Damodaran, Stern School of Business
- Basic Bond Valuation, Prof. Alan R. Palmiter, Wake Forest University
- Bond Price Volatility, Investment Analysts Society of South Africa
- Duration and Convexity, Investment Analysts Society of South Africa
I am absolutely convinced that lack of sleep contributes to the majority of our issues today- irritability, violence, stress, memory issues, distraction, accidents, making unsound decisions, fatigue, poor health, obesity. . . There’s an epidemic, and you’re part of it. The Centers for Disease Control and Prevention reports that Americans are in the middle of a sleep loss epidemic. Nearly eight in ten Americans say they would feel better and more prepared for the day if they had just one more hour of sleep. If you’ve ever spent a night tossing and turning, you already know how you’ll feel the next day- tired, cranky, and out of sorts. Missing out on the recommended 7 to 9 hours of shut-eye nightly does more than make you feel groggy and grumpy. The long-term effects of sleep deprivation are real. Sleep is a necessity as critical to life as breathing, and it affects every aspect of your life, from your productivity, to your health, to your mood. While most of us assume that sleep hours cut into our productive hours, we’re actually more productive when we get sufficient sleep. So while it may seem counterintuitive, your production will increase because you’ll have more energy and be able to think more clearly while working smarter and more efficiently. However, there are other consequences of poor sleep that aren’t always as obvious. Without enough sleep, your brain and body systems won’t function normally. It can also dramatically lower your quality of life. A review of 16 studies found that sleeping for less than 6 to 8 hours a night increases the risk of early death by about 12 percent! Your central nervous system is the information highway of your body. Sleep is necessary to keep it functioning properly; chronic insomnia can disrupt how your body sends this information. Research suggests sleep deprivation can negatively affect your immune system, as well as contribute to weight gain, high blood pressure, cancer, heart disease, stroke, diabetes, bone loss, and depression. Sleep deprivation can also impair learning, alertness, concentration, judgement, problem solving and reasoning. Lack of sleep disrupts every physiologic function in the body. To make matters worse, lack of sleep hinders your ability to realize your own performance is impaired, making you think you’re functioning well when you probably aren’t. Now that we know sleep is necessary, it’s up to each of us to make sure we get enough. In the end, getting a better night’s sleep will help us all lead better lives. Behaviors during the day, and especially before bedtime, can have a major impact on your sleep. Daily routines- what you eat and drink, the medications you take, how you schedule your days, and how you choose to spend your evenings- can significantly impact the quality of sleep. Even a few slight adjustments can, in some cases, mean the difference between sound sleep and a restless night. Sleep hygiene is a variety of different practices and habits that help us to fall asleep and stay asleep. They encourage a restful night, allowing us to awaken refreshed and ready for another day. After researching multiple sites and journals, I found the following list best represented healthy sleep hygiene tips:
• Keep a consistent sleep schedule. Get up at the same time every day, even on weekends or during vacations.
• Set a bedtime that is early enough for you to get at least 7 hours of sleep.
• Don’t go to bed unless you are sleepy. Struggling to fall asleep just leads to frustration.
If you’re not asleep after 20 minutes, get out of bed, go to another room, and do something relaxing, like reading or listening to music until you are tired enough to sleep. The same applies if you wake in the middle of the night. Get up after 20 minutes and return when feeling tired.
• Establish a relaxing bedtime routine. Ease the transition from wake time to sleep time with a period of relaxing activities an hour or so before bed. Take a bath (the rise, then fall in body temperature promotes drowsiness), use aromatherapy, read a book, watch television, or practice relaxation exercises. Avoid stressful, stimulating activities like doing work or discussing emotional issues. Physically and psychologically stressful activities can cause the body to secrete the stress hormone cortisol, which is associated with increasing alertness. If you tend to take your problems to bed, try writing them down, and then putting them aside.
• Use your bed only for sleep and sex.
• It is well documented that keeping your bedroom quiet and relaxing contributes to better sleep. A quiet, dark, and cool environment can help promote sleep. Why do you think bats congregate in caves for their daytime sleep? “To achieve such an environment, lower the volume of outside noise with earplugs or a “white noise” appliance. Use heavy curtains, blackout shades, or an eye mask to block light, a powerful cue that tells the brain that it’s time to wake up. Keep the temperature comfortably cool, between 60 and 75°F, and the room well ventilated,” says one Harvard study.
• Limit exposure to bright light in the evenings, including electronic equipment. Dim the lights and turn off all your devices, smartphones, laptops, and TVs, about 60 minutes before bedtime. Bright light is one of the biggest triggers to our brains that it’s time to be awake and alert, so start sending the opposite signal. Even think of taking the computer and TV out of the room altogether. Keeping computers, TVs, and work materials out of the room will strengthen the mental association between your bedroom and sleep.
• Don’t eat a large meal before bedtime. Your body isn’t meant to be digesting foods while sleeping. If you need a late night snack, eat light.
• Exercise is important to help your body feel ready for sleep. Even just taking a walk can get your blood moving and improve your sleep. It’s best to complete your workouts at least 2 hours before you go to bed so your body is ready to rest.
• Avoid consuming caffeine, alcohol, chocolates, nicotine, or any type of stimulant at least 6 hours before sleep (although alcohol may bring on sleep, after a few hours it acts as a stimulant). Nicotine changes the amount of time spent in each sleep cycle. It is estimated 1.2 minutes of sleep is lost for every cigarette smoked, and smokers are 4 times more likely to report feeling unrefreshed after a night’s sleep than nonsmokers (probably because they spend more time in light sleep).
• Reduce your fluid intake before bedtime to prevent or lessen the number of bathroom calls.
• Try not to nap, or make it earlier in the day- limit to only 30 minutes.
• Don’t stare at the clock. According to one Harvard study, staring at a clock in your bedroom, either when you are trying to fall asleep or when you wake in the middle of the night, can actually increase stress, making it harder to fall asleep. Turn your clock’s face away from you.
• As a loving pet owner, this one hurts- keep furry friends out of the bed.
They may feel comforting at first, but their presence keeps you attuned to their needs and limits your sleeping space. They can also trigger allergies.
• Slip on socks. Some people have the unlucky lot in life of colder-than-comfortable extremities. According to a 1999 study, having warm hands and feet seemed to predict how quickly participants fell asleep.
• Breathe deeply. Deep breathing mimics how your body feels when it's already relaxed because it stimulates the body's naturally calming parasympathetic system. Try breathing in to the count of five, holding your breath for the count of five, breathing out to the count of five, and holding your breath for the count of five. After a few minutes you'll be more relaxed and calm.
• It is estimated that 37 million Americans snore regularly. Snoring certainly disturbs a bed partner's sleep, but it can disrupt the snorer's sleep, too, leading to more daytime sleepiness, according to the National Sleep Foundation. Some simple tips may help you keep it under control, like sleeping on your side instead of your back, avoiding alcohol before bed, and even losing weight. Many experts recommend sewing a tennis ball into the front pocket of an old t-shirt and then wearing it backwards, making sleeping on your back uncomfortable enough to help you stay on your side.
• Try progressive muscle relaxation exercises, which involve tensing and then releasing the muscles throughout the body, directing your attention to each as you go. This can improve sleep quality and reduce fatigue. Imagine yourself somewhere calm, relaxing, and sleep-inducing; this deep relaxation method can slow brain wave activity, coaxing you toward sleep.
• If your bed partner is constantly stealing all the covers, or one of you sweats while the other shivers, it might be a good idea to use separate bed sheets and covers. Use only one fitted sheet to start, then individual top sheets and blankets. Also make sure the mattress is big enough for two people. As much as cuddling works when you're awake, it hampers sleep when you can't stretch and maneuver.
• Believe it or not, lots of tossing and turning may be less about you and more about what you're lying on. An uncomfortable mattress might be the source of your sleepless nights, whether because it has lost its cushioning or because it's simply too small, so it's important to recognize the signs that it's time to buy a new one. Expect to make a swap every 5-10 years, according to Consumer Reports.
• Talk to your healthcare provider if insomnia persists. It may be due to obstructive sleep apnea, prostate issues, restless legs, or medications.

Regardless of our hectic lives, we all require sleep. There's no question that getting a full 7-9 hours of sleep will not only help you have a healthier, happier life but will help everyone around you as well. If needed, keep a sleep diary to better understand your habits. Tune in Wednesday to read how lack of sleep can contribute to weight gain. That's right, less sleep can mean more pounds! For more information, visit the Division of Sleep Medicine at Harvard here.
In From Spinster to Career Woman: Middle-Class Women and Work in Victorian England, Arlene Young explores changing perceptions of women's work in mid-Victorian England and the lingering anxieties surrounding the growing cultural acceptance of the figure of the middle-class working woman. This book offers a fresh perspective on the Victorian period and will be a welcome addition to the bookshelves of anyone interested in women's history, British history and labour studies, recommends Katelan Dunn.

From Spinster to Career Woman: Middle-Class Women and Work in Victorian England. Arlene Young. McGill-Queen's University Press. 2019.

Spinster: 'an unmarried woman and especially one past the common age for marrying'.

In From Spinster to Career Woman, Arlene Young takes us on a fascinating, complex and radical journey exploring women's work in mid-Victorian England. Young examines cultural perceptions of women at the time, as dictated by Victorian ideals of womanhood, alongside representations of women's work, anxieties over the disproportionate ratio of eligible (and marriageable) women and men in the period, and women's desire for personal and financial freedom. In mid-Victorian England, custom and tradition dictated female dependence, with assumptions about a woman's 'nature' deeply entrenched in society. However, the demographic reality of a skewed male-to-female ratio in mid-nineteenth-century Victorian England:

presented a challenge to the values and assumptions of the nation […] The lack of husbands presented special difficulties for women of the middle classes, women who were not raised to work, who had neither the education nor training for work, and whose family fortunes were not extensive enough to provide life-long support for unemployed spinsters (18).

The realisation that women would need jobs to support themselves opened space in the social framework for ideas challenging the relationship between dependency and femininity, and the separation of public and private spheres. As a result, debates 'shifted from […] speculations about what could, or should, or would happen if middle-class women were to be admitted to the ranks of the educated, the trained, and the gainfully employed to more practical considerations of the kinds of work in which women were actually engaging' (25-26).

The recognition that women needed work wasn't enough to transform Victorian ideals of femininity and motherhood completely, and those who positioned themselves within the educational and employment debates were ambivalent. In Chapter One, Young describes how, on the one hand, opponents argued that a woman's place was in the home and her innate purpose was to be a wife, with educational and employment opportunities undermining the most natural of female roles ordained to women by Nature: that of motherhood. On the other hand, proponents demonstrated a more enlightened position, advocating for the necessity of education and employment opportunities as a basis for the full development of a woman's moral and intellectual character, private happiness and personal fulfillment. Others, still, argued that an education was necessary because 'a better education […will] fit them to do their duty' (23). In other words, this 'call for more opportunities becomes an argument not for educating women for new roles in society but for educating them to a becoming appreciation of the status quo' (23), with education and employment helping women to hone the skills required of good mothers and useful wives.
Young states that occupations deemed respectable, professional and steady during this time included nursing and teaching, both of which 'represented work that conformed to Victorian ideals of femininity: caring for the sick and teaching children' (27) and were regarded as both honoured and honourable. While middle-class women in the nursing profession, for example, demonstrated a specialised degree of knowledge, their work was seen as an extension of domesticity rather than a challenge to the status quo. Therefore, public recognition and representations of nursing in Victorian fiction, in periodical presses and in mainstream media were largely based on nursing as charity and philanthropy rather than skilled work.

Image credit: An army nurse, watercolour drawing (Wellcome Collection, CC BY 4.0)

However, Young argues that the cultural acceptance of middle-class working women arose when prominent publications for upper-middle-class ladies began adding articles on women's work, with contributors recasting and redefining what it means to be a woman. Here we see writer Frances Martin's notion of the 'Glorified Spinster' (an independent, vibrant, confident career woman), George Whyte-Melville's figure of the 'Strong Minded Woman' and the cultural icons of Mary Carpenter, Sister Dora and Florence Nightingale: all of these helped to reinforce and solidify nursing as 'the vanguard of professionalized work for Victorian women' (38) and as a model of self-sacrifice and philanthropy, both Victorian ideals of womanhood. Important to note here is the interconnection between work and the class system, creating a dichotomy between educated middle-class women accustomed to affluence and leisure and those occupying the lower echelons of the class system, relegated to domestic servant work requiring physical exertion. This class division further reinforced ideals of the refined, cultured, cultivated woman and shaped the kind of remunerative work being done.

In Chapter Two, we read about the difficult trajectory 'of the lady nurse from selfless volunteer to trained and efficient professional' (39), including antagonistic working relations between doctors and lady nurses, reforms in hospital management and professional hierarchies. Furthermore, Young touches on how cultural stereotypes, like Charles Dickens's disreputable Sairey Gamp in Martin Chuzzlewit (1843-44) and the Strong-Minded Woman, were invoked to dismiss nurses' educational attainment and professional training and undermine their credibility. These brought the 'nursing question' to the forefront of public consciousness and established the nursing debate (should they or shouldn't they?) as a pressing public issue. Also notable is the marginalisation of men from nursing to such an extent that they were excluded entirely from the College of Nursing (established in 1916), typically only working in military hospitals and asylums. Here, we begin to see nursing transition from an exclusively male field to an almost solely feminine one.

Gender and class ideologies were further reinforced during this time by Victorian fiction. Many novels transition from ideals of womanhood and motherhood to centring on the heroic struggle confronting women to define themselves as educated, civically engaged, working members of the economy 'within a culture that preferred to limit them to the domestic realm' (60). One powerful piece of Victorian fiction that Young discusses is Elizabeth Gaskell's Ruth (1853).
This novel mirrors the cultural anxieties of the time and the paradox between Victorian conventions dictating behaviour, dress and moral character and the emerging need for women to financially support themselves. Despite Ruth having to work, she redeems herself by the work she participates in (sewing, deemed domestic and ladylike, and governessing, deemed respectable), while also exhibiting Victorian ideals of femininity: beauty, modesty, obedience, respectfulness, innocence and selflessness.

Young further explores the significance of types of work in Chapter Four with respect to the typewriter. Here we see the same sort of cultural discomfort with the typist as with the lady nurse. However, 'the typewriter presented very different representational problems […] Unlike the figure of the middle-class nurse, she did not carry the baggage of preconceived notions from earlier incarnations; there were no images of Sairey Gamps to overcome' (125). The typewriter offered both the promise and threat of independence; while nursing was attached to both professionalism and caregiving, the womanliness of the typist was in question due to 'the idea of attractive, accomplished, and marriageable young women working in close proximity to men' (126). Using a plethora of portrayals of typists in Victorian fiction, Young shows how representations were predominantly negative. She highlights the characterisation of typists as lonely, vulgar and unwomanly, while the typewriter itself was defeminised, being described as aesthetically unpleasing, masculine, loud and disruptive. Not only was the typewriter regarded as a threat to the potential of a competent professional woman, but she was also represented as a vehicle for women to reach a higher level of independence and to postpone or completely eschew marriage.

Young concludes her book with the question that weighed heavily on the public's consciousness at the time: what shall we do with our daughters? This presents both 'the sense of patriarchal concern and proprietary claim regarding ''our'' daughters' (164) and the stark reality that women during this time remained 'caught between professionalism and womanliness' (157): women were regarded as 'either too professional to be feminine or too feminine to be professional' (157). There was no secure or clear identity for women to adopt. Yet, as Young writes: 'In 1860, the answer to the question ''what is to be done with our girls?'' would have been to find her a suitable husband. By the 1890s, the answer was the one that forward thinkers had been insisting on for decades: educate her for employment' (169).

One might argue that the occupational status of the professions highlighted in Young's book (nursing, teaching and typing) remains heavily female-dominated in the twenty-first century. Women on average receive less pay and less occupational prestige than their male counterparts. The 'glass ceiling' still presents barriers to women's advancement to the upper echelons of management, and women are still more likely to perform what Arlie Hochschild describes as a double day: juggling the demands of work and domestic responsibilities. The term 'spinster' is also still very much in the public consciousness, with its male equivalent, 'bachelor', carrying less pejorative connotations. In an excerpt from the 2014 article 'Don't Call Me A Spinster', Claudia Connell writes: When it comes to the spinster, society just can't seem to make its peace with us.
The stereotypical image of long ago of the oddball woman in the village who makes people feel a bit uncomfortable still sticks. The notion of the happy, unattached female is a myth as far as most are concerned.

While collectively we have recognised the need for greater and equal opportunities in education and employment for women, we still idealise outdated models of what it means to be a woman. While cultural tensions are obviously less acute than they were in Victorian England, they persist in other ways. Arlene Young's book provides a fresh perspective on the Victorian period. From Spinster to Career Woman would be a welcome addition to the bookshelves of anyone interested in women's history, British history, labour studies and women's studies.

Note: the above was first published on the LSE Review of Books.

About the Reviewer

Katelan Dunn is a Professor in the Department of Liberal Studies at Conestoga College in Kitchener, Ontario, Canada. Her research interests and publications centre on social inequality and stratification, gender, social policy, cultural sociology and social justice. She is currently a member of the City of Burlington's Inclusivity Advisory Committee and has been published in Canadian, American and European sociological journals.
Deceit dressed up as a deal is still deceit

We send the messengers only as deliverers of good news and warners. Those who disbelieve argue with false argument, in order to defeat the truth thereby. They take My Verses, and the warnings, for a joke. (18:56)

When we read the Quran, we come across the words jadala, ujaadilu, jadal and mujadiloon, which all relate to argumentation, and they crop up in more than 20 verses.

What constitutes an argument

An argument (jidaal) is an exchange of diverging or opposite views, typically a heated or angry one, though not always. The majority of the verses in the Quran refer to argumentation as a negative act. Occasionally it is seen as positive; for instance, Surah Al Mujadilah is named after the female companion Khawla bint Tha'laba, who pleaded the case about her husband to the Prophet (peace be on him). However, this is an exception. Usually argumentation is a negative act. This is why the Prophet warned us against being argumentative.

Jidal has negative connotations

An argument, jidal, is a set of statements that you use to try to convince people of your opinion. We have to distinguish between mujadalah (argument) and muhadatha (conversation). Not every conversation is an argument; as long as it involves exchanging ideas, it is fruitful and useful. However, when a conversation becomes heated and one party tries to force their opinion on the other, refusing to accept any other view, that is bad. When this happens, it is because the nafs (ego) has kicked in. And whenever the nafs kicks in, there is always Shaytan behind it.

The consequences of arguing

Allah Almighty said:

Among the people is he who argues about God without knowledge, and follows every defiant devil. It was decreed for him, that whoever follows him—he will misguide him, and lead him to the torment of the Blaze. (22:3-4)

There were many incidents involving arguments during the life of the Prophet (peace be on him). The example which stands out is the time two companions began shouting as they disputed inside the mosque. As a direct consequence, the knowledge of Lailat-ul-Qadr was lifted, and the Prophet (peace be on him) forgot when it was because of this argument. The argument led to this loss of barakah and guidance.

Allah's Messenger (ﷺ) went out to inform the people about the (date of the) night of decree (Al-Qadr) but there happened a quarrel between two Muslim men. The Prophet (ﷺ) said, 'I came out to inform you about (the date of) the night of Al-Qadr, but as so and so and so and so quarrelled, its knowledge was taken away (I forgot it) and maybe it was better for you. Now look for it in the 7th, the 9th and the 5th (of the last 10 nights of the month of Ramadan).' (Bukhari)

In another narration:

He ﷺ then came to the people and said: O people, Lailat-ul-Qadr was made manifest to me and I came out to inform you about it that two persons came contending with each other and there was a devil along with them and I forgot it. (Muslim)

Argumentation leads you astray

In the hadith narrated by Abu Umamah, the Messenger of Allah (ﷺ) said: 'No people go astray after having been guided, but when they resort to arguing.' (Tirmidhi)

We see from this example that argumentation leads to people becoming misguided. We have many more examples of this from Bani Israel, whose argumentative streak was an endless source of aggravation for Prophet Musa (peace be on him).
Every time a command came from Allah Almighty, his nation bickered, quibbled and argued about it. They asked questions not to understand better, but to pick holes, find loopholes and score points. They were arguing for the sake of arguing: to prove Musa (peace be on him) wrong and to show that they knew better than him.

Avoiding arguments leads to jannah

While arguing leads to misguidance, avoiding arguing, even when you are in the right, leads to Jannah. Abu Umamah Al-Bahili (may Allah be pleased with him) reported that the Messenger of Allah (ﷺ) said: 'I guarantee a house in Jannah for one who gives up arguing, even if he is in the right; and I guarantee a home in the middle of Jannah for one who abandons lying even for the sake of fun; and I guarantee a house in the highest part of Jannah for one who has good manners.' (Abu Dawud)

Retain the moral high ground

There are two sides in an argument. The Prophet (peace be on him) was teaching us not to get involved in jidaal or mirah, which are synonyms. Rather, he taught us to find the common ground and to retain good Akhlaq (manners):

And do not argue with the People of the Scripture except in the best manner possible, except those who do wrong among them. And say, 'We believe in what was revealed to us, and in what was revealed to you; and our God and your God is One; and to Him we are submissive.' (29:46)

Allah Almighty wants us to choose the best way to discuss differences of opinion.

The Trump Deal

Sadly, argumentation has become a profession. We have spin doctors, whose job it is to change the truth: to twist it and come up with arguments to prove others wrong, even when they are right. This is what we have seen in the so-called peace deal Trump has presented on the two states. He has changed the truth and planted falsehood in its place, while claiming that it is peaceful. This is false jidal.

The rotten deal

Trump's deal is nothing but apartheid. It is not peaceful at all. It legitimises the settlements on land stolen from the Palestinians. It goes against over 32 UN resolutions. As long as you have power, you can get away with this theft in broad daylight. If we as Muslims went against one UN resolution, there would be an uproar. As Emily Thornberry, Shadow Secretary of State for Foreign and Commonwealth Affairs, said:

'Let's make no mistake: this so-called "Peace Plan" has nothing in common with the Oslo accords. It destroys any prospect of an independent, contiguous Palestinian state. It legitimises the illegal annexation of Palestinian land for settlers. It puts the whole of Jerusalem under Israeli control. It removes the democratic rights of Palestinians living in Israel, and removes the rights of Palestinian refugees to return to their land. This is not a peace plan, it is a monstrosity, and a guarantee that the next generation of Palestinian children will grow up knowing nothing but fear, violence, and division. But Trump and his administration care nothing about those children's futures; they only care about their own. The question I have today is why on earth our PM and Foreign Secretary are going along with this peace deal and saying that Palestine should get behind it. It is a shameful betrayal of decades of consensus across this House and from one government to another: that we should unswervingly and neutrally support progress towards a two-state solution, a prospect that this rips away.'

The truth will prevail

Those who disbelieve argue with falsehood in order to defeat the truth. But the truth will prevail.
No matter how strong someone might be with their arguments, how much money they have, or how much power and authority, they may get away with making what is right wrong and what is wrong right. They may have millions following their false claims, but does it change the reality? Does it change the truth? No. Never. It does not matter how many people believe in it. The land of the Palestinians is still their land, no matter how many powerful people are behind this fraud. Even if they change the name of the capital, it is still Bait al Maqdis and Al Quds. It is still our capital. No one can take it from us.

Allah Almighty has promised believers that they will get their rights. It is a matter of time; it might take a couple of years, or a hundred years. In the lifetime of nations, a hundred is not much. Our message started with Adam and passed to Nuh, Ibrahim, Musa, Eesa and Muhammad (peace be on them), so it might take a while, but Allah Almighty will make the truth prevail, and the land will go to its owners.

Defend the truth

We must defend the truth even if we are betrayed by our own brothers and sisters. In the hadith, the Messenger of Allah (ﷺ) said:

A group of my ummah will stay faithful to the command of Allah despite those who let them down or disagree with them until the command of Allah arrives; and they are as such. Mu'adh said: and they are in Shaam. (Bukhari and Muslim)

It is a promise. Nevertheless, we do not just believe in the promise and do nothing. We need to do our best in the best possible, legitimate ways. We have to defend our rights: online, offline, lobbying, campaigning, on the streets. No matter what spin doctors do to our reality, we can still uphold the truth and our rights. Allah will support His servants as long as they support each other. Abu Hurairah (may Allah be pleased with him) narrated that the Messenger of Allah (ﷺ) said:

If anyone relieves a Muslim believer from one of the hardships of this worldly life, Allah will relieve him of one of the hardships of the Day of Resurrection. If anyone makes it easy for the one who is indebted to him (while finding it difficult to repay), Allah will make it easy for him in this worldly life and in the Hereafter, and if anyone conceals the faults of a Muslim, Allah will conceal his faults in this world and in the Hereafter. Allah helps His slave as long as he helps his brother. (Muslim)

Al Quds and Palestine are the property of the Palestinians and the Muslim ummah. It is not just for the Palestinians to defend themselves; the ummah has to come together against the Trump deal and those who follow him. Imagine that someone evicted you from your home, then agreed to put you in the shed, and if you complained, called you a terrorist and said, 'We want peace; just give us your house and your shed. This is our property.'

The truth will prevail, but Allah Almighty says:

Of the believers are men who are true to what they pledged to God. Some of them have fulfilled their vows; and some are still waiting, and never wavering. (33:23)

Allah Almighty mentions here 'men (rijaal) who were faithful to their covenant', so we need men of truth to carry this burden. Unfortunately, we don't have many such men in this day and age. We need to be up to the challenge; it is not decreasing, it is increasing. So we need to understand our history. We need to know our rights and defend them. We must not give up. No matter what the pressure is upon us, we must not give up. The whole ummah should come together. I know we are scattered and many of our leaders are afraid.
This is why Israel is enjoying the best time in its history: because we are disunited more than ever. We need to unite ourselves. We ask Allah to enable us to be those who make a difference to reality and make it better. Ameen.

Khutbah delivered by Shaykh Haytham Tamim on 31st January 2020 at the Albanian Mosque. Transcribed by A Khan.
Ticagrelor can lower your chance of having another heart attack or dying from a heart attack or stroke. It is usually taken with a low dose of aspirin.

Ticagrelor is a prescription medication used to lower your chance of having another heart attack or stroke in adults. Ticagrelor belongs to a group of drugs called antiplatelet medications, which help prevent platelets from forming clots that could lead to heart attacks and strokes. This medication comes in tablet form and is usually taken twice a day, with or without food, along with a low dose of aspirin. Some of the common side effects of ticagrelor include bleeding and shortness of breath.

Ticagrelor Genetic Information

CYP2C19 is an enzyme in the blood that is responsible for breaking down ticagrelor and other drugs in the body. Some patients have less of this enzyme in their bodies, which affects how much of the drug gets eliminated. Levels of CYP2C19 can vary greatly between individuals, and those having less of this enzyme are known as "poor metabolizers." CYP2C19 testing is done to determine whether you are a poor metabolizer. If you are a poor metabolizer, the levels of ticagrelor in your blood can become too high, and as a result you may be at an increased risk of having more side effects from ticagrelor. Your doctor may adjust your dose of ticagrelor if you are a poor metabolizer.

Uses of Ticagrelor

Ticagrelor is a prescription medicine used, with aspirin, to prevent heart attacks and strokes in adults. Ticagrelor is used to prevent blood clots and is for people who:
- have had a recent heart attack or severe chest pain that happened because their heart was not getting enough oxygen.
- have had a heart attack or chest pain and are being treated with medicines or with a procedure to open blocked arteries in the heart.
This medication may be prescribed for other uses. Ask your doctor or pharmacist for more information.

Ticagrelor Brand Names

Ticagrelor may be found in some form under the brand name Brilinta.

Ticagrelor Drug Class

Ticagrelor is part of the drug class of antiplatelet medications.

Side Effects of Ticagrelor

Ticagrelor can cause serious side effects, including:
- Serious bleeding. See "Drug Precautions."
- Shortness of breath. Call your doctor if you have new or unexpected shortness of breath when you are at rest, at night, or when you are doing any activity. Your doctor can decide what treatment is needed.
This is not a complete list of ticagrelor side effects. Ask your doctor or pharmacist for more information. Tell your doctor if you have any side effect that bothers you or that does not go away. Call your doctor for medical advice about side effects. You may report side effects to the FDA at 1-800-FDA-1088.

Tell your doctor about all the medicines you take, including prescription and non-prescription medicines, vitamins, and herbal supplements. Ticagrelor may affect the way other medicines work, and other medicines may affect how ticagrelor works. Especially tell your doctor if you take:
- an HIV-AIDS medicine
- medicine for heart conditions or high blood pressure
- medicine for high blood cholesterol levels
- an anti-fungal medicine by mouth
- an anti-seizure medicine
- a blood thinner medicine
- rifampin (Rifater, Rifamate, Rimactane, Rifadin)
This is not a complete list of ticagrelor drug interactions. Ask your doctor or pharmacist for more information.

Drug Precautions

Ticagrelor can cause bleeding that can be serious and sometimes lead to death.
In cases of serious bleeding, such as internal bleeding, the bleeding may result in the need for blood transfusions or surgery. Call your doctor right away if you:
- bruise and bleed more easily
- have nose bleeds
- are having bleeding that is severe or that you cannot control
- have pink, red or brown urine
- are vomiting blood or your vomit looks like "coffee grounds"
- have red or black stools (looks like tar)
- are coughing up blood or blood clots

Do not stop taking ticagrelor without talking to the doctor who prescribes it for you. People who are treated with a stent and stop taking ticagrelor too soon have a higher risk of getting a blood clot in the stent, having a heart attack, or dying. If you stop ticagrelor because of bleeding, or for other reasons, your risk of a heart attack or stroke may increase. When instructed by your doctor, you should stop taking ticagrelor 5 days before you have elective surgery. This will help to decrease your risk of bleeding with your surgery or procedure. Your doctor should tell you when to start taking ticagrelor again, as soon as possible after surgery.

Ticagrelor is taken with aspirin. You should not take a daily dose of aspirin higher than 100 mg because it can affect how well ticagrelor works. Do not take doses of aspirin higher than what your doctor tells you to take.

Do not take ticagrelor if you:
- are bleeding now
- have a history of bleeding in the brain
- have bleeding from your stomach or intestine now (an ulcer)
- have severe liver problems

Ticagrelor Food Interactions

Grapefruit and grapefruit juice may interact with ticagrelor and lead to potentially dangerous effects. Discuss the use of grapefruit products with your doctor.

Before you take ticagrelor, tell your doctor about all of your medical conditions, including if you:
- have had bleeding problems in the past
- have had any recent serious injury or surgery
- plan to have surgery or a dental procedure
- have a history of stomach ulcers or colon polyps
- have lung problems, such as COPD or asthma
- have liver problems
- have a history of stroke
- are pregnant or plan to become pregnant
- are breastfeeding

Tell all of your doctors and dentists that you are taking ticagrelor. They should talk to the doctor who prescribed ticagrelor for you before you have any surgery or invasive procedure. Tell your doctor about all the medicines you take, including prescription and non-prescription medicines, vitamins, and herbal supplements.

Ticagrelor and Pregnancy

Tell your doctor if you are pregnant or plan to become pregnant. The FDA categorizes medications based on safety for use during pregnancy. Five categories (A, B, C, D, and X) are used to classify the possible risks to an unborn baby when a medication is taken during pregnancy. This medication falls into category C. In animal studies, pregnant animals were given this medication and had some babies born with problems. No well-controlled studies have been done in humans. Therefore, this medication may be used if the potential benefits to the mother outweigh the potential risks to the unborn child.

Ticagrelor and Lactation

Tell your doctor if you are breastfeeding or plan to breastfeed. It is not known if ticagrelor passes into your breast milk. You and your doctor should decide if you will take ticagrelor or breastfeed. You should not do both without talking with your doctor.

Ticagrelor comes as a tablet to be taken by mouth, with or without food. It is usually taken two times a day.
Take your doses of ticagrelor around the same time each day. Take ticagrelor with a low dose (not more than 100 mg daily) of aspirin, as directed by your doctor. If you forget to take your scheduled dose of ticagrelor, take your next dose at its scheduled time. Do not take two doses at the same time unless your doctor tells you to. Take ticagrelor exactly as prescribed by your doctor. Follow the directions on your prescription label carefully.
- The recommended loading dose of ticagrelor after an acute coronary syndrome (ACS) event is 180 mg (two 90 mg tablets).
- Following the loading dose, the recommended dose of ticagrelor (Brilinta) is 90 mg twice daily for the first year after an ACS event.
- After one year, ticagrelor 60 mg twice daily is recommended.
- After the initial loading dose of aspirin (usually 325 mg), use ticagrelor with a daily maintenance dose of aspirin of 75-100 mg.
Your doctor will determine how long you should continue to take ticagrelor.

If you take too much ticagrelor, call your healthcare provider or local Poison Control Center, or seek emergency medical attention right away. If ticagrelor is administered by a healthcare provider in a medical setting, it is unlikely that an overdose will occur. However, if overdose is suspected, seek emergency medical attention.

- Store ticagrelor at room temperature, between 59°F and 86°F (15°C and 30°C).
- Keep ticagrelor and all medicines out of the reach of children.

Ticagrelor FDA Warning

WARNING: BLEEDING RISK
- Ticagrelor, like other antiplatelet agents, can cause significant, sometimes fatal, bleeding.
- Do not use ticagrelor in patients with active pathological bleeding or a history of intracranial hemorrhage.
- Do not start ticagrelor in patients planned to undergo urgent coronary artery bypass graft surgery (CABG). When possible, discontinue ticagrelor at least 5 days prior to any surgery.
- Suspect bleeding in any patient who is hypotensive and has recently undergone coronary angiography, percutaneous coronary intervention (PCI), CABG, or other surgical procedures in the setting of ticagrelor.
- If possible, manage bleeding without discontinuing ticagrelor. Stopping ticagrelor increases the risk of subsequent cardiovascular events.

WARNING: ASPIRIN DOSE AND TICAGRELOR EFFECTIVENESS
- Maintenance doses of aspirin above 100 mg reduce the effectiveness of ticagrelor and should be avoided. After any initial dose, use with aspirin 75-100 mg per day.
Pruning is an important part of looking after raspberry bushes. It's key to their health and can be the best way to get maximum results from them. However, it can be easy to forget to do! We've put together this guide to what you can expect if you haven't managed to prune your raspberries.

Three things will happen if you do not prune your raspberries:
- Dead canes will overcrowd the patch, resulting in less room for new or producing canes.
- Berry size will decrease due to overcrowding and decreased airflow and sunlight.
- Crop yield (the total quantity your berry bushes produce) will also be reduced.

That being said, there are several kinds of pruning you can do for your raspberry bushes, each with its own effect, so make sure you know exactly what happens to raspberry bushes when you don't prune them. You'll definitely want to read on to find out more about these things and why they happen!

Here's What Happens to the Canes if You Don't Prune Raspberries

Not pruning back spent raspberry canes will cause the berry bush to look larger as it expands outward. The middle of the bush will be dead, though, as the spent canes prevent sunlight and other resources from reaching the middle of the plant. This will eventually damage the plant and harvest yields.

When you don't prune raspberry bushes, the dead canes end up taking up a lot of space in the bush, which gets in the way of the growth of other, more vigorous canes. The dead canes can block the light from the lower parts of the bush, and all the parts of the bush have to compete with each other for water and nutrients. The denser the bush becomes, the more of its middle will consist solely of dead material. This is something of a missed opportunity, as it wastes valuable berry space! The bush becoming denser also creates dark, damp conditions in which mold, fungus, and infection can take hold.

Removing the weak or dead canes produces the opposite results. Each cane will have plenty of access to sunlight and is free to grow easily. When the bush is well pruned, it has light, ventilated conditions that are not only optimal for its growth but also allow you to identify any problems such as mold, fungal infections, or pests.

Pro tip: prune away the spent canes. This will prevent diseases, overgrowth, dead plants, and all sorts of pests, including snails.

Here's What Happens to the Berry Size of Unpruned Raspberries

Not pruning spent raspberry canes can lead to smaller raspberries, as fewer nutrients reach the producing canes. Skipping the pruning of growing canes (a practice called topping) can also end in smaller berries: topping canes leads to fewer but larger raspberries.

The competition for nutrients caused by crowded, unpruned raspberry bushes creates problems for the growth of berries. Without the necessary things they need to grow, plants prioritize basic survival over putting their excess energy into making tasty berries. Because of this, berries on unpruned raspberry bushes are likely to be smaller and less delicious than those grown on carefully maintained bushes. Of course, we all want to eat those big, juicy, delicious berries, so it's better to get the pruning done and reap the results in the long run. Many sources of gardening advice also point out that even just a little pruning is better than nothing!

Here's What Happens to the Overall Harvest Yield of Unpruned Raspberry Plants

Not pruning raspberries generally leads to a smaller overall harvest, as the spent canes crowd out the yielding canes (reducing harvest yields).
Untopped canes can produce a larger number of raspberries, though they'll generally all be smaller than if you had topped the canes.

Unpruned raspberry bushes yield less, as the space in the bush is taken up by dead plant material instead of productive canes where berries can grow. While it might seem counterproductive to the yield of the plant to remove canes, in the long run this promotes healthier and bigger growth that will produce greater numbers of berries, and they'll be of higher quality.

There's some nuance to pruning the plant properly. It involves understanding the variety of raspberry well, most importantly its growth cycle and how often it fruits. The color of the berry should also be taken into consideration, as black-colored raspberries have a slightly different process from the more usual red-colored ones. To find out more about how to do this properly, check our Backyard Homestead HQ ultimate guide to pruning berries here (**coming soon**).

Pests that Live in Overgrown Raspberries

Overgrown, crowded raspberry bushes are a haven for pests, ranging from insects and mites to fungi and weeds. Pests can eat and corrupt the crop of berries, or cause deeper problems by infecting the plant with diseases. Pruning your bushes well can help prevent pests, but you'll also need to make sure you are weeding and keeping an eye on your plants' water source. Some pesticides are safe to use with raspberries, but consult state regulations before deciding to use them. In general, it is much easier to prevent pests than to remove them. Practicing good pruning habits is therefore an effective method that can also save you further effort down the line!

Here in Utah, the biggest pest problem I've found in our raspberry bushes when they get overgrown is snails – and they get everywhere! Thankfully, pruning the canes regularly and letting our chickens forage in the garden (after the growing season is done, of course) means we haven't seen a snail in our berry patch in several years.

Diseases that Afflict Unpruned Raspberry Bushes

Many diseases can afflict unpruned raspberry bushes. Some waterborne diseases, like Phytophthora root rot, may affect the roots. Other afflictions can cause the plant to wilt, such as verticillium wilt, insect larvae growing inside the plant, or excessive wind exposure. Cane diseases in particular often develop in wet conditions and spread quickly amongst congested bushes. Pruning is a valuable weapon against these, creating open conditions that are difficult for fungi to survive in. In fact, many of the afflictions that affect raspberry bushes can be prevented with pruning. In particular, pest-borne diseases can be prevented, as can those which thrive in the moist, dark conditions of an unpruned bush. At the very least, a well-pruned crop can be inspected easily, so diseases can be spotted early and treated before they become a serious issue. Even after pruning, you should keep an eye on your bushes, noting any discoloration, mold, or fungus quickly and removing it as soon as you can.

Key Takeaways on NOT Pruning Raspberries

We've always worked hard to prune spent raspberry canes (also called floricanes) so that the bearing canes and primocanes (the canes that will bear next year's fruit) have room to thrive and give us a good harvest. That being said, it's only been of late that I've focused on topping the canes to continue to improve the harvest. Topping the canes does several things.
- It helps control any winter damage to the canes. If you haven't read my article on How to Protect Raspberry Plants in Winter: A Step-by-Step Guide, make sure you bookmark it to read in the fall as you prepare your berry patch for winter. I didn't top canes back then, but I do now.
- It also helps limit the number of berries you get, so that the berries you do get are juicier, plumper, and that much more delicious.

Seriously, it's good to top the canes. While the extra quantity of berries seems like it's worth the smaller size, it really isn't in my experience. Without topping, the birds tend to get those smaller, upper berries anyway; thankfully, they leave the lower berries alone, but the berries you're left with on the lower canes are smaller too.

So go ahead and start topping your berries in addition to pruning back the dead canes, so that you can get the most delicious harvest possible. And make sure you read my ultimate guide to pruning, so that you know exactly what you need to do for any and every type of berry.

Cite this article as: "What Happens If You Don't Prune Raspberries? What To Expect." Backyard Homestead HQ, 25 September 2021, backyardhomesteadhq.com/what-happens-if-you-dont-prune-raspberries-what-to-expect/.

It's important to learn from your own experience, but it's also smart to learn from others. These are the sources used in this article and in our personal research to be more informed as homesteaders. 🙂
- Pritts, Marvin P. "Raspberries." Journal of Small Fruit & Viticulture, vol. 4, no. 3-4, 1997, pp. 189-225. Crossref, doi:10.1300/j065v04n03_02.
- Snyder, John. "Growing Raspberries in Washington." Institute of Agricultural Services Bulletin No. 401, January 1951. https://research.wsulibs.wsu.edu/xmlui/bitstream/handle/2376/8929/eb0401_1951.pdf?sequence=1&isAllowed=y
This is the second in a series of articles, which first appeared on iancommunity.org, examining the research on and reality of the transition to adulthood, with advice from experts who have studied the process and young adults who have lived it.

With a broken alarm clock, Zosia Zaks feared oversleeping for an 8:30 a.m. college class. Who wouldn't? But his solution was anything but typical: he decided to sleep in his classroom to make sure he wasn't late. As someone with Asperger's syndrome, he lacked a so-called adaptive skill — in this case, performing the steps needed to replace a clock battery — that makes adult life easier.

Zaks, now a certified rehabilitation counselor and program supervisor at the Hussman Center for Adults with Autism in Towson, Maryland, and other experts say adaptive skills, or skills of daily living, need to be taught explicitly to people on the autism spectrum. Taking a shower, brushing your teeth, riding a bus, crossing the street, shopping for or preparing a meal: all of these are adaptive skills.

Such skills are essential to adulthood. "For example, difficulties with everyday activities such as bathing, cooking, cleaning and handling money could drastically reduce an individual's chance of achieving independence in adulthood," according to a 2013 study.1

Sometimes, parents and teachers of children with autism may focus more attention on imparting academic and behavior-management skills than on daily living skills. Some may assume that daily living skills are less important, or they may believe that a person with average intelligence will learn those skills on his own. In fact, intelligence may have nothing to do with it. Problems with daily living skills "may be especially prominent in those with higher cognitive abilities" and autism, according to the same study.

AVERAGE IQ BUT LOW ADAPTIVE SKILLS?

The study, led by Amie W. Duncan, a psychologist at Cincinnati Children's Hospital Medical Center in Ohio, found "surprising" deficits in daily living skills in teens with autism who have average and above-average intelligence. The study included 417 adolescents with ASD in the Simons Simplex Collection research project. Half of them had daily living skills that were "significantly below" expectations for someone of their age and IQ. About a fourth of the group scored in the low range of adaptive functioning, defined as receiving a score below 70. In other words, these teens' adaptive skills were at the level of someone with mild to moderate intellectual disability.

"It's really shocking that these are really high-functioning adolescents and their adaptive skills are that low," says Duncan.

Daily living skills include personal hygiene and self-care (taking medicine, bandaging a cut), housekeeping, food preparation and getting around the community, she says. The study warned that "addressing these skills prior to the transition to adulthood is crucial if we expect young adults to have the necessary skills to live independently."

Daily living skills are a subset of adaptive behavior, which includes communication, social and relationship skills that are likely to be harder for someone on the spectrum to learn. The research team focused on daily living skills because they are less likely to be affected by the core deficits of autism, Duncan says.
THE COMPLEXITY OF FUNCTIONAL SKILLS

Peter Gerhardt, a behavior expert, helps students with autism learn daily living skills, sometimes called "functional skills." That term has taken on a negative connotation, he said during a 2014 online presentation: when you talk to parents about teaching functional skills, they may view it as "giving up on a kid." However, those skills can be more complex than inferential calculus, Gerhardt explained. Learning how to cross a busy New York City street safely, for example, is a complex task involving visual memory, decision making and motor skills. It's also a skill that allows someone to get to work so he can use his academic skills.

Parents can help by giving children simple chores to do beginning at a young age, says Ernst O. VanBergeijk, associate dean and executive director of the Vocational Independence Program at New York Institute of Technology. For example, a small child can learn to put dirty clothes in a hamper. As he gets older, he can learn to separate light and dark clothes into two piles. Later still, he can put those clothes in the washer, and eventually he will set the washer controls himself to do the laundry, he says.

SHAKESPEARE AND SHOWERING: SKILLS FOR HIGH SCHOOL

Tom Hays, educational director of Franklin Academy, a day and boarding school for students with autism spectrum disorder and nonverbal learning disability in East Haddam, Connecticut, says parents are surprised when he tells them that adaptive skills are more important than some basic high school subjects, such as Shakespeare or the periodic table of the elements. Franklin's program includes instruction in adaptive and social skills, along with the typical college preparatory courses. "We teach the skills you have to have to get along with others, take care of yourself, and self-advocacy," Hays says. Instruction may cover subjects ranging from daily living skills, such as personal hygiene, to more complex dating and relationship skills, he says.

"I have kids with a 145 IQ who walk into my classroom, and they stink," he says. They may think taking a shower means standing under a stream of water for a few seconds and nothing more, he says. Fortunately, Franklin has a curriculum that teaches them how to bathe by breaking a shower down into concrete steps, such as how to use soap and how long to stand under the water.

"For our population, you have to do really explicit teaching," Hays says. One cannot assume that children on the spectrum "will pick up a skill by osmosis or will be able to imitate the skill after watching someone once. What we find is that with more nuanced and sophisticated skills, students have to be taught very explicitly and sequentially how to perform the skill."

Franklin even includes instruction on the complexities of social and relationship skills. "Dating is a big issue for our kids. They are clueless when it comes to dating: What does it mean to be in a relationship? What are the norms or conventions in terms of social expectations? We have to explicitly teach those skills."

TEENS WITH ASD NEED TO PRACTICE OUTSIDE CLASS

Frequently, schools do not talk about adaptive skills until students are at least 12 years old, Gerhardt says, and teens are not given enough opportunities to practice those skills in context. For example, the best place to learn how to order a meal is in a restaurant, not a classroom. Students with ASD are given far more chances to practice academic skills than adaptive ones, he says.
For example, a 5-year-old may get 1,000 chances in a week to learn the names of the colors in a crayon box. When he's 15, however, he may get only one outing every Friday to a fast-food restaurant to learn how to order lunch. "If he goes once a week, it will take 15 years to give him 1,000 instructional opportunities, and the gap between going out one Friday and the next Friday is too long," Gerhardt says.

Families can help by providing extra opportunities for their children to practice daily living skills, VanBergeijk says. "You can teach a student in high school to do his laundry, but if he goes home and Mom or Dad does his laundry, where's the practice he needs?"

Even if a teen seems to have mastered an adaptive skill, he may falter when a problem arises, says Zaks, the certified rehabilitation counselor. In his case, he says, he knew how to shop, how to change a clock battery and how to use money. But when his clock battery stopped working in college, he says, he could not "deploy those skills," which involved stringing together the different tasks. He needed to find time in his schedule to shop, get to a store, remember to bring money and buy the battery. "There are so many nuances that go into this package of decisions and actions," he says. He encourages parents to spell out those nuances to their children. To explain the cues and skills for shopping, for example, "you would say to your child, 'I need new socks. My socks are worn out. Let's look at my schedule. On Friday morning before work I can go to the store to buy socks. I will need to bring my ATM card to the store.'

"I call that technique 'living out loud,' and it should start when kids are very young."

WHAT YOU MAY NOT LEARN IN HIGH SCHOOL

Jennifer Cuff understands the need to teach, and provide opportunities for practicing, adaptive skills. She is both the mother of a daughter with Asperger's syndrome and an adult-service coordinator for a disability-services agency in Coeur d'Alene, Idaho. "There are certain skills that these kids are not given through the high school, and it's difficult to transition from high school to independent living with this huge section of training missing," says Cuff, a member of the Simons Simplex Collection autism research project.

Like many parents, she taught her daughter, Elizabeth, 20, skills such as clipping coupons, shopping, preparing meals, taking the bus to work and taking care of the family cat. She wanted to give her daughter a chance to practice those skills without constant supervision. So she left Liz home alone for a week while the rest of the family moved into a recreational vehicle parked just 15 minutes away. Liz's grandmother lives around the corner, so a relative could reach her quickly if a problem arose. But none did. Cuff checked on Liz during the week, and she was fine.

Ultimately, when one assesses the so-called functional level of a person with autism, his or her adaptive skills are likely to be far more important than his or her academic achievements. Gerhardt says he has had clients with above-average IQs who "spend all day in their parents' basements playing video games," don't bathe and don't interact with others. He also has had clients with intellectual disability who have jobs in the community. "In that scenario, the guy with the lower IQ is the higher-functioning guy," he says. That is why he emphasizes the importance of teaching adaptive skills to everyone with ASD.

1. Duncan, A.W., & Bishop, S.L. (2013).
Understanding the gap between cognitive abilities and daily living skills in adolescents with autism spectrum disorders with average intelligence. Autism, epub 2013 Nov 25.
(Rodrigo, or Ruy, Diaz, Count of Bivar).

The great popular hero of the chivalrous age of Spain, born at Burgos c. 1040; died at Valencia, 1099. He was given the title of seid or cid (lord, chief) by the Moors and that of campeador (champion) by his admiring countrymen. Tradition and legend have cast a deep shadow over the history of this brave knight, to such an extent that his very existence has been questioned; there is, however, no reason to doubt his existence. We must, at the same time, regard him as a dual personality, and distinguish between the historical Cid and the legendary Cid. History paints him as a freebooter, an unprincipled adventurer, who battled with equal vigour against Christians and Moors; who, to further his own ends, would as soon destroy a Christian church as a Moslem temple; who plundered and slew as much for his own gain as from any patriotic motives. It must be borne in mind, however, that the facts which discredit him have reached us through hostile Arab historians, and that to do him full justice he should be judged according to the standard of his country in his day. Vastly different indeed is the Cid of romance, legend, and ballad, wherein he is pictured as the tender, loving husband and father; the gentle, courageous soldier; the noble, generous conqueror, unswervingly loyal to his country and his king; the man whose name has been an ever-present inspiration to Spanish patriotism. But whatever may have been the real adventures of El Cid Campeador, his name has come down to us in modern times in connection with a long series of heroic achievements in which he stands out as the central figure of the long struggle of Christian Spain against the Moslem hosts.

Ferdinand I, at his death (1065), had divided his dominions among his three sons, Sancho, Alfonso, and Garcia, and his two daughters, Elvira and Urraca, exacting from them a promise that they would respect his wishes and abide by the division. But Sancho, to whose lot had fallen the Kingdom of Castile, being the eldest, thought that he should have inherited the entire dominions of his father, and he resolved to repudiate his promise, claiming that it had been forced from him. Stronger, braver, and craftier than his brothers, he cherished the idea of despoiling them and his sisters of their possessions and becoming the sole successor of his father. At this time, Rodrigo Diaz was quite young, and Sancho, out of gratitude for the services of Rodrigo's father to the State, had retained his son at the court and looked after his education, especially his military training. Rodrigo later rendered such distinguished services in the war in which Sancho became involved with Aragon that he was made alferez (standard-bearer or commander-in-chief) of the king's troops. After ending this war with Aragon, Sancho turned his attention to his plan of despoiling his brothers and sisters (c. 1070). He succeeded in adding to his dominion Leon and Galicia, the portions of his brothers, but not until in each instance Rodrigo had come to his rescue and turned apparent defeat into victory. The city of Toro, the domain of his sister Elvira, was taken without trouble. He then laid siege to the city of Zamora, the portion of his sister Urraca, and there met his fate, being treacherously slain before the gates of the city by one of Urraca's soldiers (1072).
Learning this, Alfonso, who had been exiled to the Moorish city of Toledo, set out in haste to claim the dominions of his brother, and succeeded him on the throne as Alfonso VI, though not without opposition from his brother Garcia in Galicia, and especially in Castile, the inhabitants of which objected to a Leonese king. The story is told, though not on the best historical authority, that the Castilians refused Alfonso their allegiance until he had sworn that he had no hand in his brother's death, and that, as none of the nobles was willing to administer the oath for fear of offending him, Rodrigo did so at Santa Gadea before the assembled nobility. If this be true, it would account in a great measure for the ill-will Alfonso bore Rodrigo, and for his subsequent treatment of him. He did not at first show his hatred, but tried to conciliate Rodrigo and the Castilians by bestowing upon him his niece Jimena in marriage (1074). It was not long, however, before he had an opportunity to satisfy his animosity. Rodrigo having been sent by Alfonso to collect tribute from the king of Seville, Alfonso's vassal, he was accused on his return by his enemies of having retained a part of it. Whereupon Alfonso, giving free rein to his hatred, banished him from his dominions (1076).

Rodrigo then began his career as a soldier of fortune, which has furnished themes to Spanish poets of early modern times, and which, idealized by tradition and legend, has made of him the champion of Christian Spain against her Moorish invaders. During this period of his career, he offered his services and those of his followers first to one petty ruler and then another, and often fought on his own account, warring indifferently against Christians and Moors, always with distinguished success, and incidentally rising to great power and influence. But in time of necessity his assistance was sought by Alfonso, and in the midst of a career of conquest he hastened to the latter's support when he was hard pressed by Yusuf, the founder of Morocco. Through some mistake or misunderstanding, however, he failed to join the king, who, listening to the complaints and accusations of the Cid's enemies, took from him all of his possessions, imprisoned his wife and children, and again banished him from his dominions. Disgraced and plundered, the Cid resumed his military operations. Upon his return from one of his campaigns, hearing that the Moors had driven the Christians from Valencia and taken possession of the city, he determined to recapture it from them and become lord of that capital. This he did (1094) after a terrible siege. He spent the remainder of his days there. His two daughters were married to the Infante of Navarre and the Count of Barcelona respectively. His remains were transferred to the monastery of San Pedro de Cardena near Burgos, where they now rest.

The exploits of El Cid form the subject of what is generally considered the oldest monument of Spanish literature. This is an epic poem of a little over 3700 lines as it has reached us (several hundred lines being missing), the author of which, as is not uncommon with works of those days, is unknown. The date of its composition has long been a disputed question. Many critics whose names must be mentioned with respect, among them Dozy and Ticknor, place it at the beginning of the thirteenth century; but today the best opinion places the poem a half-century earlier.
Among those who think it was written as early as the middle of the twelfth century are many eminent Spanish and foreign scholars, including Sanchez, the first editor of the poem, Capmany, Quintana, Gil y Zarate, Bouterwek, Sismondi, Schlegel, Huber, and Wolf. The learned Amador de los Rios, whose opinion carries great weight, thinks that the famous poem must have been written prior to 1157. Though based upon historical facts, the "Poema del Cid" is to a very large extent legendary. Its theme is twofold: the adventures of the exiled Cid and the mythical marriage of his two daughters to the Counts of Carrion. The first few pages are missing, and what remains opens abruptly with the banishment of the Cid by King Alfonso, and ends with a slight allusion to the hero's death. But the story it tells is not its chief claim to our consideration. The poem deserves to be read for its faithful pictures of the manners and customs of the day it represents. It is written with Homeric simplicity and in the language of the day, the language the Cid himself used, which was slowly divorcing itself from the Latin, but was still only half developed. The versification is rather crude and ill-sustained. The prevailing metre is the Alexandrine or fourteen-syllabled verse with a caesural pause after the eighth syllable; but the lines often run into sixteen or even twenty syllables, and sometimes stop at twelve or ten. This, however, may be partly due to careless copying.

The adventures of the Cid have furnished material for many dramatic writers, notably to Guillen de Castro, the eminent Valencian poet and dramatist of the early seventeenth century, whose masterpiece, "Las Mocedades del Cid", earned him whatever reputation he enjoyed outside of Spain. This latter work, in turn, furnished the basis for Corneille's brilliant tragedy, "Le Cid", which, according to Ticknor, did more than any other drama to determine for two centuries the character of the theatre all over the continent of Europe. Among other works dealing with the life and adventures of the Cid are:

- "La Legenda de las Mocedades de Rodrigo", or "La Crónica Rimada", as it is sometimes called. This work has been thought by some critics, among them so eminent an authority as Amador de los Ríos, to be even older than the "Poema del Cid".
- "La Crónica General ó Estoria de España", written by Alfonso the Wise.
- "La Crónica del Cid", the manuscript of which was found in the very place where the Cid lies buried, the monastery of San Pedro de Cardeña. Its author and the time of its appearance are unknown.
If you are a keen aquarist, you may have just brought home your new betta fish. When you are getting the tank ready, you will need to know the ideal betta temperature to ensure that the water is neither too warm nor too cold. Like most creatures, tropical fish have their preferences when it comes to temperature, and bettas are no different. They do not like extremes in temperature, so the water temperature needs to be carefully monitored.

What Is The Ideal Betta Temperature?

Bettas cannot survive in water that is too cold. Water at average room temperature is about 68° Fahrenheit. This is far too cold for bettas. The ideal betta temperature is 75°-76° Fahrenheit. At this temperature, your betta will be healthy and perfectly happy. He will eat well, sleep comfortably and swim around energetically.

How Can You Keep The Temperature Constant?

The only way to maintain the ideal temperature in your aquarium is by using a heater. The heater needs to be kept running 24 hours a day to keep the temperature at a constant level. An aquarium heater is made with a built-in thermostat. This turns off the element when the water reaches the desired temperature. As soon as the water starts to cool down, the thermostat kicks in, and the element is turned on once again. In this way, the temperature of the water is never allowed to drop below a certain point. It may fluctuate by only a few degrees, but the variation will be so limited that it will do no harm.

What Is The Lowest Temperature A Betta Can Tolerate?

Tropical fish like the betta always need warmer water. The betta originates from the slow-moving rivers, swamps, and marshlands of Southeast Asia. This tropical fish is native to countries like Thailand, Cambodia, Indonesia and Vietnam. In its native waters, the temperature never drops below 70°F, and the betta likes its aquarium conditions to mimic that. These little tropical fish thrive in water that is between 75°F and 78°F. As soon as the temperature of the water drops below 74°F, a betta will start to feel uncomfortable. If it drops below 70°F, it will not be merely uncomfortable; it will start getting ill. When the water is colder than 70°F, a betta fish will start to become lethargic. Its activity will decrease, and you will notice that it is spending more and more time in an inert position near the bottom of the tank. If the temperature drops below 68°F, your betta will most probably not survive the cold water. Its internal organs will start to malfunction, and its entire system will start shutting down.

It is important to check the water temperature in your aquarium regularly. Also, look out for signs that your betta may be too cold. The first indication is that the fish will start to become lethargic. Because the betta, like all fish, is a cold-blooded creature, it is unable to generate its own heat from within. It therefore needs to regulate its body temperature by absorbing heat from the surrounding water. If the water is too cold for it to absorb sufficient warmth, it will die.

What Is The Highest Temperature A Betta Can Tolerate?

Just as the betta cannot tolerate water that is too cold, so too it will not survive if the temperature rises above a certain point. While the ideal betta temperature is 75°-76° Fahrenheit, your betta will probably be okay as long as the temperature remains below 80°F. Once it reaches 80°F and above, your betta will start showing signs of poor tolerance to the higher temperature.
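Those thresholds can be pulled together into one quick reference. The little Python sketch below simply encodes the temperature bands quoted in this article; the function name and the exact band boundaries are my own choices for illustration, not something from an aquarium product or library.

```python
def classify_betta_temp(temp_f: float) -> str:
    """Classify a tank reading (degrees F) against the thresholds quoted in this article."""
    if temp_f < 68:
        return "critical: water this cold is usually fatal"
    if temp_f < 70:
        return "danger: the betta will start getting ill"
    if temp_f < 74:
        return "too cool: the betta will feel uncomfortable"
    if temp_f <= 80:
        return "ok: tolerable range (75-76 F is ideal)"
    if temp_f < 85:
        return "danger: too warm, oxygen is being depleted"
    return "critical: 85 F or higher is fatal"

print(classify_betta_temp(75.5))  # -> ok: tolerable range (75-76 F is ideal)
```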
The reasons for a betta's inability to tolerate water that is too warm are two-fold.

- The hotter the water, the lower the concentration of oxygen in the water. As the water temperature rises, the level of available oxygen in the water is depleted. This makes it difficult for the fish to breathe. If you notice that your betta is hovering close to the surface of the water, it may be struggling to breathe and trying desperately to gasp in more oxygen from above the surface. Betta fish are known as anabantoids. They have a unique organ called a labyrinth. This special organ enables them to breathe oxygen from the air above the surface of the water as well as underwater through their gills, as other fish do. When a betta is spending most of its time hovering close to the surface of the water, this could be an indication that the water is too warm and there is not enough oxygen in the water. The betta compensates for this by trying to use its labyrinth organ to inhale enough oxygen from above the surface of the water.
- Higher water temperature is likely to increase the metabolism of your betta fish. A raised metabolism will create a need for a higher level of oxygen. The fish will start to breathe very rapidly, trying to get enough oxygen to meet the needs of its faster metabolism. Higher water temperature directly causes an elevated metabolism, which could eventually kill your betta fish.

If the water becomes very hot, reaching 85°F or higher, your fish will die. It will literally start cooking.

Signs That Your Betta Temperature May Be Off Balance

If the water in your aquarium is becoming either too cold or too warm, you will probably start to notice certain changes in your fish's behavior. Initially, these changes might be quite subtle, and you might not be aware of them immediately. However, as the temperature either drops or rises, the effects of the change on your betta will become more pronounced, and you will be more likely to observe these strange and unusual behavior patterns. Some of the things to watch out for include:

- Frantic swimming around the tank. If your betta is swimming frantically around the tank and seems to be super-charged with energy, this may be the first sign that the betta's temperature is not right. His sudden burst of energy is probably going to burn out very quickly as other symptoms set in.
- Bumping into things in the tank. If your betta is swimming around and bumping into the sides of the tank, or the plants and logs, or bumping its nose along the bottom of the tank, this could be an indication that all is not well with the temperature of the water. A sudden drop or rise in temperature causes all the organs to malfunction, including the eyes. The change in temperature may impair your betta's vision.
- Hovering at the surface of the water. If your betta is constantly hovering at the surface of the water, this could be an indication that he is gasping for air. He will be using his labyrinth organ to try to inhale as much oxygen as possible from above the surface of the water, because the raised water temperature has depleted the oxygen supply in the water.
- Unusually lethargic behavior. The betta is usually quite a frisky little fish and is generally an energetic swimmer. If you observe that he has become somewhat lethargic and appears to be lacking in energy, this is a red flag. As the temperature of the water drops, so the betta's internal temperature will also start to drop. The betta copes with this by trying to conserve energy.
When his movement starts slowing down, this is often the first sign that the water is getting too cold for him.
- Lying on its side. Sometimes a betta will stop swimming and will lie inert on its side if the water is too cold.
- Hovering just above the substrate of the tank. If your betta temperature is getting too low, your fish may try to retain warmth by remaining as close as possible to the bottom of the tank.

How To Raise Your Betta Temperature

The best way to maintain the correct water temperature for your betta is by using a special water heater in your aquarium. An aquarium heater is thermostatically controlled, and the thermostat can be programmed to keep the water at the ideal temperature. If your heater breaks or malfunctions, and the water is becoming too cold for your betta, there are a few things you can do as a temporary measure to raise the temperature until you can get the heater repaired or replaced.

- Warm a thick towel by putting it in the dryer for a few minutes. Wrap the hot towel around the outside of your tank. The heat will be transferred from the towel, and the water should warm up slightly.
- Move the tank to a sunny position in your house. Place it in a room that gets full sun, preferably directly in front of the window. The heat of the sun will help to warm up the water.
- Wash out an empty plastic bottle. Wash the outside of the bottle very well with clean water. Fill the bottle with hot – not boiling – water, and place the bottle of hot water in your fish tank. The bottle will quite literally act like a 'hot water bottle' and help elevate the water temperature in the tank.
- Remove 5-10% of the water in the tank. Replace it with very warm – not boiling – water. The hot water should be added very gradually, by slowly pouring in a little at a time. Try to pour the hot water as far away from your fish as possible to avoid harming the fish.

What Can You Do If The Water Becomes Too Warm?

Occasionally your heater can malfunction, and the thermostat can stop working. This can result in the water becoming very hot. The most obvious solution to this problem is to turn off the water heater immediately. But you may need to take further steps to cool down the water. In extremely hot weather, if the outside temperature is very high, this can also raise the water temperature in your aquarium. If this happens, you will need to implement measures to cool the water down before the heat affects your fish. Any of the following methods should help to cool the water sufficiently:

- Remove the cover from the aquarium. Place a fan nearby and direct the flow of air onto the surface of the water. The fan will help cool air circulate above the tank, which will help to cool the water. If you remove the cover, ensure that the water level is not high enough for your fish to leap out of the tank. You might need to remove some of the water to keep the fish in the tank.
- A water aerator is usually used to help keep up the oxygen levels in the water. However, it can also be a useful tool to help cool the water in the tank if it starts getting too warm.
- If you have a light in your aquarium, turn it off for a few hours. Lights generate a certain amount of heat. By turning off the light, you will be removing this heat source, and the water will cool down as a result.
- Remove between 5-10% of the water, and replace it with cooler water. The cool water should be added very gradually to prevent the fish from becoming stressed by a sudden drastic change in tank conditions.
- Place a few ice cubes in a clean bag. Seal the bag well and place it gently in the tank, leaving it to float in the water. As the ice melts in the bag, the water in the tank will cool down.

These are some of the most important aspects relating to water temperature if you keep betta fish in a home aquarium. By following these guidelines, you should be able to keep your betta temperature stable and constant.
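One note on the partial water changes suggested above: a simple energy balance shows how far a 5-10% replacement can move the tank temperature, which helps you pick a sensible temperature for the replacement water. This is only a back-of-the-envelope sketch; it assumes complete mixing and ignores heat exchange with the room, and the example figures are mine, not from this article.

```python
def mixed_temp(tank_f: float, added_f: float, fraction: float) -> float:
    """Estimate the tank temperature after replacing `fraction` of the water.

    Simple weighted average: assumes complete mixing and equal specific heats.
    """
    return (1.0 - fraction) * tank_f + fraction * added_f

# Replacing 10% of a 70 F tank with 110 F water gives a gentle 4-degree rise:
print(mixed_temp(70.0, 110.0, 0.10))  # -> 74.0
```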
Browse through our frequently asked questions on photovoltaics below.

A battery is an electrical storage device (though the term can really cover any device that stores energy for later use). Batteries do not make electricity; they store it. As chemicals in the battery change, electrical energy is stored or released. In rechargeable batteries this process can be repeated many times. Batteries are not 100% efficient - some energy is lost as heat and chemical reactions when charging and discharging. If you use 1000 watt-hours from a battery, it might take 1200 watt-hours or more to fully recharge it. Slower charging and discharging rates are more efficient. A battery rated at 180 amp-hours over 6 hours might be rated at 220 AH at the 20-hour rate, and 260 AH at the 48-hour rate. Typical efficiency in a lead-acid battery is 85-95%; in alkaline and NiCad batteries it is about 65%.

Sulfation is the formation or deposit of lead sulfate on the surface and in the pores of the active material of the battery's lead plates. If the sulfation becomes excessive and forms large crystals on the plates, the battery will not operate efficiently and may not work at all. Common causes of battery sulfation are standing a long time in a discharged condition, operating at excessive temperatures, and prolonged under- or overcharging.

Batteries are divided in two ways: by application (what they are used for) and construction (how they are built). The major applications are automotive, marine, and deep-cycle. Deep-cycle includes solar electric (PV), backup power, and RV and boat "house" batteries. The major construction types are flooded (wet), gelled, and AGM (Absorbed Glass Mat). AGM batteries are also sometimes called "starved electrolyte" or "dry", because the fiberglass mat is only 95% saturated with sulfuric acid and there is no excess liquid. Flooded may be standard, with removable caps, or the so-called "maintenance free" (that means they are designed to die one week after the warranty runs out). All gelled batteries are sealed, and a few are "valve regulated", which means that a tiny valve keeps a slight positive pressure. Nearly all AGM batteries are sealed valve regulated (commonly referred to as "VRLA" - Valve Regulated Lead-Acid). Most valve-regulated batteries are under some pressure - 1 to 4 psi at sea level.

The lifespan of a battery will vary considerably with how it is used, how it is maintained and charged, temperature, and other factors.

In a series connection, the positive terminal of the first battery is connected to the negative terminal of the second battery, the positive terminal of the second is connected to the negative of the third, etc. The voltage of the assembled battery is the sum of the voltages of the individual batteries. So the batteries are connected: + to - to + to - to + to -, etc. The capacity of the battery is unchanged.

In a parallel connection, the positive terminal of the first battery is connected to the positive terminal of the second battery, the positive terminal of the second is connected to the positive of the third, etc., and the negative terminal of the first battery is connected to the negative terminal of the second battery, the negative terminal of the second is connected to the negative of the third, etc. So the batteries are connected: + to + to + and - to - to -. In this configuration, the capacity is the sum of the capacities of the individual batteries and the voltage is unchanged.

For example, if you take 5 6V 10AH batteries and connect them in series, you would end up with a battery array that is 30 Volts and 10AH.
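These two wiring rules (the parallel case is completed just below) reduce to a couple of lines of arithmetic. Here is a minimal Python sketch; the function names are my own, invented for illustration.

```python
def series(volts: float, amp_hours: float, count: int) -> tuple[float, float]:
    """Series string: voltages add, capacity (AH) is unchanged."""
    return volts * count, amp_hours

def parallel(volts: float, amp_hours: float, count: int) -> tuple[float, float]:
    """Parallel bank: voltage is unchanged, capacities (AH) add."""
    return volts, amp_hours * count

print(series(6, 10, 5))    # -> (30, 10): five 6V 10AH batteries in series
print(parallel(6, 10, 5))  # -> (6, 50): the same five batteries in parallel
```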
If you connect the batteries in parallel, you would end up with a battery array that is 6 Volts and 50AH. By the way, this is how ordinary auto batteries are made: six 2-volt cells are put in series to give a 12V battery, and the six cells are just enclosed in one case. Many NiCad batteries are made the same way.

Starting batteries (sometimes called SLI, for starting, lighting, ignition) are commonly used to start and run engines. Engine starters need a very large starting current for a very short time. Starting batteries have a large number of thin plates for maximum surface area. The plates are composed of a lead "sponge", similar in appearance to a very fine foam sponge. This gives a very large surface area, but if deep cycled, this sponge will quickly be consumed and fall to the bottom of the cells. Automotive batteries will generally fail after 30-150 deep cycles if deep cycled, while they may last for thousands of cycles in normal starting use (2-5% discharge).

Deep-cycle batteries are designed to be discharged down as much as 80% time after time, and have much thicker plates than a standard automotive battery.

Marine batteries are considered a "hybrid" battery which actually falls between the starting and deep-cycle batteries. Marine batteries are usually rated using "MCA" or marine cranking amps, which is rated at 32 degrees F, while CCA is rated at zero degrees F. (For more information on CCA, CA and MCA, please see below.)

Sealed batteries are known as maintenance-free batteries. They are made with vents that (usually) cannot be removed. A standard auto or marine maintenance-free battery is sealed, but not fully leak-proof. Sealed batteries are not totally sealed, since all batteries must allow gas to vent during charging. There are sealed lead acid (SLA) batteries that are non-spillable. For information on our SLA batteries, see AGM and Gel batteries below.

The newer type of sealed, non-spillable, maintenance-free, valve-regulated battery uses "Absorbed Glass Mats", or AGM separators, between the plates. This is a very fine fiber boron-silicate glass mat. These types of batteries have all the advantages of gelled batteries, but can take much more abuse. They are also called "starved electrolyte." Just like the gel batteries, the AGM battery will not leak acid if broken.

The advantages of AGM batteries are that they need no maintenance, are sealed against fumes, hydrogen, and leakage, will not spill even if broken, and can survive most freezes. AGM batteries are "recombinant" - which means the oxygen and hydrogen recombine inside the battery. They use gas-phase transfer of oxygen to the negative plates to recombine the gases back into water while charging and prevent the loss of water through electrolysis. The recombining is typically 99+% efficient, so almost no water is lost. Charging voltages for most AGM batteries are the same as for a standard type battery, so there is no need for special charging adjustments or problems with incompatible chargers or charge controls. Since the internal resistance is extremely low, there is almost no heating of the battery even under heavy charge and discharge currents. AGM batteries have a very low self-discharge rate (from 1% to 3% per month), so they can sit in storage for much longer periods without charging. The plates in AGMs are tightly packed and rigidly mounted, and will withstand shock and vibration better than any standard battery.
While several sources state that you can mix AGMs with regular flooded cells, I would not recommend it (gel cells have sufficiently different set points to make them totally incompatible with flooded cells or AGMs). Ideally, your house bank would consist of a number of identical batteries, wired in series and/or parallel, that were manufactured on the same day.

There are many attributes that determine the true cost of a battery technology. Much like incandescent versus compact fluorescent light bulbs, your choice of battery technology may cost you less up front but will cost you more over the life of the product. For example, the faster, more efficient bulk charging that AGMs and gel-cells allow will lead to reduced wear and tear on your charge source (engine, gen-set, etc.). More on all that further down. Suffice it to say that I do not believe the T105 to be a bargain.

A gel battery design is typically a modification of the standard lead acid automotive or marine battery. A gelling agent is added to the electrolyte to reduce movement inside the battery case. Many gel batteries also use one-way valves in place of open vents; this helps the normal internal gases to recombine back into water in the battery, reducing gassing. "Gel cell" batteries are non-spillable even if they are broken. Gel cells must be charged at a lower voltage, and at a slower rate (C/20), than flooded or AGM batteries to prevent excess gas from damaging the cells. Fast charging them on a conventional automotive charger may permanently damage a gel battery.

The reserve capacity of a battery is defined as the number of minutes that it can support a 25 ampere load at 80°F until its terminal voltage drops to 1.75 volts per cell, or 10.50 volts for a 12V battery. Thus a 12V battery that has a reserve capacity rating of 100 signifies that it can be discharged at 25 amps for 100 minutes at 80°F before its voltage drops to 10.50 volts.

The cold cranking ampere (CCA) rating refers to the number of amperes a battery can support for 30 seconds at a temperature of 0°F until the battery voltage drops to 1.20 volts per cell, or 7.20 volts for a 12V battery. Thus, a 12V battery that carries a rating of 600 CCA tells us that the battery will provide 600 amperes for 30 seconds at 0°F before the voltage falls to 7.20V.

The marine cranking ampere (MCA) rating refers to the number of amperes a battery can support for 30 seconds at a temperature of 32°F until the battery voltage drops to 1.20 volts per cell, or 7.20 volts for a 12V battery. Thus, a 12V battery that carries an MCA rating of 600 tells us that the battery will provide 600 amperes for 30 seconds at 32°F before the voltage falls to 7.20V. Note that the MCA is sometimes referred to as the cranking amperes or CA. The MCA rating of a battery is very similar to the CCA rating; the only difference is that while the CCA is measured at a temperature of 0°F, the MCA is measured at 32°F. All other requirements are the same - the ampere draw is for 30 seconds and the end-of-discharge voltage in both cases is 1.20 volts per cell.

The full form of HCA is hot cranking amperes. It is the same thing as the MCA or the CA or the CCA, except that the temperature at which the test is conducted is 80°F.

Unlike CCA and MCA, the pulse cranking ampere (PCA) rating does not have an "official" definition; however, we believe that for true engine start purposes, a 30 second discharge is unrealistic. With that in mind, the PCA is a very short duration (typically about 3 seconds) high rate discharge.
Because the discharge is for such a short time, it is more like a pulse.

An amp-hour is one amp for one hour, or 10 amps for 1/10 of an hour, and so forth. It is amps X hours. If you have something that pulls 20 amps and you use it for 20 minutes, then the amp-hours used would be 20 (amps) X .333 (hours), or 6.67 AH. The accepted AH rating time period for batteries used in solar electric and backup power systems (and for nearly all deep cycle batteries) is the "20-hour rate". This means that the battery is discharged down to 10.5 volts over a 20-hour period while the total actual amp-hours it supplies is measured. Sometimes ratings at the 6-hour rate and 100-hour rate are also given for comparison and for different applications. The 6-hour rate is often used for industrial batteries, as that is a typical daily duty cycle. Sometimes the 100-hour rate is given just to make the battery look better than it really is, but it is also useful for figuring battery capacity for long-term backup amp-hour requirements.

In a partially discharged state, the electrolyte in a lead acid battery may freeze. At a 40% state of charge, the electrolyte will freeze if the temperature reaches approximately -16.0°F. The freezing temperature of the electrolyte in a fully charged battery is -92.0°F.

The state of charge of a lead acid battery is most accurately determined by measuring the specific gravity of the electrolyte. This is done with a hydrometer. Battery voltage also indicates the level of charge when measured in an open circuit condition. This should be done with a voltmeter. For an accurate voltage reading, the battery should also be allowed to rest for a period sufficient to let the voltage stabilize.

Lead acid batteries do not develop any type of memory (refer to the question that addresses sulfation above).

All batteries, regardless of their chemistry, self-discharge. The rate of self-discharge depends both on the type of battery and the storage temperature the batteries are exposed to. However, for a good estimate, wet flooded deep cycle batteries self-discharge approximately 4% per week at 80°F.

When charging lead acid batteries, the temperature should not exceed 120°F. At this point the battery should be taken off charge and allowed to cool before resuming the charge process.

Lead acid batteries are 100% recyclable. Lead is the most recycled metal in the world today. The plastic containers and covers of old batteries are neutralized, reground and used in the manufacture of new battery cases. The electrolyte can be processed for recycled waste-water uses. In some cases, the electrolyte is cleaned and reprocessed and sold as battery-grade electrolyte. In other instances, the sulfate content is removed as ammonia sulfate and used in fertilizers. The separators are often used as a fuel source for the recycling process. Old batteries may be returned to CLAMORE SOLAR, an automotive service station, a battery manufacturer, or other authorized collection centers for recycling.
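The amp-hour arithmetic above, and the reserve-capacity rating defined earlier, can be checked with a short sketch. The conversion deliberately ignores rate effects, which the 6-hour/20-hour/100-hour discussion shows are real, so treat it as a rough figure valid only near the 25-amp rate; the function names are mine, for illustration.

```python
def amp_hours(amps: float, hours: float) -> float:
    """Amp-hours are simply amps multiplied by hours."""
    return amps * hours

def reserve_capacity_to_ah(rc_minutes: float) -> float:
    """Convert a reserve-capacity rating (minutes at 25 A) to approximate AH.

    Rate effects are ignored, so this is only a rough figure at the 25 A rate.
    """
    return amp_hours(25.0, rc_minutes / 60.0)

print(round(amp_hours(20, 20 / 60), 2))       # 20 A for 20 minutes -> 6.67 AH
print(round(reserve_capacity_to_ah(100), 1))  # RC 100 -> 41.7 AH at the 25 A rate
```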
by Eliezer Barkai (Burak), from Yalkut Vohlyn, Issue #1, April 1945 [Translation by Avrom Bendavid-Val, 1999]

The town of Trochenbrod, about 30 kilometers northeast of Lutsk and around 15-20 kilometers from the main road and railway line connecting Lutsk and Rovno, was also called Sofiyevka, after the name of the Russian princess Sofia, who donated her land in order to found a Jewish settlement. And thus the town was established in the year 1835 as a Jewish agricultural settlement. Over time it became a town, though it remained agricultural in character until its last day. The Kivertzi-Rovno railway line [a segment of the Warsaw-Kiev railway line] was originally planned to pass near Trochenbrod, according to the town's elders, but the inhabitants of Trochenbrod objected because they feared their cattle grazing near the tracks would be injured, and so the authorities changed the plans. As a result Sofiyevka, together with its neighbor village Ignatovka [Lozisht], was always cut off from the main routes of travel in a region of Ukrainian villages.

In 1889 there were about 1200 people in 235 families in Trochenbrod. In 1897 its population reached 1580 people. In the following 40 years the town continued to grow, and in 1938 it contained 3000 Jews, and not a single Gentile. To keep the oven fires burning on Shabbat, for example, Gentiles would come from the adjacent villages, and their payment would be a piece of white challah. Even the letter carrier in the town was Jewish, although the postmaster was a non-Jew appointed by the Russian and later the Polish regime. The inhabitants of Sofiyevka, the Jews, engaged in agriculture, in milk production, and in tanning [leather goods], and they were known throughout the region as industrious and skilled farmers. The children studied in cheder, and from there many went elsewhere to study in yeshivas. Many among them also excelled in general studies. The whole area of Trochenbrod was 1,728 acres ["decyatins"], and because it was not possible to enlarge and develop the town as it should have been, many were forced to emigrate to different countries, to North and South America and also to Argentina, and they engaged even in those places in agriculture, with great success.

In the years of the war (1914-1918) Trochenbrod suffered greatly. The front was close to the town -- about seven kilometers away -- and forced labor was imposed on the townspeople by the Austro-Hungarian army that camped in the area for nine months. The army would distribute to the people small portions of bread and salt, and also the innards and feet of the cattle that were slaughtered in the local slaughterhouse by Jewish shochets who also worked for the benefit of the army.

With the outbreak of the Russian revolution the youth of the town awoke and began to undertake Zionist activity: they collected money for various funds, and they established a Hebrew school and several other public institutions, but with the Bolshevik takeover their work was interfered with, and it suffered mainly during the period of the change of government. During several months Trochenbrod was in no-man's-land between two fighting forces -- the Poles on one side and the Bolsheviks on the other -- and from time to time one or the other would enter the town and cause the townspeople trouble.
From the cities of Kovel, Rozische, and Lutsk -- which were already in the hands of the Poles -- merchants would come to Trochenbrod and sell various goods to the townspeople in exchange for Russian gold pieces, or sell them to merchants from Rovno and other cities that were under Bolshevik control. This commerce was carried on the entire summer of 1919. Some of these merchants lost their lives on the roads, which were in confusion, at the hands of robbers ["leestim, leesters"] that would lie in ambush in the forests to assault or murder them. People from the town would go out in groups to look for the bodies of those who fell this way in order to give them a Jewish burial. In the Trochenbrod cemetery there was a special section designated for Jews from elsewhere murdered in the area.

With the Polish conquest of Trochenbrod, Jewish national [i.e., Hebrew, Zionist] activity resumed. The youth began to engage in practical Zionist deeds with gusto: they collected money for the national funds, and they studied Hebrew in the Hebrew school that was headed by Rabbi Eliyahu-David and Yitzhak Shuster, and also in night classes. The study of Hebrew was one of the basic principles of Zionist activity. In the period of the Fourth Aliya a good number of Jews [from Trochenbrod] went to Eretz Yisrael, and those who didn't manage to get certificates [British permits to immigrate to Palestine] tried their luck in different ways (the illegal immigrant ships Parita and Atlantic contained people from Trochenbrod). In the end many of the youth of Trochenbrod went to Vilna in hopes of somehow getting from there to Eretz Yisrael. Only seven of them succeeded in overcoming the many obstacles and arriving in Eretz Yisrael, and the remainder -- what became of them is unknown to this day [it's assumed they stayed in Russian territory].

There were seven synagogues in the town, three of them large ones that encompassed most of the worshipers; the other four -- Hasidic houses of study -- were named after the great Master Teacher Rabbis of Matrisk, Olyka, Berezne, and Stepin. But when the Matrisker Rabbi visited the town, the Hasidim from the other houses of study would also come to hear Torah from his lips. The people of the town would honor any "good Jew" (that's why they referred to the synagogues by their names).

Rabbi Baruch-Ze'ev Beigel was Chief Rabbi in Trochenbrod for about 30 years, until the war. He was very learned, humble, simple in his ways, and intelligent, and despite that he did not manage to win the hearts of the Trochenbrod Jews, who also held high a second rabbi, Rabbi Moshe Beider of the Berezner Hasidim, who was likewise a great scholar and very educated in matters of the wider world. His origins were with the Zionists; he was called the Berezner Rav. During the years of the First World War Rav Beider developed a good relationship with the Austrian commandant with authority over Trochenbrod. He used his influence to obtain the freedom of the townspeople from forced labor on Shabbat and holidays, and a lighter work load at other times. During the period of the Austrian occupation Rav Beider continued to teach the children and cared for the youth of the town in general. In 1917 a typhus epidemic broke out in the town that took many lives, including that of Rav Beider. But his memory remained blessed among the Jews of that place. After his death the various factions compromised and together elevated Rabbi Gershon Weisman to the position of Chief Rabbi.
Rabbi Gershon Weisman was the son of Rav Haim Weisman, who served as Cohen on special occasions in the town, and was the son-in-law of Rav Baruch-Ze'ev Beigel. Rabbi Gershon Weisman was a unique individual who preferred to pray in accordance with traditions of the Karliner Hasidim, extreme and fanatical in his manner and in his life. When the Russians conquered the town in 1940 the local Communists wanted to rid themselves of such a fanatic -- they accused him of secret trade in salt and exiled him to Siberia.

From among famous people from Trochenbrod:

Rav Yehezkiel Potash, the permanent "Starusta" [Elder] (head of the town chosen by and beholden to the government) in the days of the Czarist government. He was a scholarly and learned man, and served the people of the town well with his honesty and intelligence. In 1922 he left the town and joined his sons in America.

Rabbi Avraham-Yonah Drezner, a trusted businessman and permanent representative to institutions in the town from the region of Slishetz.

Hirsch Kantor, a comedian who was master of his profession and very talented. At weddings and other gatherings he would bring joy to everyone with his rhymes and his cleverness. When he left this profession he examined his various characteristics and decided to become a merchant. He was also a public figure in the town and was chairman of Keren Hayesod there. He died without children in 1924.

Mendel Apteker -- the pharmacist Mendel Yelner -- was also among the important people of the town. A veteran Zionist activist from the days of "Havat Zion" [the earliest Zionist organization], he worked vigorously on behalf of the Zionist funds. In the days of the war he would help the people of the town obtain medical help. He caught typhus and died. His sons followed in his footsteps and stood among the tradespeople and Zionists of the place.

Moshe Hirsch the Scholar ran a cheder in the town, and also served as chazan, and he too had great talent as a comedian. His jokes were published in the newspapers "Heint" and "Moment," which pleased him a great deal.

Trochenbrod produced two high-level communists: one was Motel Schwartz, who was a well-known commissar in the Odessa fleet, and the other was Yaakov Borak, who was an admiral on a Russian warship and a university graduate. In the period of the Soviet purges Schwartz disappeared, and Borak drowned with his ship near Kronstadt in the war between the Bolsheviks and the Whites. It's noteworthy that these two studied many years in the Slobodka yeshiva, and Motel Schwartz was even a certified rabbi ["smichut"].

Remembered here are a few sons of the town, activists in the field of life and spreaders of Hebrew culture in the town and the surrounding area:

Eliyahu David Schuster, who worked for many years as a teacher in the town of Rozische.

Zvi Drezner, who finished the teacher's course in Grodne, excelled, and served as a teacher in Novomias, near Warsaw.

Yitzhak Schuster, who went to Vlodava in Poland and established a Hebrew school there.

Yisrael Beider, the son of the Rav Beider mentioned above, who was a teacher in nearby Olyka, and after that moved to Mezerich near Brisk, and continued his literary work.

Yitzhak Aronski, who worked as a young and talented journalist and was a feature writer published widely in Polish Jewish newspapers, who helped establish "The Vohlyner Shtima," which was published in Rovno, and who worked hard to encourage the reading of newspapers and books, and founded a library in the town.
The young Motel Blitshtein, who was among the pioneer leaders of the General Zionists in Poland, and who came from Warsaw to say goodbye to his mother before leaving to make his way to Eretz Yisrael, and didn't meet up with his comrades in Vilna in time. Tzvi Klapko, Yisrael Shpulman, and others who labored in their time and devoted all their energy on behalf of their town, so that it wouldn't be spiritually choked off by the persecution of several powerful scoundrels under the cover of the "B. B. Club" -- the party of the Polish government at the time -- who did deeds that did not add to the honor of the simple and innocent Jews of the town.

The Jews of Trochenbrod were brave and determined and tended not to let themselves be bullied. There was an incident in 1925 in which a decree was imposed on them that they could no longer graze their flocks in the pasture lands of Prince Radziwill. The decree was issued by the manager of the Prince's land, and his forest wardens, who were people of Balakhovich (the known tyrant) ["Balakhovchi"], saw to it that the decree would hold. But the people of Trochenbrod didn't accept the decree, and as a result big fights broke out between the wardens and the Jews. When the wardens saw that it would not be easy, that the people of Trochenbrod were bold and resolute, the matter was brought before Prince Janush Radziwill, and he ordered a cancellation of the decree and a return of pasturing rights in his forests to the Jews of the town.

According to the information that has reached Eretz Yisrael, calamity befell the Jews of Trochenbrod when their Jewish brothers in other cities and towns in Vohlyn were in the hands of the German oppressors. The Jews were transported to the town of Trostjanetz, 12 kilometers from Trochenbrod, and were murdered there. However, several of them escaped to the forest and joined the partisans, who fought the Nazis with animal ferocity and caused them heavy losses. The town went up in flames and was destroyed, and there remains there not a single Jewish soul. The partisans from Trochenbrod and others who escaped, who at the end of 1944 numbered 33, were found mostly in Kivertzi near Lutsk. - Eliezer Barkai (Burak)

[The news that Eliezer Barkai had when he wrote this was early and sketchy. The Jews of Trochenbrod were murdered at Yaromel, only a kilometer or two from Trochenbrod - in fact, Jews from other places in the area were brought there to be murdered also. According to Nahum Kohn in his book "A Voice from the Forest" and villagers in Domashiv, the closest Ukrainian village, empty houses did remain in Trochenbrod, and were looted and ultimately dismantled by people from villages and towns in the area. Also, there are other stories of how the town was founded and how it came to be named Sofiyevka.]
He would begin, for instance, by endeavoring to remove his shoes, but, after vainly trying to bring his will in subjection to his desire, would desist and turn his attention to the task of taking off his coat, with no better success. After an hour or two spent in this way, to no purpose, he would succeed, generally, in getting his clothes off, but quite often he was obliged to summon assistance. In the morning a similar experience was certain to occur. Frequently, as he told me, he would sit for half an hour with his stockings in his hands, unable to determine which one to put on first. Legrand du Saulle¹ has very thoroughly described such cases under the name of "Folie du doute," and they will subsequently engage our attention more fully.

¹ "La folie du doute (avec delire du toucher)," Paris, 1875.

In certain of the neuroses, notably in hysteria and insanity, this inability to exert the power of the will is a prominent feature. In the latter condition the will is often exercised against the desires and the whole system of thought of the individual, producing what is known as "morbid impulse." In these cases, the will, as it were, breaks loose from the intellect and causes the perpetration of acts of immorality or violence. Even within the limits of mental health some persons are noted for the strength of the will, and others for its feebleness. The influence of certain narcotics and stimulants in weakening the power of the will is a well-known fact. Among them, opium and alcohol are especially to be noted. The former, in most cases, produces its effect upon the will of the individual without in the slightest degree impairing the intellect. The latter, however, seems to have a more complex action, for it not only diminishes the will-power and places its subject under the control of others, but it prompts to the perpetration of acts of violence, the tendency to which the individual is unable to resist.

The will is also suspended in reverie, in somnambulism, and in the induced condition known as hypnotism. In this last-named state the subject's will is that of some other person; he does as he is told, and his will, and even his perceptions, are under the complete control of the operator. In the normal state of an individual the will has no power over the perceptions. He cannot, for instance, by any effort of his will, alter his perception of color or form, or change the impression which any one of the sensory organs produces in the perceptional centre.

Like others of the mental faculties, the will-power is greatly developed by education. While the will is certainly located in the brain, it is by no means certain that in some of the lower animals, at least, it is not also situated in the spinal cord. The acts which are witnessed in the frog after the head has been cut off, and with it, of course, the entire encephalon, are clearly volitional in character, being adapted to the end in view, and such as the animal would perform in its unmutilated state. But, while the brain is the chief, if not the only, seat of the will in man, we have no data by which we are authorized to localize it in any particular part of this organ. Probably each motor and ideational centre is, at the same time, also volitional; but even this is merely an inference. By certain French physiologists it has been located in the pons Varolii, but without, in my opinion, sufficient warrant from facts.
An idea of the relation of the will to perception and intellect and a volitional act will be obtained from the accompanying diagram (Fig. 4), in which a is the organ of sense; b, the nerve of transmission; c, the perceptive ganglion; d, fibres of transmission to e, the ideational ganglion; f, communicating fibres with g, the volitional ganglion; h, the efferent nerve communicating with i, a muscle. An image of a blow about to fall on the finger is formed on the eye, a; the image is transmitted by the optic nerve, b, to the perceptive ganglion, c, where it becomes a perception; from c it passes through the white fibres of the brain, d, to e, an ideational centre, where it becomes an idea, being comprehended, and the danger to the finger realized. At once the knowledge excites an impulse either in the ideational centre or in a contiguous one, g, through the intermediation of brain fibres, f; and this impulse—a volition—passes through the nerve h to the muscle i, and the hand is immediately withdrawn.

The mind, therefore, as before stated, is a compound force evolved by the brain—or, rather, a collection of several forces—and its elements are perception, intellect, emotion, and will. The sun, likewise, evolves a compound force, and its elements are light, heat, and actinism. One of these forces—light—is made up of several primary colors; and the intellect of man, one of the mental forces, is composed of faculties. It would be easy to pursue the analogy, but enough has been said to indicate how closely the relationship between brain and mind is that of matter and force.

It is to be regretted that the present state of cerebral anatomy and physiology is such as to prevent our making any precise localizations of the several forces and faculties which go to make up the mind. I have only ventured to do that in a single instance—the optic thalamus as a centre for perception—and even that is questioned by several eminent investigators. The evidence, however, appears to me so explicit on this point that I do not see how it is to be questioned.*

* For the evidence serving to establish the matter in question, reference is made to Magendie, "Leçons sur le système nerveux," t. i, p. 103, et seq.; Luys, "Recherches sur le système nerveux," pp. 198, 344, 346, Paris, 1865; Ritti, "Théorie physiologique de l'hallucination," p. 37, Paris, 1874; Fournié, "Recherches experimentales sur le fonctionnement du cerveau," Paris, 1873; also, a memoir by the writer, entitled "Thalamic Epilepsy," in Neurological Contributions, No. 3, p. 1, New York, 1881, in which additional facts are submitted.

Much has been done by the labors of Broca, Fritsch and Hitzig, Nothnagel, Meynert, Ferrier, and others, in the direction of the localization of brain functions, but it has been almost entirely confined to the determination of the centres for speech and for motor impulses. Gall, Spurzheim, Combe, and others made honest attempts to found the science of phrenology, and, if their localizations of the various faculties of the mind—perceptional, intellectual, emotional, and volitional—had been established, we should have as complete a knowledge of psychological topography as could be desired; but they built on insufficient data, and, as a consequence, phrenology as a science does not exist at the present time. We know, however, that the gray matter of the brain originates mental operations, and that possibly the gray matter of the spinal cord and of the sympathetic
system supplements the process, and, under certain circumstances, especially in the lower animals, may, to a considerable extent, take its place. We know, also, that the cortical substance of the brain is of far greater importance in the evolution of mind than any other portion of the nervous system, and that it is here that experimentation and other methods of investigation have the greatest prospect of obtaining positive results. It is certainly established that the brain is not a single organ, but consists of a congeries of organs with different functions. Owing to this fact of our ignorance of the relation existing between the faculties of the mind and the different parts of the brain, and our consequent inability to construct a positive system of cerebral physiology, it is equally beyond our power to propose a classification of the phenomena of insanity based upon morbid anatomy and pathology. We are, therefore, driven to either a psychological or a clinical arrangement, or such a combination of the two as will best serve the purposes of study, till such time as we may become so thoroughly acquainted with the anatomical structure of the brain and its physiology as will admit of a more scientific system.

GENERAL REMARKS ON THE MENTAL AND PHYSICAL CONDITIONS INHERENT IN THE INDIVIDUAL WHICH INFLUENCE THE ACTION OF THE MIND.

In individuals whose brains are well-formed, free from structural changes, and are nourished with a due supply—neither excessive nor deficient—of healthy blood, the perception, the intellect, the emotions, and the will act in a manner which within certain limits is common to mankind in general. Slight changes in the structure or nutrition of the brain induce corresponding changes in the mind as a whole, or in some one or more of its parts or faculties, while profound alterations are accompanied by more severe and extensive mental disturbances. As no two brains are precisely alike, so no two persons are precisely alike in their mental processes. The argument, therefore, that if the mind resulted from the brain it would be the same in each individual instance, is simply ridiculous, and is made by those who have no conception of the subject of which they write.

Thus, M. Simonin,¹ one of the most recent of the antiphysiological psychologists, says: "If thought is secreted or produced exclusively by a material organ, this secretion ought to have a uniform character, and ought to be always identical with itself, as are other secretions, as the gastric juice secreted by the stomach, the pancreatic juice by the pancreas, etc. How is it, therefore, that this cerebral secretion, which ought always to be identical with itself, as are the secretions of other organic materials, can produce such systems of thought, such calculations, such sublime arrangements, such speculations of the mind as are found in the works of Aristotle, Leibnitz, Lavoisier, Humboldt, Cuvier, Arago, Agassiz, etc.?"

To this absurd question I would reply by remarking that, if M. Simonin's brain had been exactly like that of Aristotle, his thoughts would also have been exactly like Aristotle's, when evolved by like causes acting under like circumstances. But as M. Simonin's brain is certainly very different from that of the Greek philosopher, so also is the product of his brain different. And I would say, further, that M. Simonin's assumption that the gastric juice and other secretions are alike in all men is as erroneous as are most of the other views contained in his book.
No two persons ever lived in whom any one secretion possessed exactly the same composition in each, and hence it is that one man will digest with impunity things which another man's stomach instantly rejects. If M. Simonin has studied cerebral anatomy, and has ever compared two brains—and, being a psychologist whose faith is stronger than his love for facts, he probably disdains any such proofs—he has certainly discovered that there is as much dissimilarity between them as there is between any two peach-trees. How,

¹ "Histoire de la psychologie," etc., Paris, 1879, p. 391.
Why The Big Chill made me hot under the collar.

You probably have heard during the past dozen years of the “Ocean Conveyor” and how increased melting of Greenland glaciers could affect this “conveyor” in such a way that the Gulf Stream would stop, thus giving the UK the same climate as Alaska within the next twenty years, i.e. by 2023. That is utter balderdash. The Gulf Stream can’t stop. Evidence shows that the clockwise circulation of warm water has not stopped once in at least the past 30,000 years. Having said that, I’ll explain later how a sudden change in climate affecting NW Europe may occur soon due to the bi-stable nature of the North Atlantic current system.

So where did this misinformation come from? I first encountered it whilst watching the BBC-tv Horizon programme entitled The Big Chill on 13th November 2003; it was unlucky 13 for anyone unfortunate enough to watch it, as they could well have been taken in by it. For the record, here is the transcript of the broadcast: http://www.bbc.co.uk/science/horizon/2003/bigchilltrans.shtml

Horizon had become a sensationalist programme, not really worth watching, long before this edition. However, in this case, it seemed to me that much of the blame could be laid at the door of the scientists themselves. Of course, this could all be due to the film-makers taking things out of context, but it’s hard for me to believe that, given some of the claims made during the programme. Prof Richard Alley found from Greenland ice cores that “temperatures could drop suddenly and catastrophically” and said that “This flabbergasted us, I think this flabbergasted a lot of people.” OK, count me as one of the unflabbergasted. Why? Because I’d discovered this over thirty years before the programme was first broadcast. How? By studying gases or heavy water in ice cores, or foraminifera in cores from the ocean bed? No. I read it in a book. I confess it; I’m lazy and will always take the easy way out if I see one.

The book wot I read.

Unfortunately, I can’t remember the name of the book that I read in the late sixties. I think it was one that dealt with different types of ice and their properties. In one section, the author explained why ice is slippery, and that it is not due to the commonly-held theory that pressure on the ice melts it and produces a lubricating layer of water; instead, the molecular structure of the ice at its surface is responsible. I noticed an item in last year’s New Scientist reporting that this has been discovered again.

In the book, the author explained how the current system in the North Atlantic is bi-stable, meaning that it has two stable states and can flip suddenly from one to the other. Once it has flipped, it will remain in the new state for a long time. He likened this bi-stable nature to that of a pencil standing on end; if no external force acts on it, it will remain in that position but, if nudged, it will fall onto its side. I’m not sure whether this is a perfect analogy for the North Atlantic, as the force required to move the pencil from its new stable state back to its original one is much greater than that required to displace it in the first place. I suppose that could be true for the North Atlantic too: one current system only requires a little nudge to flip it, but a larger force is required to tip it back.
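To make the pencil picture concrete, here is a toy sketch of a bi-stable switch with asymmetric thresholds. Everything in it, the state names, the threshold values, the function itself, is invented purely for illustration; it is not an ocean model, just the hysteresis idea in code:

```python
# A toy bi-stable "switch" with asymmetric thresholds, loosely
# analogous to the pencil analogy above. All numbers are made up
# for illustration; this is not an ocean model.

def step(state, forcing, flip_down=-1.0, flip_up=2.0):
    """Return the new state after applying a forcing nudge.

    The system keeps its current state unless the forcing crosses the
    relevant threshold. Note the asymmetry: restoring the 'warm' state
    (flip_up) takes a much bigger push than the nudge that knocked the
    system into the 'cold' state (flip_down).
    """
    if state == "warm" and forcing <= flip_down:
        return "cold"
    if state == "cold" and forcing >= flip_up:
        return "warm"
    return state

state = "warm"
for forcing in [0.0, -0.5, -1.2, 0.0, 1.5, 2.5, 0.0]:
    state = step(state, forcing)
    print(f"forcing {forcing:+.1f} -> {state}")
```

Run it and the state flips to “cold” after a small negative nudge, stays cold through moderate positive forcing, and only returns to “warm” once the forcing exceeds the much larger positive threshold.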
The discovery related in the anonymous book was that the North Atlantic Drift (NAD) can suddenly switch from being a warm current breaking off from the Gulf Stream circulation to a cold one sourced by the Labrador Current. The author explained that these flips to a cold NAD were correlated with sudden changes to a cold climate in NW Europe. He did not know what caused the NAD to flip but thought it might be due to a slowing of the Gulf Stream, perhaps as a result of a weakening of the sub-tropical high.

The one good point about the Horizon programme was that a solution to the above problem seemed to have been provided, namely a weakening or cessation of the deep conveyor due to increased melting of the Greenland ice cap and a consequent lowering of salinity. This would then lead to the slowing of the Gulf Stream, the subsequent failure of the NAD to reach escape velocity, and its replacement by an extension of the Labrador Current.

A light-bulb moment?

However, a couple of weeks ago, a doubt about all this suddenly popped into my mind. I didn’t really buy into the idea of the warm NAD suddenly stopping due to a slowing of the Gulf Stream and, I suppose, deep down I’d always had a nagging unease about it. Along with the doubt came a possible alternative explanation which, if true, would explain how the NAD can suddenly flip. The chart below shows the present current circulation [tempted to write “current current” there] with a warm North Atlantic Drift (NAD) breaking off from the Gulf Stream circulation and passing to the west of the UK, helping to keep the country relatively warm considering the latitude.

Now, another problem I had with the Horizon programme was the location specified for the sinking of the warm water that provides the source for the conveyor. As I recall, the map showed the main warm-water sink occurring to the south of Iceland, but that went against all I’d learnt some thirty-five years earlier. As I remember it, there was one sink in the gyre NE of Iceland and another probable one, perhaps two, in gyre(s) north of Jan Mayen. The map in the Wikipedia article here https://en.wikipedia.org/wiki/Thermohaline_circulation although idealised, is I think a little closer to the mark, though it shows a sink off Labrador as being for warm water. As I remember it, the main sink in that area is a cold-water one, where the cold Labrador Current sinks below the Gulf Stream, as I’ve indicated on the following map.

Now, how do the sinks behave differently? Who is the densest of them all?

For those east of Greenland and others further north, the warm water is more saline than the cold water. Although the temperature difference acts in the opposite way, making the density of the warm water somewhat less than it would have been without that difference, it’s still insufficient to make the warm water lighter than the cold, so it sinks below it. There is also some mixing occurring in the gyres, so heat and salt are exchanged and some cold water also sinks. The situation off the Grand Banks is somewhat different. Here, although the cold water of the Labrador Current is less saline than the Gulf Stream, the temperature contrast is greater and is sufficient to make the warm water lighter than the cold; hence the Labrador Current largely sinks off the Banks beneath the Gulf Stream. Another consideration is that the Labrador Current has travelled a long way from its Greenland source and has had its salinity raised through admixture with saltier currents.
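The density bookkeeping in the last paragraph is easy to make quantitative with a linearised equation of state. The sketch below is my own addition, not the author's: the expansion and contraction coefficients are typical textbook values, and the temperature/salinity pairs are illustrative guesses rather than measurements; a serious calculation would use the full nonlinear equation of state for seawater.

```python
# Rough density comparison using a linearised equation of state:
#   rho ~= rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
# Coefficients and water-mass properties are illustrative only.

RHO0, T0, S0 = 1027.0, 10.0, 35.0  # reference density (kg/m^3), temperature (C), salinity (psu)
ALPHA = 2.0e-4                     # thermal expansion coefficient (1/K)
BETA = 7.6e-4                      # haline contraction coefficient (1/psu)

def density(T, S):
    """Approximate density of a water parcel in kg/m^3."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

gulf_stream = density(T=18.0, S=36.0)     # warm and salty
labrador = density(T=4.0, S=34.5)         # cold, somewhat fresher
labrador_fresh = density(T=4.0, S=32.0)   # cold, strongly freshened by meltwater

print(f"Gulf Stream:        {gulf_stream:8.2f} kg/m^3")
print(f"Labrador:           {labrador:8.2f} kg/m^3")
print(f"Freshened Labrador: {labrador_fresh:8.2f} kg/m^3")
```

With these numbers the cold Labrador water is the denser parcel and sinks beneath the Gulf Stream, as described above; freshen it to around 32 psu, though, and it becomes the lighter one, which is exactly the reversal of the density contrast speculated about below.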
There are several causes of ocean currents, including the wind, but the ones I want to consider here are density currents. Where adjacent waters have different densities, the sea surface of the water with the lower density is higher than that of the other. Thus water flows downhill from low density to high. However, the Coriolis effect takes hold and the water is turned to the right, so that a current flows parallel to the density contours. In the case of the Gulf Stream, the strong current near the boundary with the cold, south-flowing extension of the Labrador Current is due mainly to this density component.

So what happens when the salinity of the Labrador Current decreases due to increased melt from Greenland’s icy mountains? I would expect the density component of the Gulf Stream to weaken as the density contrast weakens. But what else? Could the lowering in salinity of the Labrador Current be sufficient to reverse the density contrast? I suspect this may be happening north of the Grand Banks, where the temperature contrast is lower and hence makes a lower contribution to the density. It may be possible that the cold pool in the Atlantic is a sign that some Labrador water is now flowing over the warmer waters to the east.

NAD waving, but drowning?

If the change in salinity goes further and the density of the Labrador Current becomes less than that of the Gulf Stream, the flip could occur: the old NAD sinks below the Labrador Current and then turns south to join the THC (thermohaline circulation), while the Labrador Current sails eastwards on the surface, swapping places with the NAD. The map below is my idea of how the new situation [well, not exactly new, as it’s happened before] will look. I confess the route of the submarine NAD is pure guesswork, chosen mainly to make the map somewhat clearer.

Conclusions – at last!

I think my light-bulb moment of a couple of weeks ago explains how the NAD can suddenly cease and a wholesale change in surface currents can occur, but then I would, wouldn’t I? The big problem is that I don’t have the data to calculate whether the theory is tenable; I leave that to the readers – if any. The one obvious thank-you I owe is to the author of the book I read so long ago. I can’t remember his name or the name of the book. I think he was an oceanographer at Woods Hole and that the book – a light blue soft-back – was published in the early sixties. Those whose software I used to produce the maps may wish to remain anonymous and not be associated with the ham-fisted use I’ve made of their products. However, they are:

Maps generated from Marble: https://marble.kde.org/
My artwork [snort!] from Gimp: http://www.gimp.org/
Although pyramids have been built by civilizations in many parts of the world, including China and Mesoamerica, the pyramids built in Egypt are the most famous. The first Egyptian pyramid was the Pyramid of Djoser, which was designed for the burial of Djoser, a 3rd dynasty pharaoh who ruled around the middle of the 27th century BC. Initially the Egyptians built step pyramids; their first successful attempt to construct a “true”, or smooth-sided, pyramid was accomplished with the Red Pyramid at Dahshur. Egyptian pyramid building reached its pinnacle during the reign of Khufu with the construction of the Great Pyramid of Giza around 2560 BC. The last known Egyptian pyramid is the Pyramid of Ahmose, built in the first half of the 16th century BC. Due to the precision and magnificence of Egyptian pyramids, especially the three main pyramids at Giza, there are many theories regarding their purpose and construction methods, but none has been established with certainty. Here are 10 interesting facts about the Ancient Egyptian pyramids.

#1 Egyptian pyramids were built to serve as royal tombs of pharaohs

Despite the established view that pyramids were royal tombs, to date no mummy of a pharaoh has ever been found within a pyramid. This has led some to question the purpose of a pyramid. However, there is substantial evidence that suggests that pyramids were built for a mortuary function. It includes the presence of sarcophagi, or stone coffins, within pyramids; substantial pyramid texts which relate how a pharaoh will reanimate his body after death and ascend to the heavens; and the presence of mummy parts and remains of the dead. The absence of complete mummies is usually explained as a consequence of tomb robbery, as expensive artifacts like jewelry were often buried along with the mummy. That the pyramids were tombs is considered irrefutable, but some, including prominent Egyptologist Miroslav Verner, state that, “To suppose that the pyramid’s only function was as a royal tomb would be an oversimplification.”

#2 The shape of pyramids was most probably modeled on the Benben Stone

According to a creation myth of Ancient Egypt, a hill called Benben rose out from the waters, and Atum, the first god, coughed and spat out Shu, the god of the air, and Tefnut, the goddess of moisture, thus beginning the creation of the world. The Benben Stone, a conical stone, was venerated in the temple at the city of Heliopolis, the major religious center of Ancient Egypt during the Pyramid Age. A pyramidion is the uppermost piece, or capstone, of an Egyptian pyramid. The pyramidal-shaped Benben Stone was the basis for the design of pyramidia and probably even for the design of the monumental Egyptian pyramids. Many Egyptologists claim that the Benben Stone was symbolic of the sun, as Atum was later associated with the sun god Ra, while others consider it to be symbolic of the ‘star-soul’ of Osiris, the god of the afterlife.

#3 The first pyramid in Egypt was the Pyramid of Djoser

The Ancient Egyptians initially buried their dead in pit graves along with items believed to help them in the afterlife. The first tomb structure they built was the mastaba. First used in the Early Dynastic Period (c. 3150 BC – c. 2686 BC), a mastaba is a flat-roofed, rectangular structure with inward-sloping sides, constructed out of mud-bricks or stone. Egyptian architect Imhotep is usually credited with being the first to design a structure by stacking up mastabas on top of each other, resulting in a step pyramid.
The first Egyptian pyramid was the Pyramid of Djoser, which was built at Saqqara during the 27th century BC. It was designed by Imhotep for the burial of Pharaoh Djoser of the Third Dynasty (c. 2686 – c. 2613 BC) and consisted of six mastabas built atop one another.

#4 Sneferu’s reign saw major evolution in pyramid building leading to the first “true” pyramid

During the reign of Sneferu, the first pharaoh of the Fourth Dynasty (c. 2613 – 2494 BC), there were major innovations in the design and construction of pyramids; at least three pyramids were built. The first of these, the Meidum pyramid, represents a transitional form between step-sided and smooth-sided pyramids. It was initially conceived as a seven-stepped structure, but modifications were made later to add another platform. A second extension then turned the original step design into a smooth pyramid by filling in the steps with limestone encasing. Sneferu’s second pyramid, the Bent Pyramid, shows an even greater degree of architectural innovation and is closer to a smooth-sided pyramid. It rises from the ground at a 54-degree inclination, but the top section is built at a shallower angle of 43 degrees, lending the pyramid its bent appearance. The Red Pyramid, the third and largest of Sneferu’s three pyramids, was built at a constant angle of 43 degrees and was Egypt’s first successful attempt at constructing a “true” smooth-sided pyramid.

#5 Egyptian pyramid building reached its pinnacle with the Great Pyramid of Giza

Sneferu was succeeded by Khufu, considered to be his son by most Egyptologists. Egyptian pyramid building reached its pinnacle during the reigns of Khufu, his son Khafre and his grandson Menkaure. These three pharaohs of the Fourth Dynasty built the three famous pyramids at the Giza Plateau on the outskirts of Cairo. The Pyramid of Khufu, or the Great Pyramid of Giza, is the largest Egyptian pyramid, and the oldest and only one of the Seven Wonders of the Ancient World still in existence. The Pyramid of Khafre, though smaller than the Great Pyramid, was built at a higher elevation and was surrounded by a more elaborate complex, which contained the Great Sphinx of Giza. The Pyramid of Menkaure, around one-tenth the size of Khafre’s, is the smallest of the three pyramids.

#6 The Great Pyramid was the tallest structure in the world for more than 3800 years

The Great Pyramid of Giza was the tallest man-made structure in the world for more than 3,800 years, until the 160-meter-tall Lincoln Cathedral was completed around 1300 AD. The Great Pyramid initially stood at 146.5 meters, but due to erosion and the absence of its uppermost stone, its present height is 138.8 meters. Its mass is estimated at 5.9 million tonnes and its volume at roughly 2,500,000 cubic meters. Based on an estimated 20-year construction period, building it would have involved moving more than 12 of its nearly 2.3 million blocks into place each hour, day and night. The accuracy of the pyramid’s workmanship is such that the four sides of the base have an average error of only 58 millimeters in length. The ratio of the perimeter to the height equates to 2π to an accuracy of better than 0.05%, which has led to much debate among Egyptologists. Some authors, including Robert Bauval and Graham Hancock, believe that the three pyramids of Giza were built by the Egyptians to mirror the relative positions of the three stars in the constellation of Orion; mainstream Egyptologists reject this theory.
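The arithmetic behind the figures in #6 is easy to verify. In the sketch below, the block count and construction period are the ones quoted above; the 440-cubit base side and 280-cubit height are the commonly cited design dimensions in royal cubits, which are my addition rather than figures from this article:

```python
# A quick check of the numbers quoted in #6.
import math

blocks, years = 2_300_000, 20
hours = years * 365.25 * 24
print(f"blocks per hour: {blocks / hours:.1f}")          # about 13

# Commonly cited design dimensions: 440 cubits per side, 280 cubits high.
perimeter_cubits, height_cubits = 4 * 440, 280
ratio = perimeter_cubits / height_cubits                 # 6.28571...
error = abs(ratio - 2 * math.pi) / (2 * math.pi)
print(f"perimeter/height = {ratio:.5f}, 2*pi = {2 * math.pi:.5f}")
print(f"relative difference: {error:.4%}")               # about 0.04%
```

On these assumptions the script reproduces both the "more than 12 blocks an hour" claim (about 13 per hour) and the agreement with 2π to better than 0.05%.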
#7 The Old Kingdom Period of Ancient Egypt is known as the Age of the Pyramids

Pyramids continued to be built throughout the Fifth (2494 – 2345 BC) and Sixth (2345 – 2181 BC) Dynasties. However, the quality and scale of construction declined: the pyramids were smaller, less well built and often hastily constructed, owing to a decline in the ability and willingness to harness resources for very large projects. The last great pyramid builder was Pepi II Neferkare (2284 BC – c. 2247 BC), the last major pharaoh of the Sixth Dynasty. A few decades after his death, Egypt entered a turbulent phase known as the First Intermediate Period, in which pyramids were rarely constructed. Dynasties III, IV, V and VI are together known as the Old Kingdom, which is often described as the Age of the Pyramids.

#8 The last known Egyptian pyramid is the Pyramid of Ahmose

Kings of the Twelfth Dynasty (c. 2050 BC – 1800 BC) returned to pyramid building, but never on the same scale. They used cheaper building methods, and many of these pyramids were made of mud bricks instead of stone blocks. Hence these later pyramids were unable to withstand the test of time and look like large mounds today, though the interior structure is often still intact. After the Twelfth Dynasty, pyramids were rarely constructed, and the pharaohs ultimately resorted to hidden underground tombs in places like the Valley of the Kings. The last known Egyptian pyramid is the Pyramid of Ahmose. It was constructed for the founder of the Eighteenth Dynasty, Ahmose I, who reigned from around 1549 BC to 1524 BC. To date, there are 138 known pyramids in Egypt, and more may still be discovered.

#9 Construction workers of pyramids were well fed and given tax breaks

Egyptian pharaohs usually began building their pyramids as soon as they took the throne and appointed a high-ranking official to oversee construction. The belief that slaves were forced to build the pyramids against their will is incorrect. The peasants who worked on the pyramids were given tax breaks and provided with shelter, food and clothing at the site of construction. It is estimated that, on average, 4,000 pounds of meat were fed to the workers of the Great Pyramid of Giza each day. To move the stones, the Egyptians used large sledges that could be pushed or pulled by gangs of workers. The sand in front of the sledges was probably dampened with water to reduce friction, which can cut the workforce required to move a block by half. When the stones arrived at the pyramids, a system of ramps was probably used to haul them up. However, little evidence of these ramps survives, and it is not known how they were designed. Research into the construction techniques the Egyptians used to build their impressive pyramids is still ongoing.

#10 Egyptian pyramids were aligned to true north very precisely

Egyptian pyramids were built on the west bank of the Nile which, as the site of the setting sun, was associated with the realm of the dead in Egyptian mythology. The cores of the pyramids were often composed of local limestone, while the outer layer was made of polished, highly reflective white limestone, to give them a brilliant appearance when viewed from a distance. The capstone was usually made of granite or basalt and was often plated with gold, silver or electrum (an alloy of gold and silver), making it highly reflective in the bright sun.
The Egyptians had the ability to align structures to true north very precisely, something that may have helped in planning the pyramids. The Great Pyramid is aligned to true north within one-tenth of a degree. How the Egyptians achieved such accuracy is not known with certainty.
Carrots are among the healthiest and most versatile veggies: you can eat them raw, add them to salads, or cook, boil, or saute them to make incredible dishes. But can these versatile treats be enjoyed by your chickens as well? That’s what we are going to learn in this article.

So, can chickens eat carrots? Yes, they certainly can. Carrots are a storehouse of nutrients that can benefit your feathered pets in several ways. They can eat carrots both raw and cooked, although the cooked ones are better for them. Moreover, carrot tops are healthy for them as well. However, you must limit their intake unless you want to eat bitter eggs for breakfast.

Below, we will talk about all aspects of feeding carrots to chickens: health benefits and risks, moderation, prepping ideas, and more.

- Are carrots healthy for chickens?
- Carrots: Nutritional composition
- Carrots: Health benefits for Chickens
- Is there any downside to feeding carrots to chickens?
- How many carrots can chickens safely eat? And how often?
- Raw or cooked carrot: Which one is better for chickens?
- Preparing carrots for chickens
- Can baby chicks eat carrots?
- Can chickens eat carrot tops?
- Is it okay for chickens to eat canned carrots?

Are carrots healthy for chickens?

The first question you should ask yourself before adding anything new to your chickens’ diet is: will this benefit my pet’s health in any way? This should apply to their snacks and treats as well. Chickens cannot afford to eat empty calories; therefore, you must make their diet as diverse and nutrient-dense as possible. Do carrots meet these criteria? We’ll find out below.

Carrots: Nutritional composition

Let’s begin by learning about the nutritional value of carrots from the table given below (serving size: 100 grams):

| Nutrient | Amount |
| --- | --- |
| Vitamin A | 835 IU |
| Thiamine (Vitamin B1) | 0.065 mg |
| Riboflavin (Vitamin B2) | 0.057 mg |
| Niacin (Vitamin B3) | 0.984 mg |
| Choline (Vitamin B4) | 8.8 mg |
| Pantothenic acid (Vitamin B5) | 0.274 mg |
| Pyridoxine (Vitamin B6) | 0.137 mg |
| Vitamin C | 5.9 mg |
| Vitamin E | 0.67 mg |
| Potassium, K | 321 mg |
| Calcium, Ca | 33 mg |
| Magnesium, Mg | 12 mg |
| Manganese, Mn | 0.144 mg |
| Zinc, Zn | 0.25 mg |
| Iron, Fe | 0.3 mg |
| Dietary fibers | 2.8 g |

Carrots: Health benefits for Chickens

After looking at the nutritional content of carrots, it is time to analyze how these nutrients can enhance your pet’s health:

- Carrots have Vitamin A in abundance, which can improve chickens’ eyesight, reproductive health, and overall health. They also contain beta-carotene, which keeps their feathers bright and shiny.
- Carrots contain a moderate amount of Vitamin C, which is vital for your chickens’ immune health.
- Carrots are also rich in minerals like calcium, iron, and potassium; these keep their bones strong and healthy and regulate the fluid balance in their body.
- Carrots are full of healthy antioxidants that support your chickens’ immune system.
- Carrots are about 88% water, which is great for chickens, as they need plenty of water to stay hydrated throughout the day.

Is there any downside to feeding carrots to chickens?

After reading the last section, you must be convinced that carrots can make an ideal treat for your feathered pets. And you wouldn’t be wrong; these veggies are indeed healthier than most snack options for these birdies. However, no matter how healthy something is, eating too much of it can have an adverse effect on your health. The same is true for your chickens.
Carrots are great for them as occasional snacks, but if you start feeding these to them on a daily basis, you will soon start noticing several health issues.

There are three things wrong with feeding too many carrots to chickens. The first is the sugar content of these veggies: although carrots have a lower sugar content than most fruits, when consumed in excess they can still raise your pet’s blood sugar to dangerous levels, which is unhealthy for their cardiac as well as overall health. Secondly, carrots are also rich in fiber, and if your chickens consume more fiber than they need, it can impact their digestive system negatively. The last concern is that if your chickens eat too many carrots, they won’t want to eat their regular feed anymore. Birds as small as chickens do not have the luxury of eating whatever they want as we do; they have a limited appetite, and if all their nutritional requirements are not met, their health will degrade.

How many carrots can chickens safely eat? And how often?

So, now you know that you shouldn’t feed your pets too many carrots, or too frequently. But how would you know how many is “too many” or how often is “too frequently”? Following proper moderation for your pet’s diet is not an easy task; it gets particularly difficult for those who are new to the world of chickens and are still getting used to it. That’s what we are here for: to make your life easier.

Chickens, like most other birds and animals, thrive on a diet that is balanced and diverse. Therefore, we recommend feeding them about 30 grams of carrots (about half of an average-sized carrot) at a time, and repeating this no more than twice a week. As long as you stick to this moderation, your feathered pets will have no trouble eating carrots.

Raw or cooked carrot: Which one is better for chickens?

When you bring home carrots, how do you like to eat them? While some of us love to munch on these crunchy veggies raw, others prefer to cook them a little first. But what about your feathered pets? How do they like their carrots, raw or cooked?

Chickens can eat both cooked and uncooked carrots safely, and enjoy them both equally. However, from a health perspective, cooked carrots are better for them than raw ones. Now, you might’ve heard that most vegetables are most nutritious in their rawest form and should be consumed that way. But that’s not how carrots work: cooking them actually frees up more of their antioxidants and makes nutrients such as calcium easier to absorb, making them a healthier alternative for your little pets. Moreover, cooked carrots have a much softer texture, which makes them easier for the chickens to digest.

Preparing carrots for chickens

So far, we have covered the moderation you need to follow while feeding carrots to chickens and the fact that cooked carrots are better for their health than raw ones. In this section, we’re going to give you some pointers on prepping carrots for your chickens that might come in handy:

- Always try to buy organic carrots for your pets, for they’re grown in a safe environment and do not contain chemicals (insecticides or pesticides) that can be detrimental to your pet’s health.
- Washing the carrots is an essential step in prepping them for your birdies, and you must never forget or skip it, even in the case of organic carrots.
You just have to be more thorough when washing commercially grown carrots than organic ones.

- If you ever feel like feeding them raw carrot, never offer a whole one. Always chop carrots into bite-sized pieces; in fact, using a peeler or grater to shred the carrots for them is even safer.
- Whenever you’re feeding them cooked carrots, always remember to let them cool down properly, or they could burn the insides of their mouths.
- To add variety to their snack, you can also mix other veggies in with the carrots and serve them a salad of sorts.

Can baby chicks eat carrots?

Do you have little chicks on your farm and are wondering if carrots would be good for them? Yes, these veggies are both safe and healthy for chicks too. Just keep one thing in mind: always feed them cooked carrots, for raw ones can be difficult for them to chew and digest and can also pose a choking hazard. Also, you will have to follow strict moderation when feeding carrots to these little birdies as well.

Can chickens eat carrot tops?

Although we usually discard carrot tops, you’d be surprised to know that your pets can eat these tops safely. In fact, if you offer a carrot to your chicken along with its top, chances are they will start eating the top first. Many chicken owners who don’t want to waste carrot tops feed them to their pets. However, to feed these tops safely, you must make sure the carrots are organic; otherwise, they might have been sprayed with pesticides. Also, too many of these greens can affect the taste of the eggs, so do not feed them in excess.

Is it okay for chickens to eat canned carrots?

Absolutely not. It is never a good idea to feed your pets any canned fruit or vegetable, for these are made solely for human consumption and can thus contain additives and chemicals that might be dangerous to their health. Sodium is one such example: canned carrots’ sodium content is much higher than that of raw ones. And for those of you who didn’t know, too much sodium can dehydrate your pet to the point of collapse.

Conclusion: Can Chickens Eat Carrots?

Sweet, crunchy, and nutritious, carrots are a great addition not only to your own diet but also to your chickens’. Chickens can safely eat these veggies, and do so lovingly. However, you should monitor your chickens’ carrot intake, for if they eat too many of them too frequently, it could disturb their blood sugar levels, harm their cardiac health, and upset their digestive system.
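If you like to keep notes on your flock’s treats, the moderation rule above (about 30 grams per bird, at most twice a week) is easy to encode. The helper below is purely illustrative; the function and names are my own, and only the numbers come from this article:

```python
# Encodes this article's rule of thumb: ~30 g of carrot per chicken
# per serving, no more than two servings per week.

GRAMS_PER_SERVING = 30        # about half of an average-sized carrot
MAX_SERVINGS_PER_WEEK = 2

def weekly_carrot_allowance(flock_size: int) -> int:
    """Maximum grams of carrot for the whole flock in one week."""
    return flock_size * GRAMS_PER_SERVING * MAX_SERVINGS_PER_WEEK

if __name__ == "__main__":
    print(weekly_carrot_allowance(6), "g per week for a flock of 6")  # 360 g
```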