We live in an information-driven world, one where data is king. Unsurprisingly, we must analyze the pertinent data to make crucial business decisions. Regression is one of the more widely used data analysis techniques, and as the field of machine learning grows, one algorithm in particular has become popular: linear regression. In this article, you will learn about linear regression in R and how it works.

Why Linear Regression?

Before we try to understand what linear regression is, let’s quickly explore the need for it by means of an analogy. Imagine that we were required to predict the number of skiers at a resort based on the area’s snowfall. The easiest way would be to plot a simple graph with snowfall on the X-axis and skiers on the Y-axis. From the graph, we could infer that as the amount of snowfall increases, the number of skiers increases as well. The graph makes the relationship between skiers and snowfall easy to see: the number of skiers increases in direct proportion to the amount of snowfall. Based on what the graph tells us, we can make better decisions about the operations of a ski area.

To understand linear regression, we first need to understand the term “regression”. Regression is used to find relationships between a dependent variable (Y) and one or more independent (X) variables. The independent variables are known as predictors or explanatory variables, and the dependent variable is referred to as the response or target variable. A linear regression equation looks like this:

y = B0 + B1x1 + B2x2 + B3x3 + ...

where:
- B0 is the intercept (the value of y when all x values are 0)
- B1, B2, B3 are the slopes (coefficients)
- x1, x2, x3 are the independent variables

In our analogy, snowfall is the independent variable and the number of skiers is the dependent variable. So, given that regression finds relationships between dependent and independent variables, what exactly is linear regression?
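To make the equation concrete, here is a tiny R sketch for the skier analogy. The coefficient values are invented purely for illustration:

```r
# Hypothetical coefficients for the skier example (made up for illustration)
b0 <- 120  # intercept: expected skiers with zero snowfall
b1 <- 35   # slope: additional skiers per extra cm of snowfall

snowfall <- c(5, 10, 20)        # independent variable x1 (cm of fresh snow)
skiers <- b0 + b1 * snowfall    # y = B0 + B1*x1
skiers                          # 295 470 820
```

With more predictors, the same form extends to y = B0 + B1x1 + B2x2 + ..., which is exactly what multiple linear regression fits.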
What is Linear Regression?

Linear regression is a form of statistical analysis that models the relationship between two or more continuous variables. It creates a predictive model from relevant data to show trends. Analysts typically use the least squares method to fit the model; other methods exist, but least squares is the most commonly used. Below is a graph that depicts the relationship between the heights and weights of a sample of people. The red line is the regression line, showing that a person’s height is positively related to their weight. Now that we understand what linear regression is, let’s learn how it works and how the linear regression formula is used to derive the regression line.

How Does Linear Regression Work?

We can better understand how linear regression works with the example of a dataset containing two fields, Area and Rent, used to predict a house’s rent based on the area where it is located. The dataset is:

As you can see, we are using a simple dataset for our example. Using this uncomplicated data, let’s look at how linear regression works, step by step:

1. With the available data, we plot a graph with Area on the X-axis and Rent on the Y-axis. The graph will look like the following. Notice that the pattern is roughly linear, with a slight dip.
2. Next, we find the mean of Area and the mean of Rent.
3. We then plot the mean point on the graph.
4. We draw a line of best fit that passes through the mean.
5. But we encounter a problem: as you can see below, multiple lines can be drawn through the mean.
6. To overcome this, we keep adjusting the line until the best-fit line has the least squared distance from the data points.
7. The least-squares distance is found by adding up the squares of the residuals.
8. A residual is the distance between Y-actual and Y-pred, the actual and predicted values of Y.
9. The values of m and c for the best-fit line, y = mx + c, can be calculated with the standard least-squares formulas: m = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and c = ȳ − m·x̄, where x̄ and ȳ are the means of X and Y.
10. This helps us find the corresponding values for each data point.
11. With that, we obtain the values of m and c.
12. Now we can compute Y-pred for each value of X.
13. After calculating, we find that the least-squares value for the line below is 3.02.
14. Finally, we plot Y-pred, and this turns out to be the best-fit line.

This shows how the linear regression algorithm works. Now let’s move on to our use case.

Use Case: Revenue Prediction with Linear Regression

We will now look at a real-life scenario where we predict revenue from paid, organic, and social media traffic using a linear regression model in R. The sample dataset we will be working with is shown below. In this demo, we will work with the following three attributes to predict the revenue:

- Paid Traffic - traffic coming through advertisements
- Organic Traffic - non-paid traffic from search engines
- Social Traffic - traffic coming in from various social networking sites

Since there are several predictors, we will be making use of multiple linear regression. Before we begin, let’s have a look at the program’s flow:

- Read the inputs from CSV files
- Import the required libraries
- Split the dataset into training and testing sets
- Fit the regression on paid traffic, organic traffic, and social traffic
- Validate the model

So let’s start our step-by-step linear regression demo! Since we will perform the linear regression in RStudio, we open that first and type the following code:

# Import the dataset
sales <- read.csv('path/to/your/downloaded/file.csv') # Mention your download path here
head(sales)    # Displays the top 6 rows of the dataset
summary(sales) # Gives summary statistics about the data

The output will look like below:

dim(sales) # Displays the dimensions of the dataset

Now, we move on to plotting the variables:

plot(sales) # Plot the variables to see their trends

Let’s now see how the variables are correlated with each other.
For that, we’ll take only the numeric column values.

library(corrplot) # Library used to visualise the correlation between the variables
num_cols <- sales[, sapply(sales, is.numeric)] # Keep only the numeric columns
corrplot(cor(num_cols)) # Plot the correlation matrix

As you can see from the above correlation matrix, the variables have a high degree of correlation with each other and with the Revenue variable. Let’s now split the data into training and testing sets.

# Split the data into training and testing sets
library(caTools) # caTools provides the split function
split <- sample.split(sales$Revenue, SplitRatio = 0.7) # With a split ratio of 0.7, 70% of the sales data is used for training and 30% for testing the model
train <- subset(sales, split == TRUE)  # Creating the training set
test  <- subset(sales, split == FALSE) # Creating the testing set

Now that we have the train and test sets, let’s go ahead and create the model:

Model <- lm(Revenue ~ ., data = train) # Creates the model; lm fits a linear regression model, and Revenue is the target variable we want to predict

pred <- predict(Model, test) # The test data was kept aside for this purpose
pred # Displays the predicted values

res <- residuals(Model)   # Find the residuals
res <- as.data.frame(res) # Convert the residuals into a data frame
res                       # Prints the residuals

# Let’s now compare the predicted vs actual values
plot(test$Revenue, type = 'l', lty = 1, col = "red")

The output of the above command is shown below in a graph of the actual test revenue. Now let’s overlay the predicted revenue with the following command:

lines(pred, type = "l", col = "blue") # The output looks like below

Let’s also plot the prediction on its own with the following command:

plot(pred, type = "l", lty = 1, col = "blue") # This graph shows the predicted Revenue

From the above output, we can see that the graphs of the predicted revenue and actual revenue are very close.
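Before computing an error metric, it can also help to inspect the fitted coefficients. This short sketch assumes Model was fit on the train set as above:

```r
# Assuming Model <- lm(Revenue ~ ., data = train) has been run
summary(Model) # Coefficients, p-values, and R-squared for the fit
coef(Model)    # Just the intercept and slope estimates
```

A high R-squared and small p-values on the traffic predictors would support what the correlation matrix already suggested.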
Let’s check the accuracy so we can validate the model.

# Calculating the accuracy
rmse <- sqrt(mean((pred - test$Revenue)^2)) # Root Mean Square Error is the standard deviation of the residuals
rmse

The output looks like below. You can see that this model’s accuracy is sound. This brings us to the end of the demo.

Learn data structures in R, how to import and export data in R, cluster analysis, and forecasting with the Data Science with R Certification. Check out the course now.

Now you can see why linear regression is necessary, what a linear regression model is, and how the linear regression algorithm works. You also looked at a real-life scenario in which we used RStudio to predict revenue from our dataset. You learned about the various commands and packages, and saw how to plot a graph in RStudio. Although this is a good start, there is still much more to discover about linear regression.

Want to Learn More?

If this has piqued your interest in advancing your career in data science, check out Simplilearn’s Data Science Certification Course, co-developed with IBM. This comprehensive course will help you develop your expertise in data science using the R and Python programming languages. You will also learn about regression analysis in depth, including linear regression. Data scientists are some of the most sought-after IT professionals in the world today, so what are you waiting for?
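As a closing aside, the manual least-squares steps from the walkthrough earlier can be reproduced in a few lines of R and checked against lm(). The Area and Rent values below are made up for illustration:

```r
# Hypothetical Area (sq. ft.) and Rent values, for illustration only
area <- c(600, 750, 900, 1100, 1300)
rent <- c(12000, 14500, 17000, 21000, 24500)

# Standard least-squares formulas for the line y = m*x + c
m <- sum((area - mean(area)) * (rent - mean(rent))) / sum((area - mean(area))^2)
intercept <- mean(rent) - m * mean(area)

# R's built-in fit should recover the same slope and intercept
fit <- lm(rent ~ area)
c(manual_intercept = intercept, manual_slope = m)
coef(fit) # (Intercept) and area should match the manual values
```

This is the same computation lm() performs internally for simple regression, which is why the two sets of coefficients agree.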
A group of Canadian scientists is mobilizing public support to once again save Canada’s northernmost research laboratory from being mothballed for lack of federal funding. The Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, about 1,110 kilometres south of the North Pole, is about to run out of funding to continue its research into key atmospheric data, said James Drummond, professor of atmospheric physics at Dalhousie University in Nova Scotia. Located at a latitude of 80 degrees north, more than 4,000 kilometres north of Toronto, PEARL is able to do research that no other lab in Canada can, said Drummond. “Eureka is almost as far north in Canada as you can go,” Drummond said. “We are the most northerly civilian research lab within Canada.”

Key atmospheric and climate research

The lab’s research focuses on three main areas: ozone depletion, air quality and climate change. “We know that the climate in the Arctic is moving much faster than it is on the rest of the planet and we’re in a very good position that far north to be able to look at the effects of climate change on the long term,” Drummond said. “That does require long-term presence and taking data over many, many years, even decades to do the job properly.” For the last four and a half years, the operation of the lab has been paid for under a funding program started by the previous Conservative government, called the Canadian Climate and Atmospheric Research Program (CCAR), Drummond said. PEARL, which receives about $1 million a year, is one of seven science projects funded under CCAR, he said. In addition, PEARL receives in-kind support from the Department of Environment and Climate Change Canada and the Canadian Space Agency, Drummond said. “We don’t think the request to continue the station is in any sense controversial or unreasonable,” Drummond said.
“This station has been operating in this form since 2006 and in a different form since the 1990s.”

‘Highly regarded around the world’

The PEARL laboratory is very highly regarded around the world, Drummond said. “Everywhere I go people want to know how’s the work there going,” he said. “It’s one of the very, very few – you can pretty well count them on one hand – facilities that high up in the Arctic making measurements.” The lab is also one of the few facilities that operates during the polar night as well as the polar day, he said. The scientists working at PEARL – about 20 to 30 researchers from nine universities across Canada – have been talking to federal officials at the Natural Sciences and Engineering Research Council of Canada (NSERC) to convince them to continue funding the lab. A year ago, NSERC actually put in an application to the Treasury Board asking to renew funding for PEARL, but that request was denied, Drummond said. Evidence for Democracy (E4D), a non-partisan, not-for-profit organization promoting the transparent use of evidence in government decision-making in Canada, has launched an online petition to try to convince the government to come up with new funding. “The reality is you can either operate this laboratory or not,” Drummond said. “You can’t operate half of the laboratory, you can’t heat half of the building, you have to heat all the building.” The loss of that $1 million in funding means that scientists will have to mothball the facility sometime in the spring of 2018, Drummond said.

A political déjà vu?

This isn’t the first time a funding shortfall has threatened the future of PEARL. In 2012, cuts under the Conservative government of then Prime Minister Stephen Harper saw researchers come within 20 days of beginning a shutdown of the laboratory, until last-minute funding came through from the Climate Change and Atmospheric Research Initiative (CCAR), CBC News reported.
One of the more vocal proponents of saving PEARL at the time was Liberal MP Kirsty Duncan, who now holds the Science portfolio in the federal cabinet. “This is a government that has a war on science, a war on the environment,” Duncan declared in the House of Commons on Oct. 29, 2012, referring to the Conservative cuts. “The government has cut the Polar Environment Atmospheric Research Laboratory in the far North, which looks at ozone, at climate change. This year we have had the greatest melting, ever, of sea ice in the High Arctic. Last year, an ozone hole was discovered that was two million square kilometres. “Why would the government cut a research station at a time when major environmental changes are taking place?”

‘Cold War on science’

Matt Generoux, the Conservative Party’s shadow cabinet member responsible for the science portfolio, said he has been left wondering “where all the passion that Minister Duncan had while in opposition for maintaining PEARL has suddenly gone.” “In our 2011 budget, our government created the Climate Change and Atmospheric Research Initiative (CCAR),” Generoux said in a statement to Radio Canada International. “In 2012, the CCAR ensured funding for the PEARL station until 2017.” The Liberal government has had plenty of time to plan for the continued funding of PEARL if they were genuinely interested in making the laboratory a priority, Generoux said. Kennedy Stewart, the science critic for the New Democratic Party, said the issue of PEARL’s funding is rather emblematic of the Liberal approach to science funding in general. “I’d say we have moved from a ‘war on science’ under the Conservatives to a ‘Cold War on science’ under the Liberals,” Stewart said in a phone interview with Radio Canada International.
Government personnel records show that the federal government under the Trudeau Liberals employs five per cent fewer scientists than under the Harper Conservatives when there were about 40,000 scientists on Ottawa’s payroll, Stewart said. The proportion of the overall budget spent on science and technology is also lower under Trudeau than it was under Harper, Stewart said. “So we’re having a steady erosion of both personnel and funding,” Stewart said. “However, the only thing that is different is that Harper and his ministers were openly hostile to scientists, where Trudeau and his ministers are hugging us.” Stewart said the PEARL file will be a key test for the Liberal government. “Kirsty Duncan fought for PEARL all the way through the last government and now she’s in charge of the funding and she can’t find the small amount of money to keep it open,” Stewart said. Duncan was not available for an interview Monday but officials from her office dismissed opposition jabs. “Our government is doing more to combat climate change than any federal government in history,” officials said in a statement to Radio Canada International. The Liberal government of Prime Minister Justin Trudeau has signed the Paris Accord and worked with the provinces and territories on a Pan-Canadian framework to address climate change, including putting a price on carbon pollution, officials said. “Budget 2017 announced the creation of a new Canadian Centre for Climate Services to improve access to foundational climate science and we look forward to the opening of the Canadian High Arctic Research Station (CHARS) which will help ensure Canada remains a world leader in Arctic science,” the statement said. While the CCAR program has reached the end of its funding cycle, there is ongoing annual funding for the operation of the PEARL facility and the work of researchers continues to be funded into 2018, it said. 
Officials are also working with researchers to find other avenues of support, including through the approximately $50 million in climate change research that NSERC funds annually, the statement said. “The previous government used CCAR as a one-off approach to climate change research but Arctic research deserves more than that,” the statement said. “Our government knows we need a thoughtful, comprehensive approach to Arctic research, one that includes Indigenous voices and the role of traditional knowledge.”

CHARS is no substitute for PEARL

It is impossible to do the kind of research performed at PEARL in Eureka at CHARS in Cambridge Bay, some 1,300 kilometres south, Drummond said. “The distance between Cambridge Bay and Eureka is roughly the distance between Montreal and Atlanta, Georgia,” Drummond said. “Measurements made in Cambridge Bay are fine, we have no quarrel with CHARS making measurements in Cambridge Bay, we think it’s a great idea, but they don’t substitute for measurements made at 80 degrees North in the very High Arctic any more than you would accept measurements made in Atlanta, Georgia as the things that you need to figure out what’s going on in Montreal, Quebec.” With files from Nick Murray of CBC News

Related stories from around the North:

Canada: Arsenic contamination persists in Yellowknife lake a decade after gold mine shut: study, Radio Canada International
Finland: Finnish air pollution shortens life, Yle News
Greenland: Study finds increase in litter on Arctic seafloor, Blog by Mia Bennett
Russia: Pollution in Arctic Russian city of Nikel increases – Will new technology turn the tides?, The Independent Barents Observer
Sweden: Stockholm cleans up and passes air quality test, Radio Sweden
United States: NASA research flight around the world pauses in Anchorage, Alaska, Alaska Dispatch News
One of the primary objectives of Islām is to establish happiness in this world. The objectives of the Sharīʿah have shown that if this dīn were practised and applied, there would not only be more justice and fairness, but more specifically happiness—for people, animals and our external environment. So Allāh wants us to be happy, and He re-affirms this in many different places in the Qur’ān. “Allāh intends for you ease and does not intend for you hardship.” “We have not sent down to you the Qur’ān that you be distressed.” “Whoever does righteousness, whether male or female, while he is a believer – We will surely cause him to live a good life.” That is why the Arabic word ‘Sharīʿah’, the Islāmic legal framework dictated by the Qur’ān and Sunnah, originates from the word ‘stream’ with the Arabic word ‘Shari’ meaning ‘path’ also being derived from the same root. For it is our path that leads to salvation, the same way that a stream leads you to a river or a sea which purifies you (externally), the Sharīʿah purifies you, your heart, your Īmān and your surroundings (internally). Following the stream will allow you to reach ultimate happiness, and thus we conclude that the true objective of the Sharīʿah, despite what many claim for it to be today, is to preserve our intellect and wealth, our honour and dignity, our family and property, it is to preserve our life. Through preserving these five, happiness is gained for individuals and the society. With that being said, let us take a look at some of the ways a Muslim, governed by the principles laid out in the Sharīʿah, can attain happiness in the life of this world. 1. Have strong Īmān Happiness stems from within the heart, so in order to achieve happiness, the internal mechanism, namely the heart, must be sound. 
Malik Ibn Dinar (raḍiy Allāhu ʿanhu) said: “One of the great causes of sadness and despair in a person’s life is due to a hardened heart.” The heart is a muscle that can be described as a plant; the more you water it, the more it grows. But the opposite is also true; if you neglect it, it will die. Thus, the heart can take many different forms, and Imām Ibn al-Qayyim mentions three of these: “Just as the heart may be described in terms of being alive or dead, it may also be regarded as belonging to one of three types; these are the healthy heart, the dead heart, and the sick heart.” One of the most effective ways to achieve the sound heart (Qulūbun salīma) devoted to Allāh is by increasing our Īmān. “The Day when neither wealth nor children will be of any benefit, except for whoever brings to Allāh a sound heart.” The sick heart (Qulūbun marīḍa) comes from turning away from Allāh and following desires. “In the hearts is a sickness, so Allāh has increased their sickness.” The dead heart (Qulūbun mayyita) is due to being extremely far from Allāh, and thus to ensure this is never our state, we must always be in a state of obedience and distant from disobedience, as Īmān increases with the former, and decreases with the latter. The more good habits we have in our lives, the more our Īmān is increased, and the more Īmān increases the happier we will be. On the other hand, the more sins we commit, the weaker our Īmān becomes, and more blessings will be removed from our lives, families and wealth. This will then lead to unhappiness. According to a recent Harvard study, one of the most important factors affecting levels of happiness is feeling loved, so imagine the happiness you would feel from being loved by Allāh—the best feeling ever. 2. Perform Dhikr Allāh blessed human beings with an intellect and speech, and both are means by which we can perform dhikr (remembrance of Allāh). 
The grateful servant is someone who uses his blessings to come closer to Allāh and to please Him. The Prophet (sallAllāhu ʿalayhi wasallam) said: “The most beloved speech to Allāh consists of four, and there is no harm in whichever of them you begin with – subhāna Allāh, wa al-hamdu lillāh, wa lā ilāha illAllāh, wa Allāhu akbar.”

What are the benefits of remembering Allāh?

a) You will become successful: “And remember Allāh much that you may be successful.”

b) You will find peace in your heart: “Those who have faith and whose hearts find peace in the remembrance of God—truly it is in the remembrance of God that hearts find peace.”

c) You will love and be in awe of Allāh more, and reflect more on His great creation: “Indeed, in the creation of the heavens and the earth and the alternation of the night and the day are signs for those of understanding. Who remember Allāh while standing or sitting or [lying] on their sides and give thought to the creation of the heavens and the earth, [saying] “Our Lord, You did not create this aimlessly; exalted are You [above such a thing]; then protect us from the punishment of the Fire.”

It is imperative to note that dhikr should not be performed in a parrot-like fashion, repeating statements with your tongue whilst your mind and heart remain unengaged. Rather, the movement of our tongue should come from a state of reflection over Allāh and His creation, which can only result from a sense of awe in our heart.

3. Be grateful

ʿĀ’isha (raḍiy Allāhu ʿanha) said: “The Prophet (sallAllāhu ʿalayhi wasallam) would stand [in prayer] so long that the skin of his feet would crack. I asked him, ‘Why do you do this while your past and future sins have been forgiven?’ He said, ‘Should I not be a grateful slave of Allāh?’” If Allāh gives you more, you should show more gratitude. You receive more money? Give more to charity. Your health has improved? Start fasting regularly. You’ve received a higher position? Help more people.
When Allāh blesses you with something, use the blessings and then give thanks to Allah. “If you are grateful, I will surely increase you [in favour]…” 4. Be selfless Ibn ʿUmar reported that the Prophet (sallAllāhu ʿalayhi wasallam) said: “The most beloved people to Allāh are those who are most beneficial to the people. The most beloved deed to Allāh is to make a Muslim happy, or to remove one of his troubles, or to forgive his debt, or to feed his hunger. That I walk with a brother regarding a need is more beloved to me than that I seclude myself in this mosque in Madīnah for a month. Whoever swallows his anger, then Allāh will conceal his faults. Whoever suppresses his rage, even though he could fulfil his anger if he wished, then Allāh will secure his heart on the Day of Resurrection. Whoever walks with his brother regarding a need until he secures it for him, then Allāh the Exalted will make his footing firm across the bridge on the day when the footings are shaken.” The above statement can be summarised in one sentence: “The best of people are those that bring most benefit to the rest of mankind.” However, a by-product of assisting others out of your own good will, is that we often seek praise for our efforts, regardless of whether we explicitly state or subtly hint it. But Allāh says: “And they give food in spite of love for it to the needy, the orphan, and the captive, [saying] “We feed you only for the countenance of Allāh. We wish not from you reward or gratitude.” 5. Be positive The Prophet (sallAllāhu ʿalayhi wasallam) was always positive, even during the most difficult of times. Ibn ʿAbbās reported: The Prophet (sallAllāhu ʿalayhi wasallam) visited a bedouin who was sick. Whenever he visited an ailing person, he would say, “Lā ba’sa, tahūrun inshā’Allāh [No harm, (it will be a) purification (from sins), if Allāh wills].” How can we be positive? 
- Do not complain too much; - Do not be disappointed in what Allāh has given you; - Show more gratitude – count your blessings; - Do not focus on the problems – bring solutions; - Focus on virtues, not vice. 3-part series: In Pursuit of Optimism by Ustadh Ali Hammuda Recall that the Prophet Ibrāhīm (ʿalayhī al-Salām) instructed his son, Ismāʿīl, to divorce one wife and to cherish the other. What was the difference between them? Though provisions were low, food was scarce, and life was difficult, the first wife would complain of their situation, displaying her true character, albeit an unpleasant one. Whilst the second wife was content, positive and happy with what Allāh had bestowed upon them. Therefore, she was the honourable woman that Ibrāhīm instructed his son to keep and treat well. 6. Have a good balance between worship, family and work The scholars of Sīrah said that the Prophet’s life was divided into 3 parts: “And they were worshippers of Us…” It was narrated from Ibn ʿAbbās that the Prophet (sallAllāhu ʿalayhi wasallam) said: “The best of you is the one who is best to his wife, and I am the best of you to my wives.” “O mankind! Verily, I am sent to you all as the Messenger of Allāh.” Your ʿibādah, family and daʿwah should be balanced and none should come at the expense of the other. 7. Keep yourself productive Productivity occurs when you have a vision. Without a vision, you will be lost. As the Japanese proverb says, “Vision without action is a daydream. Action without vision is a nightmare.” Too much free time will eventually lead to boredom and this is when the Shaytān will capitalise and mislead you. As Imām Ibn al-Qayyim said, “Shaytān tries to destroy the son of Ādam in one of seven phases. Some of them are more intense than others. 
Shaytān will not try to destroy him in the next phase until he fails to destroy him in a previous one.” If he can’t get you to commit shirk, he will try and get you to commit a major sin; and if he can’t get you to commit a major sin, he will try and get you to sin; and if he can’t get you to sin, he will try and get you to waste time. Thus you are more likely to be affected if you are not occupying yourself with meaningful tasks, whether it be work, seeking knowledge, daʿwah, sport or even community work. 8. Have good companions Having a good support network is crucial for your mental and spiritual health. People who are positive and righteous will make you happy, and studies have shown that playing sports decreases stress, so get your football boots or your badminton rackets out and meet up with your friends from time to time. 9. Be patient You won’t know what ease is until you taste hardship, and you won’t know what happiness is until you feel sadness, so we will not always be happy on this Earth. Moments of hardship are inevitable, but times of ease will be quick to follow. As human beings with emotions, we are bound to become sad. Even in the life of the Prophet (sallAllāhu ʿalayhi wasallam), when he was mourning the death of his son and they asked him why, he replied: “The eyes shed tears and the heart becomes sad, but we do not say except what pleases our Lord, and with your departure O Ibrahim we are sad.” But whenever a person is tested, it is a reason to say “al-hamdu lillāh”, for when Allāh loves a person, He tests them in order to purify them of their sins, multiply their good deeds, and elevate their status in Paradise. Anas b.
Mālik reported that Allāh’s Messenger (sallAllāhu ʿalayhi wasallam) said: “One amongst the denizens of Hell who had led a life of ease and plenty amongst the people of the world would be dipped in the Fire only once on the Day of Resurrection and then it would be said to him, ‘O, son of Ādam, did you find any comfort, did you happen to get any material blessing?’ He would say, ‘By Allāh, no, my Lord.’ And then one of the people of the world will be brought who had led the most difficult life [in the world], who will be from amongst the people of Paradise, and he would be dipped once in Paradise, and it would be said to him, ‘O, son of Ādam, have you ever faced any hardship, or had any distress fallen to your lot?’ And he would say, ‘By Allāh, no, O my Lord, never did I face any hardship or experience any distress.’”

10. Remember paradise

Remember that this world is temporary; every pleasure in this world will expire, but in the afterlife pleasure will be everlasting. Constant remembrance of Paradise will remind you of how trivial this world is in comparison to the afterlife, and will help us with patience and with working towards entering Jannah. This motivation is crucial to remind us that true and permanent happiness will only be in Paradise, when we see Allāh’s face and truly live happily ever after. These are ten habits that we can begin to implement in our lives, on our path to attain happiness. I hope that we will begin this journey today.

Al-Qur’ān, 2:185
Al-Qur’ān, 20:2
Al-Qur’ān, 16:97
Al-Qur’ān, 26:88-89
Al-Qur’ān, 2:10
Al-Qur’ān, 8:45
Al-Qur’ān, 13:28
Al-Qur’ān, 3:190-191
Bukhari and Muslim
Al-Qur’ān, 14:7
Al-Mu’jam al-Awsat 6196 – sahih according to Al-Albani
Al-Qur’ān, 76:8-9
Al-Qur’ān, 21:73
Ibn Majah
Al-Qur’ān, 7:158
Madaarij as-Saalikeen
Al-Qur’ān, 94:6
Al-Qur’ān, 2:155
Rangeley Lakes Camps How fishing started a Tourist Mecca Many parts of New England were more populated a century or more ago, and much of the countryside was more cultivated and tamed than it is today. The Rangeley Lakes region, which over the past 150 years has hosted tourists and fishermen at more than 100 hotels and commercial camps, is one of these places. The resort hotels are gone today, with a few exceptions; the lawns and manicured grounds surrounding the lakes have almost entirely returned to forest. Many of the camps survive, some as group-owned associations, but most are in private ownership. A number of the hotel cottages still stand as well, also in private hands. The rustic style also remains, as does the communal nature of camp life, although generally on a smaller, familial scale. In 1796, James Rangeley, in company with three other men, bought 30,000 acres at the headwaters of the Androscoggin River from the Commonwealth of Massachusetts. But Rangeley and his company never occupied their lands, and the lakes remained in the possession of their ancestral occupants, the Abenaki Indians, for another two decades. The first European settlers arrived in 1816 or 1817, when Luther Hoar, accompanied by his wife and eight children, left their former home in Madrid, Maine, and arrived at the lake settlement on the eastern shore of what is now Rangeley Lake. In the next few years the Hoars were joined by several other families, including those of John Toothaker and David Quimby. These names can be found today not only attached to places, such as Toothaker Island, but also among the area’s present residents, the descendants of the original settlers. Rangeley never saw his land, but his son, James Rangeley, Jr., inherited his father’s share and later bought out the other three partners. In 1825 he moved his family onto the land with plans to build a community and economy based on agriculture, lumber, and mining. 
The younger Rangeley’s discovery of Hoar’s settlement on his land, already established as a modest community engaged in farming and logging, fit his ambitions nicely. Rather than eject the squatters, Rangeley adapted his plan to their presence. Five big lakes and a constellation of smaller lakes and ponds form the headwaters of the Androscoggin, which rise in a high plateau close up against the crest of the northern Appalachians. Two of the big lakes still bear their original Indian names, Mooselookmeguntic and Cupsuptic, but the three others, Oquossoc, Mollychunkamunk, and Welokennabacook, are now better known as Rangeley, Upper Richardson, and Lower Richardson. The big lakes had been rich hunting and fishing grounds for the Abenaki for generations, and the new settlers quickly discovered that fish, especially brook trout, thrived in the clear, cold lake waters. Despite its remoteness, by the 1840s a few sportfishermen were coming from Rhode Island, Connecticut, and New York to explore the lakes. By this time the “Lake Settlement” on Oquossoc Lake was known as the town of Rangeley and had grown to 39 families. Logging was also on the rise, with the Androscoggin River filled with logs destined for sawmills downstream. The river itself was beginning its long career as one of New England’s industrial arteries. One of the first stages in that evolution was the construction in 1850 of the Upper Dam at the western end of Mooselookmeguntic Lake. This dam was used to raise the lake water levels, linking it to Cupsuptic Lake and facilitating the movement of logs. As a side benefit, the dam created pools and spillways, which made especially good fish habitats and increased the sportfishing potential of the place even further. When Henry O. Stanley (Maine’s fisheries commissioner after 1883) and New Yorker George S. Page visited Rangeley in 1860 to investigate rumors of good fishing, they found large and abundant brook trout. 
Page returned to New York with eight trout, ranging between five and eight pounds, packed in sawdust and ice. Their adventure drew public attention to the Rangeley Lakes, but their report of the quality and size of the fish was met with some incredulity by sportsmen back in the city. Skeptics were satisfied only after Harvard naturalist Louis Agassiz inspected Page’s catch and confirmed that the fish were indeed brook trout and not lake trout or some other less desirable species. Fish—not just plentiful fish but big fish, with trophy potential—started to make Rangeley famous at roughly the same time that the recreational potential of the Adirondack Mountains was attracting notice. The quality of sportfishing in the Rangeley region compared favorably with that in the Adirondacks, and soon local residents began to supplement farming and logging with the more profitable, easier, and probably more fun occupation of guiding. From the 1870s on, Rangeley built its fortunes around recreational fishing. Guidebooks described the geography of the lakes and mountains, gave advice on securing guide services, and provided details on transportation by rail, road, and steamboat, as well as on accommodations at the rapidly increasing numbers of hotels and commercial camps catering principally to fishermen—and, as it turned out, fisherwomen. Fly fishing, rather than fishing with live bait or cast lures, was the fishing style of choice at Rangeley, and no one in the last decades of the 19th century devoted greater energy to the promotion of fly fishing, the Rangeley Lakes, and Maine in general as a sporting destination than Cornelia T. “Fly Rod” Crosby (1854–1946). 
Crosby’s accomplishments as a writer and personality were part of a larger effort conducted by a number of individuals, businesses, and state agencies to boost Maine’s economy through tourism at a time when the state’s agricultural strength was declining and its manufacturing industries were facing ever-increasing competition from southern New England mill towns. In 1895, long before “Vacationland” appeared on Maine motor vehicle license plates (that happened in 1936), the Maine Sportsman’s Fish and Game Association claimed that hunting, fishing and related support services constituted the state’s most important source of income. The association lobbied for all businesses with an interest in tourism (including, significantly, the Maine Central Railroad), and heavily influenced commercial development in Maine as a tourist destination. The association also pressed for legislation establishing fish and game regulations, typically to benefit the out-of-state visitors, or “sports,” who came for the hunting and fishing. The fishing camps are Rangeley’s unique architectural contribution. Originating in the simplest concepts of shelter—roof and hearth—these camps became something more than rustic cabins by virtue of their shared use, either by paying clients or by members of a private club who shared in a camp’s use and ownership. The first private fishing club in the Rangeley region, the Oquossoc Angling Association, was founded in 1868 by George S. Page and several fellow fishing enthusiasts. Page and his colleagues bought a substantial parcel of land on Cupsuptic Lake, where they built their headquarters and base of operations. Camp Kennebago was designed for shared use by the association’s members, and consisted of a large dormitory and adjacent kitchen and dining area, all contained in a rectangular space some 100 feet long by 30 feet wide and open to the ridge of a single gable roof. 
The interior space was minimally divided—the single dormitory contained at least 13 beds—and was entirely unfinished. Privacy was neither expected nor desired. An adjacent building provided separate rooms for married couples, and eventually private cabins were added for members with families. The need to provide services to clients efficiently required some compartmentalization of space by use, for example into service, kitchen, and dining areas, but these needs were met without completely isolating those spaces as they might be in a conventional hotel. Communal spaces shared by loosely connected acquaintances linked by a common interest in fishing created an atmosphere of informal collegiality and reinforced the rustic nature of their activity and setting. The services and accommodations were simple, comfortable without being indulgent. Buildings were deliberately functional, often built for use and occupancy by large numbers. Activities like dining, sleeping, and socializing often had separate structures. The origins of this style of compound architecture are unknown, but it became common across northern New England, particularly in lake settings. The Rangeley camps were distinguished from their contemporaneous Adirondack counterparts by their consistently utilitarian function. In the Adirondacks, the Adirondack Lodge style was quickly eclipsed by “Great Camps,” which took the basic functionality of the fishing camp and transformed it into a social and aesthetic display not unlike the “cottages” of Newport and Bar Harbor. By retaining their simple style and construction, the Rangeley camps continued to embody the most essential elements of shelter, physical and emotional communication with the environment, and communal experience of nature, ultimately becoming iconic representations of robust outdoor living. The style was adopted (first at Squam Lake in New Hampshire, in 1881) as the prototype for summer youth camp architecture.
Fishing wasn’t the only activity that made Rangeley such a popular destination in its boom years. The lakes also offered health resorts and vacation hotels oriented, unlike the camps, more toward leisure than sport. Between 1860 and World War II, more than 100 hotels, clubs, and commercial camps operated on the shores of the big lakes. The large establishments—hotels of 100 rooms and more—were beautiful buildings. But changes in the mobility of vacationers following World War II, combined with changing expectations of what summer resorts had to offer, led to the closure and eventual disappearance of the great resort hotels, not just at Rangeley, but across New England. Many of the individual cottages that surrounded the hotels, most built as annexes to the hotels themselves and integral to the hotel property, have survived and are now in private ownership. These surviving cottages share many of the qualities of summer houses on the Maine coast; the fishing camps less so. The camps are the product of a building up of primitive units of shelter and hearth, possibly through an intermediate tent phase, to create a domestic dwelling for simplified or occasional use, while the cottages are a scaling down of traditional or “conventional” domestic architecture for essentially the same purpose. Ultimately, though, the differences vanish and the distinction becomes academic. As Rangeley’s status as a commercial resort destination declined during and following World War II, its camps and cottages started to pass into private hands. The clustered, multiple-structure style of the fishing camps lent itself well to occupancy by extended families and multiple generations. Here, as in so many summer communities, the camp or cottage often became the annual meeting place for extended families scattered geographically and across generations. 
Frequently the family camp became the true “home,” the one fixed and unvarying place where grandparents, children, and grandchildren shared not only a common experience, but to a large degree a common identity as each passed through childhood, adolescence, and adulthood in a place far less variable and evolving than what was experienced in the “real” world of careers and modern life. This article was adapted from The Hand of the Small-Town Builder: Summer Houses in Northern New England, 1876-1930 by W. Tad Pfeffer (David R. Godine, 2014, 200 pages, hardcover). W. Tad Pfeffer is a geophysicist, teacher, and photographer at the University of Colorado at Boulder. He is a Fellow of the university’s Institute of Arctic and Alpine Research and Professor in the Department of Civil, Environmental, and Architectural Engineering.
Several days ago, I came across John Carmack’s post on learning programming. His advice is truly helpful for programming beginners and worth reading more than once. It reminded me of other great quotes from programmers and computer scientists that I keep in mind. Some of them have helped me understand computing more deeply, some are principles I try to apply in my daily work, and some are just funny 🙌.

Design and architecture

Abstraction is essential

Complexity is anything that makes software hard to understand or to modify.

All problems in computer science can be solved by another level of indirection. – David Wheeler

The power of these statements can be seen in the domains of software development, design patterns, architecture, and hardware design. The computing world is built from layers of abstraction: operating systems, networking models, distributed systems, and graphics libraries are all abstractions at different levels. As software engineers, reducing the complexity of our abstractions is a key task in development.

Keep design simple and changeable

Simplicity is prerequisite for reliability. – Edsger Dijkstra

Walking on water and developing software from a specification are easy if both are frozen. – Edward V Berard

Design is the art of arranging code to work today, and be changeable forever. – Sandi Metz

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. – C.A.R. Hoare

The Dutch computer pioneer Dijkstra had many profound insights on computing complexity and algorithms; he is possibly my favorite computer scientist. One of his most cherished habits was writing his articles with a fountain pen. Simplicity doesn’t mean doing less; rather, it’s a way to keep your software maintainable.
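Wheeler’s “another level of indirection” is easy to see in code. Below is a minimal, hypothetical sketch (the `KeyValueStore` interface and all names are my own illustration, not from any quoted author): callers depend on a small interface rather than a concrete implementation, so the implementation can be swapped without touching the callers.

```python
from typing import Optional, Protocol

class KeyValueStore(Protocol):
    """The level of indirection: callers program against this interface."""
    def get(self, key: str) -> Optional[str]: ...
    def put(self, key: str, value: str) -> None: ...

class InMemoryStore:
    """Today's concrete implementation; could be swapped for a disk- or
    network-backed store without changing any caller."""
    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

def remember_greeting(store: KeyValueStore, name: str) -> str:
    # This caller never learns which concrete store it is talking to.
    cached = store.get(name)
    if cached is None:
        cached = f"Hello, {name}!"
        store.put(name, cached)
    return cached
```

Swapping `InMemoryStore` for another implementation changes no caller code; that is both the power and the cost of indirection: every layer you add must pay for the complexity it hides.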
When you start writing code, you tend to make it very complex. As you become an experienced programmer, you will find that keeping things simple is the surest way to build complex systems. The ability to make complex things simple is what sets a great programmer apart from an average one.

Coding the right way

Make it work, make it right, make it fast. – Kent Beck

When in doubt, use brute force.

The sooner you start to code, the longer the program will take. – Roy Carlson

If you can’t write it down in English, you can’t code it. – Peter Halpern

Get your data structures correct first, and the rest of the program will write itself. – David Jones

Don’t write a new program if one already does more or less what you want. And if you must write a program, use existing code to do as much of the work as possible. – Richard Hill

We better hurry up and start coding, there are going to be a lot of bugs to fix. 😏

I’m always happy to follow these principles when programming; they have saved me a lot of time. Remember to do the right thing at the proper time: make sure you have a good design before you start writing the code, otherwise you will most likely have to roll back the finished work.

Optimize it or not

Before optimizing, use a profiler to locate the “hot spots” of the program. – Mike Morton

In non-I/O-bound programs, less than four per cent of a program generally accounts for more than half of its running time. – Don Knuth

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. – Don Knuth

Premature optimization means starting to optimize a program without first tracing its “hot spots”. Optimizing that way, you won’t fix the real performance issue, and you may introduce bugs.

Keep code readable

Programs must be written for people to read, and only incidentally for machines to execute. – Hal Abelson and Gerald Sussman, Structure and Interpretation of Computer Programs

It’s harder to read code than to write it. – Joel Spolsky

Any fool can write code that a computer can understand. Good programmers write code that humans can understand. – Martin Fowler

Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. – Rick Osborne

Don’t comment bad code. Rewrite it. – Brian Kernighan

Good code is its own best documentation. As you’re about to add a comment, ask yourself: ‘How can I improve the code so that this comment isn’t needed?’ – Steve McConnell (Code Complete)

If the code and the comments disagree, then both are probably wrong. – Norm Schryer

When explaining a command, or language feature, or hardware widget, first describe the problem it is designed to solve. – David Martin

Code is like humor. When you have to explain it, it’s bad. – Cory House

Do you have trouble reading code you wrote two years ago? A single piece of code will be read hundreds, maybe thousands of times by different programmers. Good programmers write code that is easy for humans to understand and don’t worry about whether the machine can run it (that is the compiler’s or interpreter’s job). Comments help make code readable, but too many comments do not. If the code is self-explanatory, there is no need for comments; even when you do need one, the comment should explain why you did something, not what you did.
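The why-not-what advice can be made concrete with a small sketch (the bit-trick function below is my own illustration, not from any of the quoted authors):

```python
# "Clever" version: the reader must decode both the name and the bit trick.
def f(x):
    return x != 0 and x & (x - 1) == 0

# Readable version: the name states WHAT the function does; the comment
# explains WHY the expression works, which the code cannot say by itself.
def is_power_of_two(n: int) -> bool:
    # A positive power of two has exactly one bit set, so clearing the
    # lowest set bit with n & (n - 1) must leave zero.
    return n > 0 and n & (n - 1) == 0
```

Both functions compute the same thing for positive integers, but only one of them can be read at a glance two years later.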
When writing code, it is better to be clear than to be clever. Being “clever” looks like condensing multiple lines of code into one, using tricky algorithms, or using some obscure feature of the programming language to accomplish a task in a novel way. Tricky code is hard to maintain.

Testing can show the presence of bugs, but not their absence. – Edsger W. Dijkstra

If debugging is the process of removing bugs, programming must be the process of putting them in. – Edsger Dijkstra

Testing leads to failure, and failure leads to understanding. – Burt Rutan

It takes 3 times the effort to find and fix bugs in system test than when done by the developer. It takes 10 times the effort to find and fix bugs in the field than when done in system test. Therefore insist on unit tests by the developer. – Larry Bernstein

There is no doubt about the importance of testing. I’m afraid to maintain a code base that doesn’t contain enough test cases. We should try to find as many bugs as possible during the development phase. Unit testing, integration testing, and fuzz testing are all good practices for improving code quality. In my experience, test code also documents the code under test, which helps others understand it. Embrace testing; it will save you a lot of time.

Of all my programming bugs, 80% are syntax errors. Of the remaining 20%, 80% are trivial logical errors. Of the remaining 4%, 80% are pointer errors. And the remaining 0.8% are hard. – Marc Donner

The first step in fixing a broken program is getting it to fail repeatably. – Tom Duff

Programming is like sex. One mistake and you have to support it for the rest of your life. – Michael Sinz

Debugging is a last-ditch effort to save the code, and debugging is more difficult than writing code: by the time we need to debug, the error has already escaped coding, reviewing, and testing. Usually, finding the root cause of a bug is much harder than fixing it.
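Bernstein’s point about developer unit tests, and Duff’s about repeatable failure, can be sketched with Python’s built-in unittest module. The `parse_version` function here is a hypothetical example of my own, not from any quoted source:

```python
import unittest

def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints.

    Raises ValueError for malformed input instead of failing silently.
    """
    parts = text.strip().split(".")
    if len(parts) != 3:
        raise ValueError(f"expected 3 components, got {len(parts)}: {text!r}")
    return tuple(int(p) for p in parts)

class TestParseVersion(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_version("1.24.3"), (1, 24, 3))

    def test_whitespace_is_tolerated(self):
        self.assertEqual(parse_version(" 0.0.1 "), (0, 0, 1))

    def test_malformed_input_raises(self):
        # A test like this makes a once-reported bug fail repeatably forever.
        with self.assertRaises(ValueError):
            parse_version("1.2")

# Run with: python -m unittest <this_module>
```

When a bug is found in the field, writing a failing test first gives you Duff’s “fail repeatably” before any fix is attempted; each passing test then pins one piece of behavior against future regressions.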
If you can reproduce a bug, you have almost finished 80% of the work. I’m a fan of printf debugging (a.k.a. caveman debugging).

The most effective debugging tool is still careful thought, coupled with judiciously placed print statements. – Brian Kernighan, “Unix for Beginners” (1979)

Debuggers don’t remove bugs. They only show them in slow motion.

Don’t repeat yourself

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. – Andy Hunt and Dave Thomas

I’m not a great programmer; I’m just a good programmer with great habits. – Kent Beck

Lazy programmers are good programmers: they avoid duplication and won’t do things repeatedly. If there is a lot of repetition in the code, we should probably spend time refactoring it. Most repetitive tasks are better done by machines, so let them be automated.

The most disastrous thing that you can ever learn is your first programming language. – Alan Kay

A language that doesn’t affect the way you think about programming is not worth knowing. – Alan J. Perlis

Programming languages, editors, and libraries are all tools for programmers. Pick the tools you will use frequently, know them well, polish them, and make them productive.

The only way to learn a new programming language is by writing programs in it. – Dennis Ritchie

The first principle is that you must not fool yourself, and you are the easiest person to fool. – Richard P. Feynman

Avoid “cookbook programming”, where you copy and paste bits of code that you have found to make something work. Make sure you fully understand what everything is actually doing, and that you are comfortable applying the techniques in other situations.

Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter. – Eric S. Raymond

Programming isn’t about what you know; it’s about what you can figure out.
– Chris Pine

Theory is when you know something, but it doesn’t work. Practice is when something works, but you don’t know why. Programmers combine theory and practice: nothing works and they don’t know why.

When I was learning programming, I was also anxious about mastering all the details: programming language syntax, IDEs, frameworks, and so on. We have a ton to learn, and that way of learning only frustrates beginners. Instead of starting with the details, learn the essentials and concepts, then apply them in practice. Problem-solving is the skill we end up using most. Finally, don’t lose your curiosity on your learning journey.

That’s all; I hope you enjoyed it. Please share your favorite programming quote with us.
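As a closing sketch, the DRY principle quoted earlier (“every piece of knowledge must have a single, unambiguous, authoritative representation”) might look like this in code. The tax-calculation example and every name in it are hypothetical illustrations of mine:

```python
# Repetitive: the tax policy (rate and rounding rule) is restated at every
# call site, so a policy change must be hunted down in several places.
def invoice_total_dup(net):
    return round(net + net * 0.19, 2)

def quote_total_dup(net):
    return round(net + net * 0.19, 2)

# DRY: the knowledge lives in exactly one, authoritative place.
TAX_RATE = 0.19  # single representation of the tax policy

def with_tax(net, rate=TAX_RATE):
    return round(net * (1 + rate), 2)

def invoice_total(net):
    return with_tax(net)

def quote_total(net):
    return with_tax(net)
```

When the rate changes, only `TAX_RATE` moves; each duplicated version would need a separate, and easily forgotten, edit.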
Texas Freedom: Last Stand at the Alamo, ISBN 9781621412250, Inc., paperback, 511 pages, by Thomas J. Last Stand at the Alamo is a saga that explores the Texas Revolution in the early 1830s, culminating with the Battle of the Alamo in March 1836 and the Battle of San Jacinto two months later. The Texas Freedom Network (TFN) is a Texas organization which describes its goals as protecting religious freedom, defending civil liberties, and strengthening public schools in the state. Mexican casualties are disputed but were approximately 600 killed and wounded. When the Battle of the Alamo ended at approximately 6. The Project Gutenberg eBook of The Battle of San Jacinto and the San Jacinto Campaign. Colonists from America came to the Mexican territory looking for a new life and cheap land, while others sought out adventure and fortune. Proposed marker to Anglo lawyers still riling Tejano scholars. While Travis and Bowie were killed in the fighting, Crockett's death is a subject of controversy. Founded in 1996 by Cecile Richards, the daughter of former governor Ann W. All of the following is based on accounts from the Daughters of the Alamo and Dr. Jul 15, 20: the Battle of the Alamo is the event where many men fought bravely against overwhelming odds and died for their freedom. The Alamo is an illustrated history of the fort that became a symbol of courage and sacrifice for freedom. Download it once and read it on your Kindle device, PC, phones or tablets. High-resolution versions of the maps used in this video can be found at cgsc. After the fall of the Alamo, the building was practically in ruins, but no attempt was made at that time to restore it. However, the men and women who filled these pages are more than footnotes in history. Mar 06, 2018: in the early morning hours of March 6, over 2,000 Mexican troops stormed the crumbling adobe mission where approximately 200 defenders awaited the attack, willing to give their lives for the cause of freedom and Texas independence.
Crazy horses account of the battle of the little bighorn. Buy crosshairs by thomas j berry online at alibris. Joe and ben were not the first african americans involved in the struggle for the alamo. From the heroic battle of the alamo, where brave frontiersmen fight for independence, to the rise of the texas rangers, where fearless volunteers defend their home against outlaws and enemies, the people of texas. By 1800, the missionaries were displaced and their land was seized for. The alamo, and other poems the portal to texas history. From the heroic battle of the alamo, where brave frontiersmen fight for independence, to the rise of the texas rangers, where fearless volunteers defend their home against outlaws and enemies, the people of texas are a force to be reckoned with. Free shipping and pickup in store on eligible orders. Alamo the handbook of texas online texas state historical. Thomas berry received a bachelor of arts degree in philosophy from st. In this video you will hear the classic song garyowen with the original words, while seeing paintings of custers 7th cavalry. Let me start by saying that i am a texan with family that were at the alamo. The battle of the alamo cost the texans the entire 180250man garrison. How houston won freedom for texas full online book. South plainfield resident thomas berry to have book. The republic of texas, on january 18, 1841, passed an act returning the chapel of the alamo to the catholic church. On the second day of the siege, february 24, 1836, travis called for reinforcements with this heroic message. The french legation was built in 1841, and still stands in austin as the oldest frame structure in the city. Mar 05, 2011 books about the alamo began appearing shortly after the smoke of the battle cleared a philadelphia lawyer named richard penn smith published col. Because of this distinction, i cant help but feel a sense of respect when my family and i visit the alamo. 
Rare book and texana collections and was provided to the portal to texas history by the unt libraries special collections. Phil collinss top five alamo reads true west magazine. Texas decided it wanted to be independent from mexico. Aug 23, 2016 she is also the coauthor of the upcoming book last soul standing, an historical narrative about joe, a slave owned by famed alamo commander william barret travis. Awarded the book of the year 2018 silver medal award from the coffee pot book club. So when i saw this title in goodreads, and found that it was available at my local library, i was elated. With the texas victory of mexican forces under gen. Friends of the south plainfield library host thomas berry. Susanna dickinson susanna dickinson was born in 1814 in bolivar, tennessee. They echo innate human devotion to the idea of fighting for freedom across the world. Berry who looked around desperately for support from his friends. Meriwether lewis, fresh off his famous expedition to explore the louisiana territory, is found dead at a roadside inn along the natchez trace, a dangerous and remote indian trail in tennessee. Freedom school operated in the early 1900s but was probably closed by the mid1900s. Plagued by political and financial scandal, lewis died under mysterious circumstances, leading many to believe he took his own life. The texas state library and archives commission is proud to present this rare opportunity for texans to view what is perhaps the most famous document in texas history. His pistol empty, the colonel flung the solid piece of hardware with terrific force at the. Last men out and millions of other books are available for amazon kindle. Voices of texas history william barret travis 1836 letter. This book, similar to all of the other fictional portrayals, follows closely the facts of the encounter. Flushed with their alamo victory, the mexican forces were following the colonists. 
An article about Crazy Horse's account as it appeared in the Bismarck Tribune, June 11, 1877. It has been viewed 18,010 times, with 296 in the last month. The Battle of San Jacinto and the San Jacinto Campaign, by L. List of Texian survivors of the Battle of the Alamo, Wikipedia. Today marks the 179th anniversary of the famous Battle of San Jacinto that ended the Texas Revolution in 1836. At the Alamo in San Antonio, then called Bejar, 150 Texas rebels led by William Barret Travis made their stand against Santa Anna's vastly superior Mexican army. Use features like bookmarks, note taking and highlighting while reading Texas Freedom. I have read numerous accounts, Mexican and Texian, nonfiction and fiction, of this historical confrontation. Mar 23, 2017: I was in the Alamo prior to, and at its fall, on the 6th March 1836, and knew a man there by the name of Henry Warnell, and recollect distinctly having seen him in the Alamo about three days. Nov 12, 2000: Groneman is the author of Death of a Legend. After Texas was annexed to the United States, the Alamo was declared property of the United States government. Nov 05, 2012: Phil Collins's Top Five Alamo Reads. Phil Collins: he's sold more than 200 million records, and he's a lieutenant of the Royal Victorian Order, but what we like most about Phil Collins is that he is a devotee of all things Alamo. One of their chief scouts was a free black man, Hendrick Arnold. The Last Stand at the Alamo: Reimagining the Alamo for Texans and Tourists, by. If the Texans lost at the Alamo, why is it always portrayed. Freedom is a small rural community located off State Highway 19, about six miles northeast of Emory in northeastern Rains County and near Lake Fork Reservoir. Iron and Bronze volume, on LibraryThing, a cataloging and social networking site for booklovers.
Offering a comprehensive view of the South's literary landscape, past and present, this volume of the New Encyclopedia of Southern. Alamo, a new book by Phillip Thomas Tucker, is likely to rile defenders of the Alamo anew. The myth and mystery surrounding the death of Davy Crockett. The remarkable story of the Irish during the Texas Revolution. Everyday low prices and free delivery on eligible orders. Join the adventure through history, romance, and family legacy as the Daughters of the Mayflower continues with The Alamo Bride by Kathleen Ybarbo, in 1836, as Texians are facing war with Mexico. I figuratively devoured the original 24 books in the Wagons West series from 1980-1989, followed by the 10 books in the Holts sequel, from 1989-1995. Last Stand at the Alamo by Berry, Thomas J. (author), Mar 31, 2012, paperback, by Berry, Thomas J., ISBN. It needs to complete the story of how Travis and his band of heroes were avenged. The Alamo was designated by UNESCO as a World Heritage Site in 2015. Trump administration approves funding for Texas women's health program that excludes abortion. Just send us an email and we'll put the best up on the site. Tom Clavin is the author or coauthor of sixteen books. Sam Houston, commander-in-chief of the Texas army, left Washington. Last Stand at the Alamo: order the complete book from. Joe, the Slave Who Became an Alamo Legend, Save Texas. Explore articles from the HistoryNet archives about the Battle of the Alamo. Rare Book and Texana Collections, and was provided by UNT Libraries Special Collections to the Portal to Texas History, a digital repository hosted by the UNT Libraries.
<urn:uuid:21a60e65-099f-4bf7-816e-aea1d32c72dd>
CC-MAIN-2021-43
https://prefworkvicen.web.app/531.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00391.warc.gz
en
0.950762
2,197
2.515625
3
The impetus to consider inflammation as potentially relevant to the pathoetiology of domain-based psychopathology (eg, anhedonia) and/or mental disorders is provided by a confluence of factors discussed here. Abnormalities in the inflammatory system are linked to many brain-based disorders, including but not limited to MDD and bipolar disorder (BD). Many individuals who receive conventional treatments for mood disorders do not achieve/sustain symptom remission. Patient reports also indicate that among those who experience “clinically relevant improvement,” significant deficiencies and dissatisfaction with treatment remain, particularly as it relates to experiences in positive mental health, vitality, resiliency, and premorbid levels of function. Support for research on inflammation in psychiatry comes from advances in the characterization of neurobiological alterations in individuals with brain-based disorders, from the ability to identify and characterize genomic variants in large data sets using genome-wide association studies, and from a fuller understanding of the role of microRNA. Finally, the Research Domain Criteria initiative of the NIH has brought the field's attention to common dimensions/domains of psychopathology regardless of DSM category. Evidence implicates inflammation as a fundamental neurobiological alteration; thus, targeting this system may provide beneficial effects on illness trajectory. 
Preliminary evidence indicates that some anti-inflammatory treatments are not only generally effective in mitigating dimensional psychopathology measures, but may have additional benefits on comorbidities that differentially affect individuals with mood disorders (eg, obesity).1

Neurobiology of inflammation

The inflammatory system can be classified into either the innate or the adaptive immune system. The innate system is primordial, with low memory capacity, present throughout much of the animal kingdom and comprising multiple cellular populations, including but not limited to monocytes. The adaptive immune system is responsible for humoral immunity (eg, antibodies) and comprises cellular populations (B and T cells) capable of recognizing billions of different antigens. The primary role of the inflammatory system dating back to early humanity was to fight off pathogens, which often came from wounding at the hands of predators and other humans. Inflammation was also key for wound healing. The agricultural revolution occurred approximately 10,000 to 12,000 years ago and brought humanity into close proximity with animal-borne pathogens. While this had the negative effect of increased mortality in the short term, in the longer term it may have also fortuitously augmented immune-inflammatory capability. Advances in food production, agriculture, hygiene, sanitation, urbanization, and vaccination have all contributed to the shift away from infectious disease and toward chronic non-communicable diseases as the principal source of morbidity and mortality. Notwithstanding the fact that predators and pathogens are no longer the principal instigators of the inflammatory system, the system is still wired to react to “threat” without regard to cause. 
For much of the past century, causes such as urbanization, dissolution of family structure, social alienation, air pollution, and exposure to refined high-glycemic, high-calorie foods, along with financial stressors, have become principal triggers of the inflammatory system.2 Consequently, “low-grade” inflammation is now implicated as cause, consequence, and comorbidity of many noncommunicable diseases, including mood disorders. Findings indicate that many individuals with mood disorders exhibit alterations in both innate and adaptive immune systems, in both the peripheral and central compartments. For example, comprehensive reviews and meta-analyses indicate that proinflammatory cytokines and acute phase proteins (eg, C-reactive protein (CRP), interleukin-1, interleukin-6, tumor necrosis factor-α) are abnormal in individuals with mood disorders.3 Rather than conceptualize mood disorders as associated with elevated proinflammatory markers, it is more accurate to state that mood disorders are associated with a proinflammatory balance (ie, a relative increase in proinflammatory markers associated with a relative decrease in select anti-inflammatory markers). The consequence is an abnormal “inflammatory biosignature.” But to what extent do disturbances in the inflammatory system precede, cause, associate with, and/or follow as consequences of mood disorders? Each of these possibilities could be of relevance in any patient. Results from longitudinal studies show that a proinflammatory balance is associated with an increased risk for incident mood disorders. For some individuals with mood disorders, perturbations in the inflammatory-homeostatic network are more pronounced during a state of depression compared with periods of remission. Mechanistically, this may be related to proinflammatory effects of sleep disruption and chaotic eating patterns, as well as dysphoric cognitive emotional processing. 
For other individuals, the magnitude of disturbance in the inflammatory system may be greater later in the illness trajectory, after multiple episodes, suggesting a cumulative and/or consequential effect. Additional confounding factors relevant to mood disorders are the effect of comorbidity on the inflammatory system and iatrogenic effects. For example, overweight/obesity, as well as diabetes mellitus, are associated with abnormalities in the inflammatory system, while some medications (and other treatment modalities) may exert either salutary or amplifying effects on the proinflammatory balance (eg, lithium and weight-promoting drugs, respectively). Advances in neuroscience have enabled the identification of disturbances in the inflammatory system across multimodal and multilevel units of analysis. Disturbances in the inflammatory system have been identified in genetic variance, peripheral and central cytokine alterations, brain nodal structure, and circuit alteration in response to inflammatory challenge. Moreover, findings indicate associations between inflammation and motivation, reward, cognitive emotional processing, and cognition.

Can anti-inflammatory treatments ameliorate depressive symptoms?

Conventional treatments for mood disorders can exert clinically mediated effects on the human inflammatory system. It is largely a consequence of the “streetlight” effect that relatively little attention has been given to this system, with most of the emphasis over the past several decades on the monoamine system. It is not without historical interest that the first Nobel Prize in medicine and physiology awarded to a psychiatrist was for the therapeutic effects of malaria fever for individuals institutionalized in insane asylums.4 Conventional pharmacotherapy (eg, SSRIs), as well as other classes of psychiatric medication (eg, lithium), exert effects across disparate levels of the inflammatory system. 
Perhaps the most compelling proof-of-concept that SSRI therapy indirectly engages inflammatory systems is replicated evidence that the prophylactic use of SSRIs reduces the hazard for incident depressive episodes in persons receiving interferon-α therapy for hepatitis C or cancer. As with conventional pharmacotherapies, small to moderate-sized trials, as well as systematic reviews and meta-analyses, give reason to believe that clinically significant benefits on dimensional measures of depression, anxiety, anhedonia, and cognitive function may be realized with these treatments. Rather than conceptualize anti-inflammatory agents as mechanistically identical, it would be more accurate to evaluate these agents separately according to their postulated mechanism of action, as not all agents can be expected to be helpful, and some may even engender psychopathology. For example, corticosteroid therapy prescribed to individuals with established or latent mood disorders unequivocally exacerbates risk and severity of mood disturbance in select cases. Moreover, available evidence indicates that NSAIDs may interfere with optimal antidepressant efficacy. It is conjectured that the deleterious effects of corticosteroids on cognitive emotional processing are in part due to “off-target” effects of these agents (ie, suppressing endogenous anti-inflammatory effects and amplifying the proinflammatory balance), while NSAIDs may alter critical molecular targets relevant to SSRI efficacy. It may be that the beneficial effects of anti-inflammatory interventions are elevated in discrete subpopulations with mood disorders, with perhaps less meaningful effects in other subpopulations. For example, infliximab, FDA-approved for several inflammatory-related conditions, significantly mitigates depressive symptom severity in individuals with elevated pre-treatment CRP levels, but not in depressed individuals with lower CRP levels. 
Furthermore, a proinflammatory balance observed in pretreatment may also identify a subgroup of individuals more likely to benefit from other approaches that engage inflammatory systems (eg, omega-3 fatty acids, ketamine, L-methylfolate, aerobic exercise).5-7 Nonpharmacological approaches (eg, electroconvulsive therapy, mindfulness-based therapy, aerobic exercise) are all “anti-inflammatory,” indicating that the inflammatory system is a convergent target across multiple treatment modalities.8,9 Other anti-inflammatory approaches that appear promising include the use of minocycline 60 to 200 mg daily, which has demonstrated beneficial effects on negative and cognitive symptoms of schizophrenia, as well as on depressive symptoms of bipolar disorder.10,11 The antidiabetic agent liraglutide, which also targets the inflammatory system, is FDA-approved not only for type 2 diabetes but also for weight loss.12 Preliminary proof-of-concept data indicate that liraglutide may improve depressive and cognitive symptoms in adults with bipolar disorder. A randomized, placebo-controlled, double-blind proof-of-concept study is evaluating infliximab for adults with bipolar disorder who have elevated pretreatment CRP levels; results are expected in July of 2018. Finally, renewed interest in the gut microbiome/microbiota provides convergent evidence indicating that for some individuals with mood disorders, disturbances in the inflammatory system may be partially mediated by gut dysbiosis. It is not known, however, whether dietary manipulation, as a single-modality intervention, is sufficient to correct gut dysbiosis and normalize a proinflammatory balance with associated improvement in psychopathology. For some individuals with mood disorders, disturbances in the inflammatory system are directly causative of select symptoms/domains of psychopathology (eg, fatigue, anhedonia, cognitive impairment). 
The pivotal role played by inflammatory systems suggests that engaging this target could modify illness trajectory in mood disorders. In the short term, what types of “anti-inflammatory” approaches should clinicians consider for their patients? Sleep hygiene, normalization of sleep behavior, and resetting chronobiology are all potently anti-inflammatory. Education and lifestyle modifications, including detailed information related to appropriate diet and exercise, are not only part of good lifestyle choices, but may in fact have direct effects on inflammatory systems relevant to psychopathology in select individuals. Psychosocial interventions (eg, mindfulness-based therapies) exert anti-inflammatory effects and are beneficial for individuals who have been affected by trauma, which in itself may be an antecedent to mood disorders as well as a proinflammatory trigger. In addition, the significant contribution of comorbidity (eg, obesity) to inflammation invites the need to specifically target these conditions when present, reiterating the emphasis on treating the entire patient. Currently, no anti-inflammatory agent can be considered ready for “prime time” or highly recommended for use in persons with mood disorders, either alone or adjunctive to other agents. Instead, select anti-inflammatory agents should be considered promising, with a need for more evidence to establish efficacy and short- and long-term safety, as well as to identify which populations are more or less likely to respond. It is not without interest, however, that preliminary data suggest that short-term exposure to minocycline, liraglutide, and omega-3 fatty acids, as well as ketamine and L-methylfolate, may be preferentially effective in patients with a proinflammatory balance. Rigorous studies are needed to evaluate dietary interventions that specifically target the gut enterotype as potentially anti-inflammatory and antidepressant/pro-cognitive. 
From a population health perspective, it would be propitious to evaluate the effect of “anti-inflammatory approaches” such as the removal of soft drink machines from public schools on brain health, as well as a fuller characterization of the proinflammatory effects of urbanization, social isolation, and climate change on incident depression. Climate change is particularly relevant in some parts of the world, with emerging evidence linking air pollution and suicidality. Dr. McIntyre reports that he is on the Speakers Bureau for AstraZeneca, Bristol-Myers Squibb, Janssen-Ortho, Eli Lilly, Lundbeck, Pfizer, Shire, Otsuka, Purdue, Takeda, and Allergan; he has received research support/grants from Stanley Medical Research Institute, National Alliance for Research on Schizophrenia and Depression (NARSAD), and National Institutes of Mental Health. Dr. Rong reports no conflicts of interest concerning the subject matter of this article. 1. Rosenblat JD, Kakar R, Berk M, et al. Anti-inflammatory agents in the treatment of bipolar depression: a systematic review and meta-analysis. Bipolar Disord. 2016;18:89-101. 2. Ragguett R-M, Cha DS, Subramaniapillai M, et al. Air pollution, aeroallergens and suicidality: a review of the effects of air pollution and aeroallergens on suicidal behavior and an exploration of possible mechanisms. Rev Environ Health. 2017;32:343-359. 3. Fernandes BS, Steiner J, Molendijk ML, et al. C-reactive protein concentrations across the mood spectrum in bipolar disorder: a systematic review and meta-analysis. Lancet Psychiatry. 2016;3:1147-1156. 4. Wikipedia. Julius Wagner-Jauregg. https://en.wikipedia.org/wiki/Julius_Wagner-Jauregg. Accessed February 22, 2018. 5. Papakostas GI, Shelton RC, Zajecka JM, et al. Effect of adjunctive L-methylfolate 15 mg among inadequate responders to SSRIs in depressed patients who were stratified by biomarker levels and genotype: results from a randomized clinical trial. J Clin Psychiatry. 2014;75:855-863. 6. 
Rapaport MH, Nierenberg AA, Schettler PJ, et al. Inflammation as a predictive biomarker for response to omega-3 fatty acids in major depressive disorder: a proof-of-concept study. Mol Psychiatry. 2016;21:71-79. 7. Machado-Vieira R, Gold PW, Luckenbaugh DA, et al. The role of adipokines in the rapid antidepressant effects of ketamine. Mol Psychiatry. 2017;22:127-133. 8. Schwieler L, Samuelsson M, Frye MA, et al. Electroconvulsive therapy suppresses the neurotoxic branch of the kynurenine pathway in treatment-resistant depressed patients. J Neuroinflammation. 2016;13:51. 9. Wetherell JL, Hershey T, Hickman S, et al. Mindfulness-based stress reduction for older adults with stress disorders and neurocognitive difficulties: a randomized controlled trial. J Clin Psychiatry. 2017;78:e734-743. 10. Rosenblat JD, McIntyre RS. Efficacy and tolerability of minocycline for depression: a systematic review and meta-analysis of clinical trials. J Affect Disord. 2017;227:219-225. 11. Soczynska JK, Kennedy SH, Alsuwaidan M, et al. A pilot, open-label, 8-week study evaluating the efficacy, safety and tolerability of adjunctive minocycline for the treatment of bipolar I/II depression. Bipolar Disord. 2017;19:198-213. 12. Mansur RB, Ahmed J, Cha DS, et al. Liraglutide promotes improvements in objective measures of cognitive dysfunction in individuals with mood disorders: a pilot, open-label study. J Affect Disord. 2017;207:114-120.
<urn:uuid:600a64be-92f4-4133-90dd-5eedfe0234d2>
CC-MAIN-2021-43
https://www.psychiatrictimes.com/view/where-theres-smoke-theres-fire
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00350.warc.gz
en
0.909733
3,377
2.765625
3
Sexual dimorphism in the walrus - male in the background, female in the foreground. Photo from www.marinebio.net.

Tusks in Odobenus rosmarus

Tusks in the modern walrus Odobenus rosmarus occur in both sexes, but are generally larger and longer in males – and like most other pinnipeds, walruses are polygynous (a single male mates with multiple females) and sexually dimorphic (males are larger than females). The walrus is restricted to the Arctic – and owing to this, tusks were usually assumed to have something to do with ice. For example, walruses tend to use their tusks to assist in hauling out onto ice, leading many to originally propose that tusks evolved for this purpose. Other workers erroneously identified tusks as being used for excavation of mollusks on the seafloor. However, observations by Francis Fay (1982) and Edward Miller (1975) indicate that a use in feeding or haul-out behavior is unlikely. Miller (1975) studied aggressive behavior in male walruses, and observed that tusks perform a central role in male interactions. Most interactions consist of tusk threat displays – the aggressor leans his head back so that the tusks are horizontal and pointing toward the target. If the target is somewhat submissive, the aggressor will perform a “stabbing” motion. In more aggressive interactions the aggressor strikes the target with the tusks using the same downward stabbing motion, typically striking the hindquarters, back, or neck. These strikes commonly draw blood, but Miller (1975) doubted that many cause serious injury (similar to elephant seal combat). Tusks were also frequently used to parry strikes close to the face. Predictably, walruses preferentially threatened smaller males; perhaps more adorably, juvenile males that even lacked tusks performed play fighting that was similarly ritualized. 
Strikes tended to follow visual threats, and Miller (1975) indicated that ritualized aggressive behavior like this is fundamentally similar to that seen in sea lions, which perform visual displays (prior to striking) by similarly leaning back and opening the mouth to show the canines. Interestingly, the pattern of scarring is completely opposite to the pattern of observed tusk strikes: scarring is mostly present on the anterior neck region, which Miller (1975) attributed to two factors: 1) his observations were made on land, and 2) during the summer. He hypothesized that during the breeding season, more intense “face to face” combat on ice (or more likely, in the water, as some rare anecdotes suggest) is the origin of anterior scarring. So, the relatively violent behavior that Miller (1975) described is not even that which is known to cause the most scarring on walruses – which seems to suggest that walrus breeding behavior might be a bit terrifying and may give the elephant seal a run for its money.

Walrus tusk display and combat. Threat displays frequently precede tusk strikes. Photo from www.flickr.com

Many earlier workers (see Fay, 1982: 134-135 and references therein) concluded that walruses dug prey items out of the seafloor with their tusks, based primarily on observations of tusk abrasion in dead animals. At least one early study suggested that walruses scraped the seafloor with their tusks in a posterior direction, but this was later revised to a side-to-side motion, as no abrasion exists on the posterior side of the tusks. Some early reports did cast doubt upon these hypotheses, as occasional individuals were identified as lacking tusks but of otherwise healthy appearance. Fay's (1982) classic study re-examined the abrasion patterns and concluded that the primary direction of sediment-tusk interaction was from proximal to distal (e.g. base of tusk to tip), which indicates that tusks are passively dragged through the sediment during benthic foraging. 
Fay (1982) also indicated that tusks are frequently used for locomotion – including hauling out onto sea ice, and even during aquatic sleeping with the tusks hooked over the edge of an ice floe (like a swimmer resting at the edge of a pool). However, he suggested that these were secondary functions and that far and away the most significant functions were all social in origin. He hypothesized that because all/most pinnipeds are polygynous, the capability for tusk development is probably universal among the group, but extreme canine enlargement is probably only possible once a pinniped lineage has made the shift from piscivory (fish eating) to suction feeding. Notably, most toothed whales with tusks (beaked whales, narwhal, Odobenocetops) are all either known or inferred to be suction feeders.

Abrasion of walrus tusks - figure from Fay (1982). Abrasion is focused on the anterior side of the tusk, indicating passive dragging of the tusks through sediment during foraging rather than active digging.

Temperate and Subtropical tusked walruses

Further eroding ice-related hypotheses for the evolution of tusks in walruses are discoveries of fossil walruses that inhabited drastically warmer waters than the extant Odobenus rosmarus. The earliest known temperate tusked walrus was Alachtherium, which for the past 130 years was known from Belgium, and in the late 1990s was also reported from the northwestern coast of Africa (Geraads 1997). Subsequently, additional discoveries indicated more occurrences of Alachtherium from Japan and the eastern USA as far south as Florida, and records of the toothless odobenine walrus Valenictus from southern California and even Mexico (Deméré, 1994). Fossils of Valenictus from San Diego and the Imperial Desert indicated to Deméré (1994) that walrus tusks evolved long before walruses became ice-bound in the Arctic, and that tusks are thus “structures with history”.

Life restoration of Odobenocetops by Smithsonian artist Mary Parrish. 
The walrus-faced whale Odobenocetops: implications for tusk use

The 1990 discovery of a bizarre fossil mammal, named in 1993 by Christian de Muizon as Odobenocetops, led to a reinterpretation of tusk function in walruses. Odobenocetops was collected from late Miocene strata of the Pisco Formation of Peru and was initially misidentified as a walrus; I've been told that an early SVP abstract with this mistake can be found. LACM Curator Emeritus takes credit for setting the record straight and asking those involved “why does the skull have premaxillary sac fossae?” These fossae, for the uninitiated, are unique to odontocetes (toothed whales), and Muizon (1993) named it as a new genus and species in a new family, Odobenocetopsidae, which he and others (Muizon et al., 2002) considered to be a sister clade to the Monodontidae – the family that includes the beluga and narwhal (and the fossil belugas, Bohaskaia and Denebola). I won't go into too much detail irrelevant to the tusks, but Odobenocetops only possesses two teeth: asymmetrical left and right tusks that are posteriorly directed and set into elongate, columnar alveolar processes; it also exhibits a deeply concave palate. These features and their similarity with the modern walrus indicate a similar mode of feeding. However, the occurrence of similar tusks in a completely different type of marine mammal that independently evolved benthic suction feeding for mollusks begs the question: did tusks really evolve for social purposes? Muizon et al. (2002) conclude that the orientation of the tusks is a bit too coincidental, and that the alveolar processes likely behaved as “sled runners” to stabilize and properly orient the head of Odobenocetops as it trawled the ocean floor for molluscan prey. 
They conceded that the asymmetry of the tusks (the left tusk is barely erupted while the right tusk is very long – up to 1.35 meters in Odobenocetops leptodon; Muizon and Domning, 2002) indicates that such a function was not optimized in Odobenocetops, and it likely reflects a social function like the tusk of the narwhal.

Seafloor foraging of a walrus. From this paper by Levermann et al.

Speaking of tusked cetaceans… what the heck is the narwhal tusk for?

This is a bit of a convenient topic to tack on here; I'd like to revisit it in more detail in the future since some interesting papers have come out in recent years on the topic. The narwhal (Monodon monoceros) is also sexually dimorphic and possesses a pair of tusks; generally only the left tusk erupts from the soft tissue. Rarely, males will possess an erupted right tusk. Although formerly considered an incisor, recent CT studies indicate that the tusk is embedded entirely within the maxilla and is therefore the canine tooth; a series of other vestigial postcanine teeth also form (Nweeia et al. 2012) but rarely erupt from the skull or soft tissues (and are therefore detectable only using CT imaging). Sexual tusk dimorphism is a bit more extreme than in the walrus: only 15% of female narwhals ever possess tusks that erupt from the soft tissue, and the tusks are always smaller and shorter than those of males. Significantly, narwhals do not appear to be polygynous. The narwhal tusk is conspicuously “spiraled” (presumably for structural rigidity) and exhibits dentine tubules exposed on the surface of the tooth – which suggests some ability to sense water temperature and salinity (Nweeia et al. 2009). In contrast, in mammals that masticate their food the dentine tubules do not extend to the outer margin of the tooth; indeed, toothaches may be caused by dentine tubules being exposed to the oral environment when a cavity forms. Additionally, a pulp cavity extends along the entire length of the tusk. 
Field experiments, which consisted of exposing a small section of tusk to a high-salinity solution, resulted in rapid head movements and breathing in several different individuals. These observations led Nweeia et al. (2009) to propose that the narwhal tusk fulfills a sensory function.

Male and female narwhals underwater. There are surprisingly few underwater photos of narwhals, although this is generally true of most arctic marine mammals and I for one don't blame photographers: it's damned cold! Photo by Paul Nicklen, National Geographic.

However, the above arguments follow for the narwhal: the tusks are indeed dimorphic, and if these functions are not important for females (85% of females lack erupted tusks, making sensory functions useless for nearly half of the species), they probably do not reflect the main purpose of the tusk. The extreme sexual dimorphism strongly indicates a social role, and another recent study (Kelley et al. 2014) has found a strong correlation between narwhal tusk size and testes mass – confirming the sexual/social importance of tusks. More observations of tusk use in the narwhal are needed, but males have been observed rubbing or slapping tusks together, and broken tips of tusks have been found embedded in the heads of other male narwhals (and the heads of belugas) – indirect evidence of narwhal combat. Similarly, underwater observations of walrus and narwhal behavior and combat are rare or lacking altogether.

Adorable bonus photo (by Paul Nicklen, National Geographic/Getty Images).

What about other walruses?

Thus far, almost all discussions of tusk evolution in walruses have either been confined to the modern species, or daresay even cetaceans like Odobenocetops. Obviously, the former is a necessary starting point, and the latter merits consideration – but what about extinct walruses? The only serious consideration of tusk evolution using fossil walruses was Deméré (1994), who (as outlined above) remarked upon tusks in walruses (e.g. 
Valenictus) from temperate and subtropical latitudes. An important question that hasn't really been asked before is: who had the first tusks? The answer is remarkably easy and quick: the dusignathine Gomphotaria pugnax, which is 2-3 million years older than the earliest known tusked odobenine fossils. Tusks in Gomphotaria are quite a bit different in morphology than those of modern Odobenus: the tusks are short and procumbent, lack globular dentine, and a smaller pair of lower tusks is present; similar double tusks are seen in Dusignathus (particularly D. seftoni). There is some variation even amongst the odobenines: Protodobenus has thickened maxillae and large canine roots, but the emergent canine crowns are barely proportionally larger than those in a sea lion; tusks are absent in Aivukus, and short, curved, and procumbent (forward inclined) tusks are present in Alachtherium/Ontocetus and Valenictus (although somewhat longer but no less procumbent). Morgan Churchill and I discussed a few of these points in our paper on Pelagiarctos (Boessenecker and Churchill, 2013). This pattern tells us several things: 1) “sled runner” tusk function would have only really been present in the modern walrus, as most earlier forms had somewhat procumbent tusks that would not have been aligned with the seafloor; 2) tusks do not really seem to be correlated with any subset of the marine environment, and the association with ice likely reflects a relatively recent (e.g. Pleistocene) adaptation of Odobenus to high-latitude environments; and 3) tusks evolved in several directions in the last 8 million years, which if anything signifies sexual selection and recalls horn and antler diversity amongst small clades of sexually dimorphic and selective ungulates. 
The moral of the story is this: there is a difference between what a structure evolved for and what its current function(s) is/are; when walrus tusks first evolved, there was no extensive pack ice and walruses inhabited temperate and subtropical latitudes. The walrus tusk continues to serve an important role in social behavior, but has been used for other purposes (locomotion, sleeping) and is thus an exaptation of sorts. This point can be extended to the narwhal: simply because the narwhal tusk can be sensitive to salinity and temperature does not mean that it evolved for that purpose. In both cases the evidence of sexual dental dimorphism is the most significant, and the evidence rather overwhelmingly supports a social or sexual origin of tusks in both Arctic species. Boessenecker, R.W., and Churchill, M. 2013. A reevaluation of the morphology, paleoecology, and phylogenetic relationships of the enigmatic walrus Pelagiarctos. PLoS One 8(1):e5411. Deméré, T.A. 1994. Two new species of fossil walruses (Pinnipedia: Odobenidae) from the upper Pliocene San Diego Formation. Proceedings of the San Diego Society of Natural History 29:77-98. Geraads, D. 1997. Carnivores du Pliocene terminal de Ahl al Oughlam (Casablanca, Maroc). Géobios 30(1):127-164. Fay, F.H. 1982. Ecology and biology of the Pacific walrus Odobenus rosmarus divergens Illiger. North American Fauna 74:1-279. Kelley, T.C., Stewart, R.E.A., Yurkowski, D.J., Ryan, A., and Ferguson, S.H. 2014. Mating ecology of beluga (Delphinapterus leucas) and narwhal (Monodon monoceros) as estimated by reproductive tract metrics. Marine Mammal Science (online early: DOI: 10.1111/mms.12165). Miller, E.H. 1975. Walrus ethology 1. The social role of tusks and applications of multidimensional scaling. Canadian Journal of Zoology 53:590-613. Muizon, C. de. 1993. Walrus-like feeding adaptation in a new cetacean from the Pliocene of Peru. Nature 365:745-748. Muizon, C. de., and Domning, D.P. 2002. 
The anatomy of Odobenocetops (Delphinoidea, Mammalia), the walrus-like dolphin from the Pliocene of Peru and its palaeobiological implications. Zoological Journal of the Linnean Society 134: 423-452. Muizon, C. de., Domning, D.P., and Ketten, D. 2002. Odobenocetops peruvianus, the walrus-convergent delphinoid (Mammalia: Cetacea) from the early Pliocene of Peru. Smithsonian Contributions to Paleobiology 93: 223-261. Nweeia, M.T., Eichmiller, F.C., Nutarak, C., Eidelman, N., Giuseppetti, A.A., Quinn, J., Mead, J.G., K’issuk, K., Hauschka, P.V., Tyler, E.M., Potter, C., Orr, J.R., Avike, R., Nielsen, P., and Angnatsiak, D. 2009. Considerations of anatomy, morphology, evolution, and function for the narwhal dentition. In Krupnik, I., Lang, M.A., and Miller, S.E. (editors), Smithsonian at the Poles: contributions to International Polar Year science. 223-240. Nweeia, M.T., Eichmiller, F.C., Hauschka, P.V., Tyler, E., Mead, J.G., Potter, C.W., Angnatsiak, D.P., Richard, P.R., Orr, J.R., and Black, S.R. 2012. Vestigial tooth anatomy and tusk nomenclature for Monodon monoceros. The Anatomical Record 295:1006-1016.
The Hidden Heroism of Witold Urbanowicz PILOT, PATRIOT, FAMILY MAN, NEIGHBOR “From her sorrowful eyes and scarred face Our Lady of Czestochowa encourages all to entrust themselves to her protection.” These are the cherished words of the hymn to the Black Madonna of Poland, heard in Polish churches for centuries. Her image is scarred by a number of violent attempts to destroy her. It is fitting that at the American shrine in her honor in Doylestown, Pennsylvania, there is a cemetery with rows of identical stone crosses marking the graves of those who gave their lives and youth for freedom, for America, and for Poland. There, under a magnificent statue of the Resurrection, and both the American and Polish flags, hundreds of war veterans lie together in silent witness to their sacrifice. Among the crosses are the graves of a husband and wife, graves that, in their simplicity, do not call the passing visitor to stop and take notice. One is marked Witold Urbanowicz — General — Pilot and the other simply Jadwiga Urbanowicz. Their story begins in 1908 in the country village of Olszanka, near the city of Augustow, Poland, where Witold was born to a modestly well-off family. Historically, Poland’s military heroes came from the cavalry. The Husaria, as the cavalry was known, had proved itself in battle from at least the sixteenth century. But for modern Poles, the heroes were found in the air. Thus, young Witold entered the Polish military academy in Deblin to train as a pilot. After Deblin, he received further training with the elite Kósciuszko Squadron in Warsaw, where he eventually rose to the rank of second-in-command. An unusually gifted pilot, Urbanowicz became a flight instructor at the Deblin Training Center, where he earned the nickname “Cobra.” By 1939 rumors of war abounded, and Urbanowicz was given the assignment of taking a group of air cadets to Romania for the purpose of flying new “Hurricane” fighter planes back to Poland. 
While on this assignment, war came to Poland in the early morning with a Blitzkrieg of nearly a million German troops and hundreds of planes and tanks smashing the country from the north, south, and west. By midday the German Panzer division was in the heart of Poland, and it seemed that the entire country had been set on fire. The treaty between Germany and the Soviet Union enabled Stalin to invade Poland from the east. If the Poles thought the Russians would assist them, they soon learned how mistaken they were. Urbanowicz returned to Poland on foot and was immediately arrested; some accounts say by the Red Army, others by the Nazis. Regardless, he managed to escape his captors and rejoin his unit in Romania. He was given false papers and money with instructions to take his cadets to Bucharest, where they would find help getting into France. Had this not taken place, Urbanowicz most likely would have been killed in the Katyn Forest massacre, as were many of his friends and colleagues. The pilots’ welcome in France was less than cordial. But soon a call came for Polish pilots to go to England and join the Royal Air Force (RAF). Thanks to their superior training, the Polish pilots distinguished themselves in the air over both France and England, and disdain soon transformed into respect. During the Battle of Britain in 1940, Lt. Urbanowicz, now the youngest squadron commander in the RAF, was credited with fifteen confirmed kills and one probable, giving him the title of top Polish ace and placing him in the top ten of all Allied aces of the battle. It is widely claimed that without the aid of the Polish pilots England would have succumbed to a Nazi invasion. When Lt. Urbanowicz learned that his mother was seriously ill, he surreptitiously returned to Poland, only to learn that his brother had been killed in the assault on Monte Cassino. Once again he was arrested, this time by the Communist Security Service, but he successfully managed to escape. 
In 1941, after Urbanowicz’s many courageous sorties, Gen. Wladyslaw Sikorski, prime minister of the Polish Government in Exile and commander-in-chief of the Polish Armed Forces, appointed him Second Air Attaché at the Polish embassy in the U.S. Urbanowicz’s exploits attracted the attention of Lt. Gen. Claire Chennault, and in 1943 Urbanowicz was asked to join the U.S. Army Air Force in China, flying with the 75th Fighter Squadron of the unit known as the “Flying Tigers.” He took part in a number of combat missions over China and fought against Japanese fighters with the skill of an expert pilot. After he finished his Chinese tour he was again assigned to diplomatic service in the U.S. Elevated to the rank of colonel, Urbanowicz was awarded six medals of valor from the RAF, including the Distinguished Flying Cross, and three from the U.S.: the Air Medal, the Distinguished Flying Cross, and the Chinese Army, Navy and Air Corps Medal. To the great sorrow of the Polish nation and most certainly the Polish airmen, at the close of the war the Yalta Conference decided the “Polish question.” Roosevelt and Churchill, worried about antagonizing Stalin, refused to listen to the Poles’ heartfelt cry, summed up in the words of Col. Urbanowicz: “We want Poland back.” Instead, Poland was handed over to Stalin and suffered over forty years of communist oppression. At the end of the war, despite their amazing courage and accomplishments, the Kósciuszko Squadron was deliberately barred from the British victory parade. Its pilots were relegated to standing on the sidewalk and watching. In order to appease Stalin and the communist government, the soldiers’ cause — and Poland’s — was betrayed. It was one of the most scandalous betrayals in history. On a lecture tour as a diplomat, Urbanowicz met his future wife, Jadwiga, the daughter of a shipping executive from Kraków. Jadwiga had managed to escape to America with her mother. 
Once married, Witold and Jadwiga settled in New York, where they had a son. How this military hero and his family came to live in my neighborhood I do not know. It is an unremarkable area in one of the outer boroughs of New York City, Queens, nestled between cemeteries and parklands. At the time the Urbanowiczes arrived, there was no Polish community to speak of. Their son, Witold, was named after his father, but the neighborhood boys called him “Vito.” To the neighbors they were simply a not-too-young couple who went to work every day while their son attended the local parochial school. On the rare occasions when they went out, they did not want to leave Vito alone, and so I became Vito’s “baby sitter” of sorts, even though there was not much difference in our ages. I remember one New Year’s Eve. Mr. and Mrs. Urbanowicz, as I called them, were going out to dinner dressed formally in tuxedo and evening gown. Perhaps they had been invited to the Polish Embassy or perhaps Mr. Urbanowicz was being given a special honor. I never asked and they never said. After they left, Vito, who was sitting on the living-room floor, looked around the apartment and said, rather ruefully, “This is a very strange house.” Then he went to bed. Vito didn’t really mean “strange,” just “different.” I think Vito would have liked his father to have been an ordinary man who took his son to ball games and perhaps headed a Boy Scout troop. But in this house, in this family, things were different. In the corner of the living room of their home, a marble bust of Mrs. Urbanowicz stood on a pedestal. There was no television. The room was filled with floor-to-ceiling bookshelves stuffed with hundreds of books. I couldn’t read any of the titles because they were all in Polish. I did find one that had a picture of Mr. Urbanowicz in an RAF uniform covered with military medals. I knew then that this was not your average neighborhood family. 
Time went on, and Vito no longer needed a “baby sitter.” Years later I ran into Mrs. Urbanowicz on the street near my house, looking rather sad and forlorn. I think her husband had just died, though she didn’t say. She had recently retired from her position as a librarian with the New York Public Library. Vito did not follow in his father’s footsteps by joining the military. Instead, she told me, Vito had gone on to New York University and had started a rhythm and blues band — a far cry from Chopin and Paderewski. It was not easy being the son of a famous and courageous war-hero father. Eventually, I found Mr. Urbanowicz’s obituary in The New York Times, and was moved to try to unravel the mystery of this humble, quiet couple, whose story had puzzled me for so long. There are two parts to their story. One is the story of the brave Polish airman and his wartime exploits; the other is how the Urbanowiczes lived in such an unpretentious way. They were so completely modest that no one ever suspected that they had lived such remarkable lives. Besides his work in the airline industry, Col. Urbanowicz began writing — an activity he greatly enjoyed. He wrote several books and articles, some of which were published in the American Fighter Aces Bulletin. His books — Fire Over China, The Beginning of Tomorrow, Fighters, Dawn of Victory, and Flying Tigers — had to be published in Poland for want of an American publisher, and as a result they were heavily censored. Yet we neighbors had no inkling as to who the Urbanowiczes really were. Perhaps Miranda’s words in Shakespeare’s The Tempest say it best: “O wonder! How many goodly creatures are there here! How beauteous mankind is! O brave new world that has such people in’t!” Before he died, Witold Urbanowicz was able to return to his beloved Poland once it was free. He was finally given the victory parade he was banned from in London so many years before. 
With the stirring strains of the Polish National anthem, “Poland Has Not Yet Perished as Long as We Live,” and amid the red-and-white Polish flags, Lech Walesa raised Urbanowicz to the rank of general, an honor long in coming but never so well deserved. In the 1980s, when Walesa was leading the outlawed Solidarity movement, he sent a surreptitious message to the people of Poland: On the lapel of his suit jacket he wore a medal of Our Lady of Czestochowa — the Black Madonna of Jasna Góra — Queen of Poland. Silently, in all the hearts of the Polish people, the final refrain from her hymn most assuredly resounded: “O how good it is to be Your child, to be hidden in Your arms.”
Recycling by product

Products made from a variety of materials can be recycled using a number of processes.

Building and construction waste

Aggregates and concrete

Concrete aggregate collected from demolition sites is put through a crushing machine, often along with asphalt, bricks, dirt, and rocks. Smaller pieces of concrete are used as gravel for new construction projects. Crushed recycled concrete can also be used as the dry aggregate for new concrete if it is free of contaminants. Builder's rubble (such as broken-down bricks) is also used for railway ballast and gravel paths. This reduces the need for other rocks to be dug up, which in turn saves trees and habitats.

Asphalt and tarmac

Asphalt, including asphalt shingle, can be melted down and partially recycled. Tarmac can also be recycled, and there is now an active market for recycling tarmac in the developed world. This includes tarmac scalpings produced when roads are scarified before a new surface is laid.

Gypsum, plaster and plasterboard products

Up to 17% of gypsum products are wasted during the manufacturing and installation processes, yet wallboard (Australia and others), Gyp (New Zealand), drywall (USA) or plasterboard (UK and Ireland) is frequently not re-used, and disposal can become a problem. Some landfill sites have banned dumping of gypsum because of the tendency to produce large volumes of hydrogen sulfide gas. Some manufacturers take back waste wallboard from construction sites and recycle it into new wallboard. Gypsum waste from new construction, demolition and refurbishment activities can be turned into recycled gypsum through mechanical processes, and the recycled gypsum obtained can replace virgin gypsum in the gypsum industry. Some of the reasons for recycling this waste are:
- Gypsum is one of the few construction materials for which closed-loop recycling is possible.
- Closed-loop gypsum recycling saves virgin gypsum resources. 
- According to the European Directive 2008/98/EC on Waste, recycling should be preferred to recovery and landfill disposal.
- This Directive also establishes that the preparing for re-use, recycling and other material recovery of non-hazardous Construction and Demolition (C&D) waste (excluding soil and stones other than those containing dangerous substances) have to be increased to a minimum of 70% by weight by 2020.
- The disposal of gypsum-based materials can become a problem if they are accepted at normal cells in non-hazardous landfills, as the sulphate content of gypsum mixed with organic waste can break down under certain conditions into hydrogen sulfide gas.

Intact bricks recovered from demolition can be cleaned and re-used.

The large variation in size and type of batteries makes their recycling extremely difficult: they must first be sorted into similar kinds, and each kind requires an individual recycling process. Additionally, older batteries contain mercury and cadmium, harmful materials that must be handled with care. Because of their potential environmental damage, proper disposal of used batteries is required by law in many areas. Unfortunately, this mandate has been difficult to enforce. Lead-acid batteries, like those used in automobiles, are relatively easy to recycle, and many regions have legislation requiring vendors to accept used products. In the United States, the recycling rate is 90%, with new batteries containing up to 80% recycled material. Japan, Kuwait, the USA, Canada, France, the Netherlands, Germany, Austria, Belgium, Sweden, the UK and Ireland all actively encourage battery recycling programs. In 2006, the EU passed the Battery Directive, one of the aims of which is a higher rate of battery recycling. The directive requires that at least 25% of all the EU's used batteries be collected by 2012, rising to no less than 45% by 2016, and that at least 50% of the batteries collected be recycled. 
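Taken together, the collection and recycling targets above imply a floor on the share of all the EU's used batteries that must ultimately be recycled. A minimal sketch of that arithmetic, assuming for illustration that both targets are measured against the same mass of used batteries:

```python
# Figures quoted in the text: collection of 25% (by 2012) rising to 45%
# (by 2016), with at least 50% of collected batteries recycled.
# Assumption (ours, not the directive's wording): both percentages apply
# to the same total mass of used batteries.

def min_recycled_fraction(collection_target: float,
                          recycling_target: float = 0.50) -> float:
    """Lower bound on the fraction of all used batteries that ends up recycled."""
    return collection_target * recycling_target

print(min_recycled_fraction(0.25))  # 2012 target: 0.125, i.e. 12.5% of all used batteries
print(min_recycled_fraction(0.45))  # 2016 target: 0.225, i.e. 22.5%
```

So even the stricter 2016 figures only guarantee that roughly a quarter of the EU's used batteries, by this rough reading, are actually recycled; the rest depends on collection rates exceeding the legal minimum.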
Kitchen, garden, and other green waste can be recycled into useful material by composting into leaf mold and regular compost. This process allows natural aerobic bacteria to break down the waste into fertile topsoil. Much composting is done on a household scale, but municipal green-waste collection programs also exist. These programs can supplement their funding by selling the topsoil produced.

Electronics disassembly and reclamation

Electronics recycling is the recycling or reuse of computers and other electronic devices. It includes both finding another use for materials (such as donation to charity), and having systems dismantled in a manner that allows for the safe extraction of the constituent materials for reuse in other products. The direct disposal of electrical equipment, such as old computers and mobile phones, is banned in many areas, such as the UK, parts of the USA, Japan, Ireland, Germany and the Netherlands, due to the toxic contents of certain components. The recycling process works by mechanically separating the metals, plastics, and circuit boards contained in the appliance. When this is done on a large scale at an electronic waste recycling plant, component recovery can be achieved cost-effectively. With the high lead content in CRTs, and the rapid diffusion of new flat-panel display technologies, some of which (LCDs) use lamps containing mercury, there is growing concern about electronic waste from discarded televisions. Related occupational health concerns exist as well for disassemblers and scrap dealers removing copper wiring and other materials from CRTs. Further environmental concerns related to television design and use relate to the devices' increasing electrical energy requirements. 
Computers that are termed trashware in North America or totally reconditioned hardware in the UK and Ireland are computer equipment that has been assembled from old hardware, using cleaned and checked parts from different computers, for use by disadvantaged people to bridge the digital divide. Trashware is different from retrocomputing, which has only cultural and recreational purposes.

Ink jet printer cartridges

Because printer cartridges from the original manufacturer are often expensive, demand exists for cheaper third-party options. These include ink sold in bulk, cartridge refill kits, machines in stores that automatically refill cartridges, re-manufactured cartridges, and cartridges made by a corporate entity other than the original manufacturer. Consumers can refill ink cartridges themselves with a kit, or they can take the cartridge to a refiller or re-manufacturer where ink is pumped back into the cartridge. PC World reports that refilled cartridges have higher failure rates, print fewer pages than new cartridges, and demonstrate more on-page problems like streaking, curling, and colour bleed.

A wide range of metals in commercial and domestic use have well-developed recycling markets in most developed countries. Domestic recycling is commonly available for iron and steel, aluminium, and in particular beverage and food cans. In addition, building metals such as copper, zinc and lead are readily recyclable through specialised companies. In the UK, these are usually either specialised scrap dealers or car breakers. Other metals present in smaller quantities in the domestic waste stream, such as tin and chromium, are also extracted from metal put into the recycling system but are rarely recovered from the general waste stream.

Paper and newsprint

Paper and newsprint can be recycled by reducing it to pulp and combining it with pulp from newly harvested wood. 
As the recycling process causes the paper fibres to break down, each time paper is recycled its quality decreases. This means that either a higher percentage of new fibres must be added, or the paper must be down-cycled into lower-quality products. Any writing or colouration of the paper must first be removed by deinking, which also removes fillers, clays, and fibre fragments. Almost all paper can be recycled today, but some types are harder to recycle than others. Papers coated with plastic or aluminium foil, and papers that are waxed, pasted, or gummed, are usually not recycled because the process is too expensive. Sometimes recyclers ask for the removal of the glossy paper inserts from newspapers because they are a different type of paper. Glossy inserts have a heavy clay coating that some paper mills cannot accept. Most of the clay is removed from the recycled pulp as sludge, which must be disposed of. If the coated paper is 20% clay by weight, then each ton of glossy paper produces more than 200 kg of sludge and less than 800 kg of fibre. The price of recycled paper has varied greatly over the last 30 or so years. A German price of €100 (£49) per tonne was typical in 2003, and prices rose steadily over the following years. By September 2008 the American price had reached $235 per ton, but it then fell to just $120 per ton, and over six weeks in January 2009 the UK price fell from about £70 per ton to only £10 per ton. The slump was probably due to the economic downturn in East Asia, which caused the Chinese market for waste paper to dry up. Prices averaged $120.32 per ton at the start of 2010, then rose rapidly in May 2010, reaching $217.11 per ton in the USA by June 2010 as China's paper market began to reopen. Mexico, America, the EU, Russia and Japan all recycle paper en masse, and there are many state-run and private schemes operating in those countries. In 2004 the paper recycling rate in Europe was 54.6%, or 45.5 million short tons (41.3 Mt). 
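The clay mass balance quoted above (glossy paper at roughly 20% clay by weight) is simple enough to sketch in a few lines. This is an illustrative back-of-the-envelope split, not a model of a real deinking mill, which also loses ink, fillers, and short fibre fragments to the sludge stream:

```python
def deinking_mass_balance(paper_kg: float, clay_fraction: float = 0.20):
    """Split a batch of clay-coated paper into sludge and recoverable fibre.

    clay_fraction is the share of the paper's weight that is clay coating
    (the text quotes ~20% for glossy newspaper inserts). Real fibre yields
    are lower, since deinking also removes ink and fibre fragments.
    """
    sludge_kg = paper_kg * clay_fraction   # clay leaves as sludge
    fibre_kg = paper_kg - sludge_kg        # everything else, at best, is fibre
    return sludge_kg, fibre_kg

sludge, fibre = deinking_mass_balance(1000)  # one metric ton of glossy paper
print(sludge, fibre)  # 200.0 800.0
```

This matches the text's bounds: at least 200 kg of sludge and at most 800 kg of fibre per ton, with actual fibre recovery falling below that ceiling once the other deinking losses are counted.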
The recycling rate in Europe reached 64.5% in 2007, which confirms that the industry is on the path to meeting its voluntary target of 66% by 2010.

Plastic recycling is the process of recovering scrap or waste plastics and reprocessing the material into useful products. Compared to glass or metallic materials, plastic poses unique challenges. Because of the massive number of types of plastic, each carries a resin identification code, and they must be sorted before they can be recycled. This can be costly; while metals can be sorted using electromagnets, no such 'easy sorting' capability exists for plastics. In addition, while labels do not need to be removed from bottles for recycling, lids are often made from a different kind of non-recyclable plastic. To help in identifying the materials in various plastic items, resin identification code numbers 1-6 have been assigned to six common kinds of recyclable plastic resins, with the number 7 indicating any other kind of plastic, whether recyclable or not. Standardized symbols are available incorporating each of these resin codes.

Tire recycling or rubber recycling is the process of recycling vehicle tyres that are no longer suitable for use on vehicles. These tyres are among the largest and most problematic sources of waste, due to the large volume produced and their durability. The same characteristics that make waste tyres such a problem also make them one of the most re-used waste materials, as the rubber is very resilient and can be reused in other products. In the United States, approximately one tyre is discarded per person per year. However, material recovered from waste tyres, known as "crumb", is generally only a cheap "filler" material and is rarely used in high volumes. Used tyres can be added to asphalt for producing road surfaces or to make rubber mulch used on playgrounds, basketball courts and new shoe products. 
They are also often used as the insulation and heat absorbing/releasing material in specially constructed homes known as earthships. It is arguable that tire crumb in applications such as basketball courts could be better described as "reused" rubber rather than "recycled".

Ship breaking is a type of ship disposal involving breaking up ships for scrap recycling, with the hulls being discarded in ship graveyards. Most ships have a lifespan of a few decades before there is so much wear that refitting and repair become uneconomical. Ship breaking allows materials from the ship, especially steel, to be reused. Equipment on board the vessel can also be reused. Until the late 20th century, ship breaking took place in port cities of industrialized countries such as the United Kingdom and the United States. Today, most ship breaking yards are in Alang in India, Chittagong in Bangladesh, Aliağa in Turkey and Gadani near Karachi in Pakistan. Ship breaking is one example that has associated environmental, health, and safety risks for the area where the operation takes place; balancing all these considerations is an environmental justice problem.

In many countries, there is an active market in re-selling used clothes. In Britain, this market is dominated by charity shops, which sell donated clean clothes. Less saleable clothes are put into the recycling waste stream. Textiles are made of a variety of materials, including cotton, wool, synthetic plastics, linen, modal and a variety of other materials. A textile's composition will affect its durability and method of recycling. Textiles entering the recycling stream are sorted and separated by workers into good-quality clothing and shoes which can be reused or worn. There is a trend of moving these sorting facilities from developed countries to developing countries, with the clothes either donated as charity or sold at a cheaper price. Many international organisations collect used textiles from developed countries as a donation to third-world countries. 
This recycling practice is encouraged because it helps to reduce unwanted waste while providing clothing to those in need. Damaged textiles are further sorted into grades to make industrial wiping cloths and for use in high-quality paper manufacture or material suitable for fibre reclamation and filling products. If textile reprocessors receive wet or soiled clothes, however, these may still be disposed of in a landfill, as washing and drying facilities may not be present at sorting units. Fibre reclamation mills sort textiles according to fibre type and colour. Colour sorting eliminates the need to re-dye the recycled textiles. The textiles are shredded into "shoddy" fibres and blended with other selected fibres, depending on the intended end use of the recycled yarn. The blended mixture is carded to clean and mix the fibres and spun ready for weaving or knitting. The fibres can also be compressed for mattress production. Textiles sent to the flocking industry are shredded to make filling material for car insulation, roofing felts, loudspeaker cones, panel linings and furniture padding. According to Earth911.com, "Metal hangers, while made of steel, can be difficult to recycle because their hooks can damage recycling equipment and some have a petroleum coating. Some curbside recycling programs do accept them.... Many dry cleaners take back hangers, too...."

Chat and furnace slag

In North America, mine chat waste can be used on snow-covered roads to improve traction; as gravel; and as construction aggregate, principally for railway ballast, highway construction, and concrete production. Furnace slag and, to a lesser degree, coal slag have been used in lieu of construction and railway ballast gravel in the UK. Clinker, slag, fly ash and in some cases ashes have all historically been used in places such as the industrial parts of Yorkshire and South Wales to make domestic cinder paths. 
- Netregs - Guidance on Roadstone coating processes.
- Plasterboard recycling.
- EUROGYPSUM, Environmental and Raw Material Committee. Factsheet on: What is gypsum? (PDF; archived from the original 2013-12-02; retrieved 2013-11-25.)
- The European Parliament and the Council of the European Union. Directive 2008/98/EC on waste (Waste Framework Directive). Official Journal L 312, 22/11/2008, p. 0003-0030. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32008L0098:EN:NOT (retrieved 16 December 2013.)
- Hurd, David C. (1993). Recycling of consumer dry cell batteries. Park Ridge, N.J.: Noyes Data Corp. ISBN 0-8155-1325-9.
- "Batteries - Municipal Solid Waste (MSW)". United States Environmental Protection Agency. Retrieved 2008-02-21.
- "EU agrees battery recycling law". BBC News. 2006-05-03.
- "The Rise of the Machines: A Review of Energy Using Products in the Home from the 1970s to Today" (PDF). Energy Saving Trust. July 3, 2006. (Archived from the original August 8, 2007; retrieved 2007-08-31.)
- "Printers: Refills or new cartridges?". PCWorld.ca. 2007-04-03. (Archived from the original 2007-06-05; retrieved 2009-07-22.)
- Metals - aluminium and steel recycling. (Archived 2007-10-16 at the Wayback Machine.)
- Recycling of copper.
- Zinc recycling.
- Lead recycling.
- Tin recycling.
- "EarthAnswers - How is Paper Recycled?". (Archived from the original 2008-04-13; retrieved 2008-02-23.)
- Sutherland, Keri; Gallagher, Ian (2009-01-05). "Recycling crisis: Taxpayers foot the bill for UK's growing waste paper mountain as market collapses". Daily Mail. London.
- Tomlinson, Heather (2003-04-06). "Recycled paper up in price". The Independent. London.
- "ERPC Facts and Figures". European Recovered Paper Council (ERPC). (Archived from the original 2007-09-30; retrieved 2006-09-27.)
- "European Declaration on Paper Recycling 2006–2010. Monitoring Report 2007" (PDF). European Recovered Paper Council. (Archived from the original 2009-09-07; retrieved 2009-01-17.)
- "UK reliance on foreign textile sorting 'frightening'". Letsrecycle.com. 2006-11-08. Retrieved 2010-06-19.
- "Salvation Army". Retrieved 2008-02-29.
- "Councils 'need to understand' importance of textile quality". Letsrecycle.com. 2006-11-24. Retrieved 2010-06-20.
- "10 Things in Your Closet You Can Reuse or Recycle". Retrieved 2014-02-09.
According to the National Kidney Foundation, kidney disease is the ninth leading cause of death in the United States. It affects 15% of the U.S. population, or around 37 million people, making it a serious public health concern. What’s even more concerning is that most people living with kidney disease don’t even know they have it. In many cases, chronic kidney disease doesn’t cause any symptoms until it’s considered moderate or advanced. As a result, people with chronic kidney disease have an increased risk of diabetes, high blood pressure, and other complications. This guide explains what the BUN test is, why it’s used, and what it involves. It also provides an overview of CKD symptoms, risk factors, and treatment options. The purpose of the BUN test is to determine if an individual has a higher-than-normal amount of urea nitrogen in their blood. In healthy people, the kidneys filter waste products out of the bloodstream, play a role in maintaining normal fluid volume, and help control blood pressure and other functions. When the kidneys are damaged, they can’t carry out these important functions as well as usual. As a result, urea nitrogen and other waste products build up in the blood, causing symptoms such as fatigue and nausea. The BUN test is typically performed as part of a basic or comprehensive metabolic panel. In adults, both tests are often ordered once per year. The BUN test should also be done when an individual is experiencing symptoms of late-stage kidney disease. These symptoms include fatigue, muscle cramps, and frequent urination. People who have high blood pressure or diabetes should have this test, along with other kidney-function tests, at least once per year, as recommended by the American Academy of Family Physicians. All that’s required for the BUN test is a blood sample. 
No special preparation is required for the BUN test, but it may be necessary to fast for 10 to 12 hours if the test is to be performed as part of a basic or comprehensive metabolic panel. Both panels check the amount of glucose in the blood, which is affected by eating and drinking. In people with chronic kidney disease (CKD), the kidneys have lost some of their ability to filter the blood. This allows minerals, fluid, and waste products to build up in the bloodstream. Sodium, potassium, calcium, and phosphorus are some of the most important minerals in the human body. Although they’re necessary for survival, allowing them to build up in the bloodstream can cause life-threatening complications. In addition to filtering blood, the kidneys are also responsible for producing certain hormones, maintaining normal blood pressure, and inducing the manufacturing of blood cells. When kidney function is impaired, the kidneys can’t carry out these functions, increasing the risk for high blood pressure and medical problems caused by hormone imbalances or a lack of red blood cells. For example, some people with chronic kidney disease develop anemia, which can cause fatigue, dizziness, pale skin, chest pain, difficulty breathing, weakness, and headaches. People with CKD may also develop bone disorders caused by an imbalance of phosphorus and calcium in the blood. These disorders don’t cause symptoms right away, but as they worsen, can cause pain in the bones and joints. CKD may not cause any symptoms at first. As the disease worsens, swelling is usually one of the first symptoms to appear. Swelling occurs because the kidneys can’t filter excess fluid from the bloodstream. People with CKD may also experience itching, dry skin, chest pain, fatigue, changes in urination, sleep problems, vomiting, muscle cramps, loss of appetite, or headaches. Some people are more likely to develop CKD than others. 
High blood pressure and diabetes both increase the risk for CKD by damaging the blood vessels that supply the kidneys. People with heart disease or a family history of CKD are also more likely to develop the disease. When a health care provider orders a BUN test, the patient must provide a blood sample. The sample is obtained via a procedure known as venipuncture, which involves puncturing a vein with a needle. Before performing this procedure, a phlebotomist examines the patient’s arms to identify a suitable vein. Once the vein has been located, the phlebotomist uses a tourniquet to stop some of the patient’s blood from returning to the heart. This causes the blood to collect in the vein, making it easier to draw a blood sample. After puncturing the vein with a needle, the phlebotomist uses a tube to collect enough blood for analysis. When the sample arrives at the laboratory, a laboratory scientist uses a procedure known as LX20 modular chemistry to quantify the amount of urea nitrogen in the blood. A normal BUN level usually ranges from 6 to 20 mg/dL (milligrams per deciliter). Chronic kidney disease is usually irreversible, but it’s possible to slow the progression of CKD with lifestyle changes and medications. Since CKD causes minerals to build up in the bloodstream, diet is extremely important for managing kidney disease. The National Institute of Diabetes and Digestive and Kidney Diseases recommends avoiding high-sodium foods and consuming no more than 2,300 mg of sodium per day. High levels of sodium are typically found in canned soups, frozen meals, snack foods, and other prepared foods. Consuming less sodium can help a person with CKD avoid high blood pressure or control hypertension once it has developed. In the advanced stages of kidney disease, it may be necessary to consume less protein, as protein is broken down into several waste products that build up in the bloodstream when the kidneys aren’t working well. 
People with advanced kidney disease may also need to limit their intake of foods containing high levels of potassium and phosphorus. The human body needs both minerals for survival, but too much potassium or too much phosphorus in the bloodstream can be dangerous. High levels of phosphorus are found in dairy products, poultry, fish, red meat, oatmeal, nuts, and beans. Bananas, oranges, potatoes, dairy products, whole-wheat bread, beans, brown rice, and nuts are considered high in potassium. Several types of medications are used to prevent CKD complications and protect the kidneys from further damage. Some people with CKD take medication to reduce their blood pressure, which can prevent damage to the blood vessels in the kidneys. Statins may be used to lower cholesterol, reducing the risk of cardiovascular complications of CKD. In people with CKD who also have a high risk of heart attack, aspirin and other blood thinners may be used to prevent heart attacks. Finally, some people with CKD take medicine to reduce the amount of uric acid in their blood. If too much uric acid builds up in the bloodstream, it can cause a painful condition known as gout. When the kidneys stop working, it may be necessary to start hemodialysis or receive a kidney transplant. Hemodialysis, commonly shortened to dialysis, is a process in which blood is removed from the body, filtered by a machine, and then returned to the individual’s bloodstream. The filtering process removes excess fluid and waste products from the blood, which can relieve some of the symptoms of CKD. Hemodialysis also helps control the individual’s blood pressure and prevent the complications that can occur when too much sodium, calcium, and potassium build up in the bloodstream. Kidney transplantation involves taking a kidney from a donor and implanting it in a person with chronic kidney disease. 
Not everyone with kidney disease qualifies for a kidney transplant, so it’s important to work closely with a nephrologist (kidney specialist) to determine if transplantation is a viable option. The donor kidney can come from a living or deceased donor. If the organ comes from a deceased donor, the recipient must go to the hospital immediately after being notified that a kidney is available, as there’s a limit on how much time can pass between when the donor dies and when the kidney is transplanted into the recipient. A high BUN level doesn’t necessarily indicate CKD; it can be caused by several other conditions, including heart attack, dehydration, urinary tract infections, gastrointestinal bleeding, and congestive heart failure. If a single test shows an elevated BUN level, more tests are usually needed to determine the underlying cause. Conversely, a normal BUN level doesn’t rule out kidney disease; it’s possible to have kidney disease and still have a normal BUN level. In addition to the BUN test, a health care provider may order a creatinine test, a urine protein test, a microalbuminuria test, or a GFR test to determine if the kidneys are functioning normally. Like urea nitrogen, creatinine is a waste product that forms during protein metabolism. It’s possible to have a normal BUN level and a high creatinine level, indicating that the kidneys aren’t working at their full capacity. The urine protein test is used to determine how much protein is in an individual’s urine. A high urine protein level is an indicator of kidney damage. The microalbuminuria test is more sensitive than the protein test, as it detects small quantities of protein in an individual’s urine. The results of this test can be used to gauge the amount of damage that has been sustained by the kidneys. GFR stands for glomerular filtration rate and is a measure of how well the kidneys are working. In healthy adults, a normal GFR is 90 or above. A GFR of 60 to 89 may be normal in some people, but a GFR below 60 is abnormal.
Anyone with a GFR below 30 should see a kidney specialist regularly. A GFR below 15 indicates that an individual is in kidney failure and may need hemodialysis or a kidney transplant. Whether an elevated BUN level requires treatment depends on how much urea nitrogen is in the blood and whether other test results are abnormal. Not everyone with an elevated BUN level needs treatment right away. A healthcare provider may decide to order the test again in a few months to see if the individual’s BUN level is still high. A high BUN level is usually anything over 20 mg/dL; however, some laboratories have different reference ranges. It’s important to discuss the results with a medical professional to determine what’s normal for a particular lab. Factors other than kidney disease can also raise BUN levels. People who follow a high-protein diet may have higher levels of urea nitrogen in their blood. Certain medications may also cause elevated BUN levels. These medications include high-dose aspirin, diuretics, some antibiotics, and some drugs used to treat high blood pressure. People with severe burns may also have higher-than-normal amounts of urea nitrogen in their blood. With venipuncture, there is a slight risk of infection once the skin has been pierced with a needle. A phlebotomist will reduce the risk of infection by cleansing the skin thoroughly before inserting a needle. After a blood draw, some people experience bruising or mild tenderness at the insertion site. Dizziness and fainting may also occur during a blood draw. Anyone with a history of fainting during this procedure should alert the phlebotomist before having blood drawn for a BUN test.
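The numeric cut-offs above (a GFR of 90 or above is normal, below 60 abnormal, below 30 warrants a specialist, below 15 indicates kidney failure, and a typical BUN reference range of 6 to 20 mg/dL) can be summarized in a short sketch. This is purely illustrative, not a clinical tool: the function names and category labels are my own shorthand, and individual laboratories use different reference ranges.

```python
def gfr_stage(gfr):
    """Map a glomerular filtration rate to the broad categories
    described in the text (illustrative only, not medical advice)."""
    if gfr >= 90:
        return "normal"
    if gfr >= 60:
        return "may be normal for some people"
    if gfr >= 30:
        return "abnormal"
    if gfr >= 15:
        return "abnormal - see a kidney specialist regularly"
    return "kidney failure - dialysis or transplant may be needed"


def bun_flag(bun_mg_dl, lower=6, upper=20):
    """Flag a BUN result against the usual 6-20 mg/dL reference range.
    Some labs use different ranges, so the bounds are parameters."""
    if bun_mg_dl > upper:
        return "high"
    if bun_mg_dl < lower:
        return "low"
    return "normal"
```

For example, `gfr_stage(45)` falls in the broadly "abnormal" band, while `bun_flag(25)` would be flagged as "high" against the default range.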
The Cleveland Clinic (my.clevelandclinic.org/health/articles/15641-renal-diet-basics): Offers tips for following a renal diet, which is the diet recommended for people with chronic kidney disease.
MedlinePlus (www.medlineplus.gov/chronickidneydisease.html): Provides a detailed overview of chronic kidney disease and its symptoms.
NIDDK (www.niddk.nih.gov/health-information/kidney-disease/chronic-kidney-disease-ckd/managing): The National Institute of Diabetes and Digestive and Kidney Diseases explains how to keep kidney disease under control.
National Kidney Foundation (www.kidney.org/atoz/content/kidneytests): Provides details on several blood tests and imaging tests used to diagnose kidney disease and monitor kidney function in people with CKD.
National Kidney Foundation (www.kidney.org/atoz/atozTopic_Transplantation): Offers a list of resources aimed at preparing people with CKD to receive a kidney transplant and take care of their new kidneys once transplantation has occurred.
<urn:uuid:b79bc70a-c2ee-49c9-b738-bf3a486c3d80>
CC-MAIN-2021-43
https://www.testing.com/blood-urea-nitrogen-bun-test/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00070.warc.gz
en
0.937911
2,624
3.546875
4
LOSS COMPENSATION METHODS FOR STONE
JOHN GRISWOLD & SARI URICHECK

7 ORGANIC BINDERS

Often, a repair system that incorporates an organic binder with transparent or translucent fill materials is found to be the most successful method of emulating a translucent stone like alabaster. A search through the chemical industry's current technical literature on “artificial stone” reveals that most industrial approaches incorporate a multiple organic resin system with organics as binder and filler. Although some treatments using this approach have been successful in the outdoors (Colton 1996), this organic binder-filler system is most widely used in museum contexts, where the organic components are not subject to intense UV exposure. Organic systems can cure by solvent evaporation (for example, a methacrylate in solution in acetone), phase transition (the cooling of a melted wax), or chemical reaction (the cross-linking of an epoxy). Each type of system has its advantages and disadvantages. Solvent-based systems are often stringy, difficult to work, and exhibit high shrinkage upon evaporation of the solvent. Proper hardness and texture can be difficult to achieve with thermoplastic systems and organic reaction-cured systems. Reaction-cured resins are often excessively strong and toxic, have high shrinkage, are difficult to reverse, and are unstable under environmental exposure. Many combinations of organic resins in organic solvents have been attempted. Shellac diluted in alcohol was popular before the advent of modern materials. Plenderleith and Werner (1988) describe a putty made with nitrocellulose in acetone and amyl acetate, plus added white sand. This recipe was recommended where an anhydrous, nonshrinking fill was needed. “AJK Dough” is a traditional putty made with polyvinyl acetate resin as a binder (Cornwall 1965), used by archaeologists on a variety of artifacts where nonaqueous fills are required.
Today, thermoplastic resin and solvent-based fills often use a more stable acrylic resin such as Paraloid B-72 (an ethyl methacrylate/methyl acrylate copolymer) or polymethyl methacrylate (PMMA) as a binder. Variation of the solvent composition can tailor the evaporation and thus the working time of the mix. Unless extensively bulked with an appropriate filler, shrinkage is still a problem. Mixing dry resin powder with the aggregate, then adding an appropriate solvent, is one means of achieving minimal shrinkage (Domaslowski and Strzelczyk 1993). Experimentation with other resins continues among conservators. Cyanoacrylate mixed with granulated methacrylates and stone flour was reported as having “the density closest to the stone” (Yakhont 1991) of many adhesive mixtures tested. Aqueous organic binders have also been used for fill materials on stone. “Gesso” coatings and putties made of glue and added chalk or stone dust have been found on polychrome stone sculpture from a number of periods and cultures. Traditional scagliola recipes are based on a glue binder (Ashurst 1979). Modern acrylic emulsions are used in commercially made artists' modeling pastes such as Liquitex (Pocobene 1994). The slight resiliency afforded such a fill by the acrylic emulsion binder makes it useful for specific applications, but shrinkage and introduction of water to the substrate can be problematic. An alternative to the aqueous or solvent-based organic mixes is the use of a solvent-free organic binder applied thermoplastically. With no solvent present, shrinkage upon evaporation of a carrier is not a concern. A traditional example of this type of fill is the use of tinted shellac sticks, applied with a torch to preheated, dark-colored stones where the yellow-brown color is not a distraction (Kibby 1996).
Hempel (1968) introduced the use of a polyvinyl acetate melted directly onto a stone object for fills, a method which has been modified (Burke 1996; Colton 1996) and published since (Gänsicke and Hirx 1997). The revised method consists of a mix of ethylene acrylic acid copolymers (Allied Signal AC-540 and AC-580) with the PVA AYAC. To impart light-stability to the mix, an antioxidant (Irganox) is added. The mix produces a transparent and colored fill material, which can be manipulated by the addition of other fillers like pigments and marble dust. This mix has been used on marble and alabaster objects in indoor and outdoor contexts over the last 15 years (Colton 1996). The only significant deficiencies appear to be the inability to fill a small-scale hairline loss or shallow, spalled surface and the potential for cold-flow when applied without support in large-scale losses. Epoxy systems may also be utilized when a strong fill is required for a translucent stone. Epoxies are a class of synthetic resins characterized by a molecular structure with a highly reactive oxirane ring. The oxirane ring acts as the mechanism for cross-linking the polymer chains when catalyzed by an amine hardener. Epoxy resins are generally more expensive than other thermosetting resins. They resist common solvents, oils, and chemicals, are inert, have high mechanical strength, exhibit negligible shrinkage, and can be formulated to have a wide variety of properties, such as resiliency and heat resistance (Brady 1991). Much has been written regarding their use in conservation (Selwitz 1993; Kotlik 1983). The most light-stable epoxies must be used for stone fills, even if the stone is dark. There is a great risk of excess epoxy staining the stone and discoloring, and invisible residue may darken with time. HXTAL NYL-1 has been shown to be the least likely to yellow (Down 1986), but its extremely slow curing time (48 to 72 hours) makes it difficult to work with.
In spite of this fact, it may be the most widely used among conservators for strong, translucent fills. Other epoxies have also been used extensively because of their reasonably good resistance to yellowing and their faster cure times. These epoxies include some of the Araldite AY series and Epotek (Down 1986). The use of epoxy in plastic repairs has also been favored because of the potential range of its optical properties when modified with various fillers. Repairs that incorporate microcrystalline wax along with fumed silica as a filler for epoxy can effectively emulate large-crystal translucent marbles (Craft 1996). The technique of first casting the fill in place with the use of a polyethylene film barrier and then adhering the cast fill with a reversible adhesive improves reversibility and greatly reduces the problem of migration of the hardener or other components into the substrate. Epoxy fills are sometimes colored with organic dyes because their color disperses more readily than pigments. Epoxy solutions in alcohol show superior reticulation capacity (the ability to form a strong network on dilution) compared with solutions in aromatic hydrocarbons (Domaslowski 1990). If a high-strength epoxy binder can be used in dilute form with a carrier solvent evaporating out of the intergranular spaces, a high degree of porosity can be maintained in the fill, and concerns about yellowing are significantly reduced. One serious problem encountered in “homemade” formulations of fill materials based on epoxy resin is migration of resin or hardener out of the bulked fill and into the surrounding substrate during curing. An isolating coating of a stable resin such as Paraloid B-72 is generally used to mitigate this problem. However, several conservators report they found this barrier layer to be insufficient to prevent staining of the substrate (Burke 1996).
This disadvantage of the epoxies leads some to use polyester resins, the most popular being the Akemi Marmorkit 1000, a particularly light-stable polyester resin. It has a faster setting time and greater resistance to penetration into the substrate due to its viscosity (Burke 1996). Polyester resins have traditionally been used by stonemasons for repairs. They became commonly used for restoration after their adoption by the marble industry after World War II (Brady 1991). Polyesters include a large group of synthetic resins made by the condensation of maleic, phthalic, or other acids with an alcohol or glycol to form an unsaturated polyester. The resin mix is composed of this polymerized polyester, which is copolymerized by an unsaturated hydrocarbon such as styrene monomer. The reaction is catalyzed by other additives such as benzoyl peroxide. The styrene hardener is added in small amounts to catalyze the copolymerization and cross-linking of the resin (Werner 1959; Brady 1991). Thixotropic additives are used to make specific formulations with “knife grade” or “flowing” consistencies. Because of their strength and very quick setting time, polyester resins are often used for adhering large, heavy sections of broken stone. They are still used in many applications because of their lower cost than epoxies, their quick setting time, and their translucency. Polyesters are subject to deterioration on exposure to weathering, with resulting embrittlement, shrinkage, yellowing, crazing, and failure of adhesion. These problems have been ameliorated by the addition of light stabilizers to compositions such as Akemi's Marmorkit 1000. Their common usage warrants the continual comparative studies of their properties and degradation (Shashoua 1992). 7.1 FILLERS FOR ORGANIC SYSTEMS There are numerous organic materials commonly used as fillers for organic binders. 
Broken chunks of precast tinted epoxy or polyester, Plexiglas crumbs, wax, PVAs, and methacrylates can be mixed with adhesive binders and cut to resemble a composite stone, breccia stone, granite, or crystalline marble. Hollow phenolic microballoons can be added to epoxy or polyesters to lower the overall strength of the fill or to induce a degree of porosity (Brady 1991). Epoxy with an organic “blowing agent” additive to induce foaming can be used for lightweight filling of voids (Blackshaw and Cheetham 1982; Sturge 1987). Paper pulp has recently been used successfully by several conservators with a number of different binders including Polyfilla (a cellulose-based plaster), PVA emulsion, or methyl cellulose (Podany et al. 1995). The high strength achievable with variations of this technique makes it valuable for some structural applications. The most common combination of binder and filler types used by sculpture conservators is an organic binder with inorganic fillers. The use of colloidal fumed silica, alumina, or titania preserves translucency while increasing viscosity and can reduce or increase the overall strength. Fumed silica also lowers the weight of fill material per unit volume (Berrett 1996; Byrne 1996; Vine 1996). Fumed titanium oxide has also been used to great advantage by conservators in achieving a white, translucent effect while thickening the mixture (Berrett 1996). Super-loading epoxy with fumed silica (e.g., 10:1 v:v) creates a pliable, marblelike dough (Barenne-Jones 1989). In addition to fumed silica, numerous inorganic additives are commonly used. Stone flour, sand (e.g., washed silver sand), crushed stone, calcium carbonate, aluminum oxide, and pigments are some of the more common additives. Silica beads have been used in architectural fills, and glass microspheres from 3M Corporation are used in a range of fills in conservation (Hatchfield 1986).
Successful results have been achieved with ceramic microspheres (Maltby 1996), powdered fired clay with glass microballoons (Higuchi and Setsuo 1984), and white glass enamel powder (Griswold 1990b) as fillers for epoxies. The glass enamel allows the epoxy to achieve a warm white, highly translucent appearance for outdoor use (Griswold 1990a). Stone aggregate and sand are commonly added to epoxy or polyester for cast replacements and plastic repair in architectural and monumental contexts, particularly in Europe. This type of polyester cast replacement was used for the restoration of Michelangelo's vandalized Pietà (Wihr 1986), with the original crushed bits of stone as the filler. Fills on monumental sandstone sculpture in the Czech Republic have been made with epoxies after consolidation with dilute epoxy under vacuum (Selwitz 1993). Recipes for filling losses in marble and alabaster based on polyester resin with marble and alabaster powder have proved effective over time (Larson 1978). In an interesting note, Larson recommends heating water-soaked alabaster to increase whiteness before crushing it into powder. For a stronger fill, commercial epoxy putties may be used. These include Pliacre, Milliput, and Martin Carbone AB123, which are based on an epoxy resin with alumino-silicate ceramic fillers, titanium dioxide, and other inorganic pigments. Conservators often use these putties “straight” for gap-filling or supportive shells. They are often tinted with artist's pigments or textured with skim coats of other materials and painted to reintegrate the surrounding surface.
<urn:uuid:e4f9736d-f5b6-4c89-99db-4225668c6a71>
CC-MAIN-2021-43
https://cool.culturalheritage.org/jaic/articles/jaic37-01-007_7.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00229.warc.gz
en
0.924356
2,875
3.28125
3
Late in the first century A.D., a group of Christians is gathered to worship God and tell the stories of their Lord Jesus. They come together regularly, especially on the first day of the week. They sing songs of praise. They recount the stories of Jesus that have been passed on for decades – the things he said, the wonders he did, and especially the way he gave his life for theirs. Then they share a meal together – a simple meal of bread and wine, and they tell each other of the meals that Jesus once shared with his followers and the way he told them to remember him in the breaking of the bread. Today, it is Matthew’s turn to tell the story, and he recounts a parable that Jesus once told to the chief priests and the Pharisees. It’s another parable about a vineyard. They’ve heard a lot of those recently, but Matthew assures them that this one is different. There’s a landowner who decides to plant a vineyard. He puts a fence around it, digs a wine press in it, and builds a watchtower. Then he leases the land to some tenants who are supposed to work the land, produce fruit, and hand over the fruit to the owner at harvest time. But they don’t. When the landowner sends his servants to collect the produce, they are beaten, stoned, and killed. In the middle of the story, Matthew is interrupted… “He’s got to kick them out of there!” a young man hollers, “Take them by force, maybe at night when they’re not expecting it, and reclaim the vineyard!” But Matthew continues with the story as Jesus told it. “That’s not quite what happens,” he says, “This is what the landowner does… He sends his own son to collect the harvest, saying, ‘Surely they will respect my son.’ But they don’t. They seize him and they kill him, hoping that they may get his inheritance. “Oh!” cries a young woman in the group, “Why did he send his son? Didn’t he know that they would kill him? How could he send someone that he loves into a situation like that? 
Wouldn’t it have been better just to let them have the vineyard and forget about them?” “Wait, I’m not finished yet,” Matthew answers, “The story isn’t just a story about a landowner and some wicked tenants. You see, when Jesus told the story he was talking to the religious leaders who were trying to get rid of him. He was trying to tell them something by telling them the story. I think that everything in the story represents something else. “Oh, I get it,” says the young man, “like the landowner has got to be God, because God makes the world for people to live in and trusts them to be good and faithful to him.” “That’s the idea,” Matthew responds excitedly, “So who are the tenants and the slaves and the son?” “You said that Jesus was talking to the Pharisees and the chief priests, right?” the man continues, “So they’ve got to be in it somewhere. Couldn’t they be the tenants, because God did a lot of good things for the Jews, even giving them a land to live on and work on? “So that means that the son has got to be Jesus himself, the one that they rejected and killed, and the slaves before him must be the prophets who spoke the words of the Lord and called the Jews to be good and just and faithful to the covenant with God.” “You’re on the right track now,” Matthew encourages him, “In fact, the vineyard represents the kingdom of God and those in it have the responsibility to produce the fruit of the kingdom.” “That reminds me of the garden of Eden,” the woman muses, “Didn’t God plant a garden for people to live in, and they rebelled and disobeyed him?” “You’re absolutely right,” answers Matthew, “Even the mythic stories of the book of Genesis show how humanity has disobeyed God, squandered the gifts God has given, and used God-given freedom to rebel against God. Remember the story of Cain and Abel? It shows that humans become capable of killing each other. 
Eventually, the wickedness of humanity is so great that God decides to wipe us out, flooding the earth with water and saving only the few people who have been good.” “But Matthew,” interrupts the man, “I thought we were talking about the Jews being the wicked tenants. Now you seem to be talking about people in general, maybe even about us!” “Yes, I do think the story’s about us too. The Jews may have become tenants in the kingdom of God before us, and some of them may even have been kicked out, but if we’re tenants now, we’ve got a responsibility to produce fruit for the kingdom. Don’t you think?” The young woman speaks up again, trying to fit the pieces of the story together in her mind, “Matthew, I’m not sure that I completely understand yet. Can you tell me more about how the landowner is like God?” “Okay. In the stories of the People of Israel, we see God making covenants with the people, just as the landowner agreed with his tenants that they would live and work on the land, and that he would come to collect the produce at the harvest time. Although God provides generously for the people, they cannot manage to follow the law that is set out for them. Ten simple rules God has given them, but over and over, they rebel, breaking the covenant and God’s heart. Prophets are sent and speak on God’s behalf. They cry out for justice, calling the people back to faithfulness to God. They challenge the people to look after the poor and to return to the ways of God, but each one of them is rejected. Finally, God sends Jesus into the world. ‘Surely they will respect my son,’ God says.” Matthew’s face falls as he thinks of Jesus. “We all know what happens to him,” he says sadly. Matthew pauses then, looking defeated. In the silence, most of the group is thinking about the same thing – the scene they’ve all heard about so many times from the men and women who were there to see it…. A lonely wooden cross stands at the top of a hill. 
On it hangs a wretched, rejected, abandoned man – the one who was supposed to be their leader. Blood drips down his face from the thorny crown that’s digging into his forehead. His hands and feet are held in place by thick nails that pierce his flesh and the wood behind. Although his arms are spread wide to embrace the world, his body is crumpled. His spirit is crushed. Quietly, someone suggests that it’s time for supper, and a few slowly begin to make the preparations. They speak only in hushed voices because others are still lost in thought. Some are crying as they remember their Lord, and others are gently comforting them. Soon the meal is ready. The bread is broken. The wine is poured, and both are distributed among the people. As Matthew receives a hunk of bread, he thinks of Jesus again, giving his body for the life of the world. This is the foundation of Matthew’s faith. Taking a swig of the wine, he feels it burn the back of his throat, and he thinks of the pain and sorrow that Jesus endured for him. Looking down into the cup, he sees the wine’s deep red colour, and he thinks of Jesus’ blood, poured out so that Matthew’s sin might be forgiven. Even as he eats the simple meal, Matthew knows that he is receiving the gifts of God: Nourishment from the grain and the grape. New life and forgiveness from Jesus his Saviour. This is the foundation of Matthew’s faith. The atmosphere around the table is subdued. People are eating quietly, not saying much to each other, except what is necessary to get through it. But finally, Matthew breaks the gloomy silence. “I’ve thought of something!” he announces in a voice too loud for the quiet room, “Remember the psalm that we sang the other day, the one about God’s victory over the enemies? I think it’s got the answer for us. It says, ‘The stone that the builders rejected has become the cornerstone. 
This is the Lord’s doing; it is marvellous in our eyes.’ We’re all so sad right now because our Lord was rejected and killed, but he didn’t stay rejected. He was like a rejected stone that became the cornerstone of the building. He was defeated, but he became victorious. He was killed, but he rose again. We were lost in our sin and our sadness, but he has given us new life and forgiveness.” That’s all that Matthew says, and then he lets everyone get back to their eating. And soon the noise level in the room goes up. People are talking, laughing, and eating. They haven’t forgotten about Jesus, but now when they look at the bread in their hands, they see more than the crucified one. The stone that the builders rejected has become the cornerstone. This is the Lord’s doing; it is marvellous in our eyes.
<urn:uuid:ae262c94-cfe7-4b25-8138-c5add785dbe5>
CC-MAIN-2021-43
https://curriejesson.ca/october-5-2014/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585696.21/warc/CC-MAIN-20211023130922-20211023160922-00590.warc.gz
en
0.98008
2,118
2.75
3
Real Life Superheroes: Comic Books and Pro Wrestling by Jerry Whitworth Some of the earliest fighting systems created by mankind were boxing and wrestling. While intended for use on the battlefield, these forms would also become competitive sports, as evident in the Olympic games of ancient Greece. Mythological heroes of this time period would include the Argonauts, such as the Gemini. Brothers Castor and Polydeuces/Pollux, the Dioscuri (better known today as the Gemini), were reportedly two of history’s greatest wrestlers. However, according to Greek myth, the greatest wrestler (and also an Argonaut) was Heracles, arguably the greatest mythological hero in human history. Heracles represents the most prevalent archetype for heroes in our history, the strongman; heroes in this vein include Gilgamesh, Thor, Beowulf, Hercules, Hanuman, Sha Wujing, Raijin, Samson, Superman, and Captain Marvel. Piecing all of this information together, it’s little wonder that comic books and professional wrestling share so many similarities today. In essence, both derive from mythology. For pro wrestling, it’s some amalgamation of wrestling competition and hero plays (another ancient Greek tradition). For comic books, it’s the modern storytelling of heroes performing great feats of bravery while combating the nigh-unimaginable threats that plague humanity. It should therefore come as little surprise that these elements have intermingled through the years since their mutual advents. While it is challenging to establish a precise timeline for when the two forms of media converged, an early example, interestingly enough, involves Superman. Often credited as the first superhero, Superman was published by National Allied Publications in 1938, where he proved to be a great success with the emerging audience for comic books.
Following this, Fleischer Studios was tasked with adapting the character for a series of animated shorts that premiered in 1941 (which National invested a great deal of money into because of the quality of product Fleischer was known for creating). When the time came to transfer the static, two-dimensional figure of Superman to a dynamic, three-dimensional figure rendered for animation, animators employed a real life model for the Man of Steel. Their choice was none other than future professional wrestler Karol Krauser. While this career path came some years after modeling for Fleischer, Krauser would go on to join three other wrestlers to form the Kalmikoffs, a fictitious stable of Russian brothers. The Superman comics themselves would occasionally venture into the world of professional wrestling, such as in Superman’s Girlfriend, Lois Lane #8 (April 1959), where Lois must deal with a wrestler suitor in the so-called Ugly Superman, and Superman #155 (August 1962), where Superman must battle real life wrestler Antonino Rocca, who was supposedly empowered by Mister Mxyzptlk. The same month Superman battled Rocca, another wrestler would play a critical role in the origin of another world famous superhero. In the pages of Amazing Fantasy #15 (August 1962), Peter Parker is empowered after being bitten by a radioactive spider and uses these newfound abilities to try and earn money. His earliest bid was in battling Crusher Hogan, the star wrestler of his league, who offered a cash prize to those that could last three minutes in the ring with him. Parker, whom Hogan dubbed “a little Masked Marvel,” easily defeated the wrestler, which led to Parker being discovered by a television producer and, some time later, to his fateful encounter at a television studio with the unnamed Burglar he failed to stop, the same man who went on to murder his Uncle Ben (leading Parker to become the hero Spider-Man). Marvel Comics would also infrequently dabble in the world of professional wrestling as time passed.
Spider-Man would again battle another wrestler in Amazing Spider-Man #139 (December 1974), this time one empowered by one of his greatest enemies, the Jackal, who gives disgruntled wrestler Maxwell Markham an exoskeleton to become the Grizzly in order to enact revenge on J. Jonah Jameson for costing him his job. In Thor #290 (December 1979), the Norse god becomes embroiled in a feud between wrestlers El Vampiro and El Toro Rojo, secretly an Eternal and a Deviant respectively. The latter would return to align with the Avengers in 1994 as part of the Delta Force. The same year Thor would tussle with El Toro Rojo, a group of female wrestling supervillains that would infrequently battle Earth’s heroes emerged in Marvel Two-in-One #54. Originally a group involved with the Femizon time traveler Thundra, the Grapplers counted among their members Screaming Mimi, who would go on to trouble the Avengers (before later becoming their ally). In 1985, Marvel Comics would introduce the Unlimited Class Wrestling Federation (UCWF), which featured characters with super powers battling each other and once counted the Fantastic Four’s the Thing as its champion. One of the organization’s wrestlers, Demolition Man (or D-Man), would become an unofficial partner to Captain America. In 1992, Marvel would adapt the characters of real life promotion World Championship Wrestling (WCW) in its own comic book series, which lasted twelve issues. DC Comics and Marvel wouldn’t be the only game in town to produce professional wrestling comics. Image Comics would also be involved with several wrestling books over the years. In 1999, WCW talent and high-level New World Order (nWo) wrestler Kevin Nash created and co-wrote a comic series called Nash through Image about a character based on him in a post-apocalyptic future, which lasted two issues and a preview book.
In 2002, two issues of Holy Terror would be published featuring wrestling blended with the supernatural (something not unfamiliar in adaptations of luchadores, or Mexican wrestlers, into other media). For 2007, Image would publish another wrestling story by translating and reprinting French comic creator Jerry Frissen’s Lucha Libre (the term for Mexican wrestling). The same year, the publisher printed Rob Zombie Presents: The Haunted World of El Superbeasto, featuring a former luchador trying to save the world, which was adapted into an animated film two years later. While not beginning with Image, Rob Zombie would earlier produce the adventures of El Superbeasto in Rob Zombie’s Spookshow International in 2003 through MVCreation before going to Crossgen and eventually ending up at Image. Archie Comics would introduce its own fictional wrestling organization, Intergalactic Wrestling, in the pages of Teenage Mutant Ninja Turtles Adventures #7 (December 1989). In the story, the Ninja Turtles are kidnapped in order to drum up business and television ratings for promoters Stump and Sling (not unlike what Mojo did with the X-Men). The promotion would become a fairly frequent aspect of the series, employing characters like Leatherhead, Ace Duck, Cudley the Cowlick, Cryin’ Houn’, and Trap. In 1991, Valiant Comics would begin producing licensed comics based on the World Wrestling Federation (WWF) beginning with WWF Battlemania, with a focus on the Ultimate Warrior, who was supposed to take Hulk Hogan’s mantle as the company’s protagonist only to eventually fizzle out, and the Undertaker, pushed as the brand’s antagonist. The series would last five issues and four “illustrated action books.” Creator Richard Dominguez, as part of his Azteca Productions company, published El Gato Negro in 1993 about a vigilante luchador who inherits his grandfather’s mantle to avenge the death of his best friend.
In Love and Rockets #46 (November 1994), Rena Titañon is introduced as a female wrestler whom the Hernandez brothers would occasionally return to throughout their career across various series. In 1996, Rafael Navarro would begin publishing the adventures of crime-fighting luchador Sonámbulo. That same year, the Ultimate Warrior would begin self-publishing a series based on his character through his Ultimate Creations studio that lasted four issues plus a Christmas special. For 1999, Chaos! Comics began producing comics based on the WWF featuring characters like Stone Cold Steve Austin, Mankind, Chyna, and the Rock. The Undertaker would garner an ongoing series through the publisher written by Beau Smith (Guy Gardner: Warrior, Wynonna Earp) that featured various wrestling characters such as Paul Bearer, Kane, and the Ministry of Darkness. Dark Horse Comics in 2004 released two series based on wrestling in The Nail (by Rob Zombie and Steve Niles) and El Zombo Fantasma. In 2007, the critically received Headlocked from Visionary Comics Studio began publication, first as a one-shot and then a three-issue mini-series the following year. World Wrestling Entertainment (WWE, formerly WWF) wrestler, commentator, and on-air personality Jerry “the King” Lawler would provide covers for the one-shot and first issue of the mini-series (one of the series’ artists, Michel Mulipola, wrestles under the name Kid Liger). The comic book world wouldn’t be alone in adapting the world of professional wrestling into its stories. In fact, wrestling personalities like A.J. Lee, Alex Shelley, Bryan Danielson, Christopher Daniels, CM Punk, Cody Rhodes, Daffney, Gregory Shane “Hurricane” Helms, Jeff and Matt Hardy, Jerry “The King” Lawler, Kid Liger, Kofi Kingston, Leva Bates, “Lightning” Mike Quackenbush, Matt Striker, Mick Foley, Raven, Rey Mysterio Jr, Rob Van Dam, Stevie Richards, and Velvet Sky are all admitted comic fans.
Interviews with some of these performers and others have become an infrequent feature on the Marvel Comics website, and WWE champion CM Punk would pen the introduction to the Avengers vs. X-Men trade collection. The line between superhero/supervillain and pro wrestler was rather blurred around the 1980s/1990s when the so-called gimmick wrestlers were at their peak, especially at the WWF. Characters like Ultimate Warrior, the Undertaker, Legion of Doom, Kane, Doink, Demolition, the Patriot, Big Van Vader, Goldust, Mankind, Papa Shango, Aldo Montoya the Portuguese Man O’ War, Damien Demento, Mantaur, Max Moon, Dungeon of Doom, and the Brood looked like they hopped off the printed page into reality. In 1996, WCW wrestler Sting would, at the suggestion of fellow wrestler Scott Hall, begin appearing at shows in a persona loosely based on James O’Barr’s the Crow. That same year, Kaiju Big Battel (http://www.kaiju.com/) premiered, which focused on tokusatsu (Japanese live action programs with superheroes and/or special effects), lucha libre, comic books, and Japanese pop culture. WWF wrestler “Hollywood” Gregory Helms in 2001 began performing at the promotion as the Hurricane, a superhero wrestler with a costume heavily influenced by the Green Lantern. The following year, wrestler and comic book fan Mike Quackenbush formed wrestling promotion Chikara in Philadelphia (a city famous for its Extreme Championship Wrestling, or ECW) with his tag team partner and friend Reckless Youth, featuring American lucha libre with a heavy emphasis on geek culture (predominantly comics and video games), going so far as to design many of their posters and DVD covers (http://www.chikarapro.com/store.shtml#!/~/category/id=666544) as homages to famous comic book covers (Chikara premiered its own webcomic in 2012). Also in 2002, wrestler Raven would co-write Spider-Man’s Tangled Web #14 in a story featuring Crusher Hogan. In 2003, Rey Mysterio, Jr.
would begin a mostly annual tradition of appearing at WrestleMania in a costume inspired by a superhero. Such characters have included Daredevil, the Flash, the Joker, Captain America, Iron Man, Silver Surfer, and Spider-Man. In 2010, WWE would self-publish their own comics about their characters through Titan Publishing in the WWE Heroes series, with an accompanying mini-series called TimeQuake: Dead Man Walking featuring the Undertaker and another planned series, TimeQuake: Gladiator, centering on John Cena. Of course, professional wrestling is not unique to the United States. In fact, it’s significantly more popular in Mexico and Japan. Mexico wasted even less time than the USA in adapting one of its heroes to the printed page. In wrestling terms, the closest analogue for what luchador El Santo means to Mexico is what Hulk Hogan means to the US. However, this doesn’t do justice to what Santo means to Mexico. It would be more accurate to take Hulk Hogan, Superman, “Davy” Crockett, and Ronald Reagan and blend them together to give some idea of what kind of fame Santo has in Mexico. In 1952, Santo would be adapted to a comic book and his exploits would span continuously for thirty-five years across four series. Santo would go on to not only be Mexico’s greatest luchador but also one of its biggest action stars, starring in fifty separate films. A fellow luchador who shared similar fame was the Blue Demon, who would get his own comic series in 1970 in El Increíble Blue Demon and later in La Leyenda de Blue Demon. Another luchador who made the jump to comics was Huracán Ramírez, who starred in Huracán Ramírez El Invencible beginning in 1968. Tinieblas would get his own comic in 1976 with El Imperio De Las Tinieblas, which lasted three years, followed by a second series called Tinieblas, El Hijo de la Noche in 1991 that lasted four years and another series that started in 2000 using just his name as the title.
In 1986, Sensacional de Luchas would begin, featuring stories of various luchadores over its nine-year run, including El Santo, Blue Demon, Rayo de Jalisco, Tinieblas, Black Shadow, Lizmark, Fray Tormenta, Ángel Blanco, Los Villaños, El Canek, El Satánico, La Bestia, the Killer, Bello Greco, Kung Fu, Atlantis, Super Ratón, César Curiel, Enrique Vera, Fabuloso Blonde, Butch Masters, Rayo de Jalisco Jr, El Perro Aguayo, Konan, El Médico Asesino, Cavernario Galindo, El Matemático, Black Shadow Jr, El Solitario, Kato Kung Lee, El Indómito, Murciélago Velázquez, Dos Caras, Tarzán López, Ringo Mendoza, Espectro Jr, Octagón, Fuerza Guerrera, Vampiro, Ray Mendoza, Volador, Máscara Sagrada, Dr. Wagner, Blue Panther, Love Machine, Último Dragón, Pierroth, Kendo Star, Black Magic (Norman Smiley), Super Porky, Masakre, Jaque Mate, Cien Caras, Mano Negra, and many more. Some of these luchadores and several of their children and nephews went on to work for WCW in the 1990s, generally in its burgeoning cruiserweight division (bringing lucha libre to the North American market). Following the demise of Sensacional de Luchas, some years would pass before new luchador comics reached the hands of fans. Místico (known today as Sin Cara in the WWE) would break this dry spell with Místico El Principe de Plata y Oro in 2003. For 2005, the adopted son of the Blue Demon would get a comic series in Blue Demon Jr. El Legado, as would the youngest son of El Santo, El Hijo del Santo, in Santo, la Leyenda de Plata. Cibernético would star in his own comic in 2007 in El Ojo Cibernético, e Historias de Carretera. Japan has had a long love affair with wrestling (or puroresu) and comic books (or manga). In fact, some of its most famous wrestlers started out as comic characters.
In 1968, Ikki Kajiwara and Naoki Tsuji began producing Tiger Mask through Kodansha, about a wrestler who famously made a career as a villain but was inspired to become a hero when a young boy who reminded him of himself wanted to grow up to be like his in-ring persona. The series would be adapted into animation, films, and video games, and New Japan Pro Wrestling (NJPW) would license the character in the early 1980s and have wrestlers don the costume throughout the years. A similar story followed with Jushin Liger, likely the most popular wrestler in Japan’s history. Created by famous mangaka (comic creator) Go Nagai (Cutie Honey, Devilman, Mazinger Z) for famous studio Sunrise (Mobile Suit Gundam), Jushin Liger was an animated series in 1989 about a boy who could summon a suit of biomechanical armor to battle the evil forces of the Dragonites. Nagai would adapt the series for Kodansha’s Comic Bom Bom, and the property’s popularity saw NJPW license the character, with wrestler Keiichi Yamada assuming the identity, which he has remained in since. Many comic series in Japan have featured pro wrestlers, including Airmaster, Baki the Grappler, Dragon Ball Z, Tenjho Tenge, and Tough/Shootfighter Tekken (one of the Four Great Ones in Cromartie High School models his face make-up after famous wrestler the Great Muta), but the series arguably best known for its use of pro wrestling is Yudetamago’s Kinnikuman, for Shueisha’s Weekly Jump in 1983. Originally a parody of Ultraman, the series evolved to revolve around wrestling featuring superheroes and aliens. Toys from the series would make their way to the USA in the M.U.S.C.L.E. line, and an anime (animation) based on the series’ sequel Kinnikuman II would air in America as Ultimate Muscle: The Kinnikuman Legacy. The sequel series would go so far as to parody the nWo with the dMp (Demon Manufacturing Plant), which included Kevin Mask (based on Kevin Nash).
Two other comic series involving wrestling are Toshimichi Suzuki’s Wanna-Be’s and Ikki Kajiwara and Kunichika Harada’s Pro Wrestling Superstar Retsuden consisting of semi-biographical stories of wrestlers like Dory Funk Jr, Terry Funk, Stan Hansen, Abdullah the Butcher, André the Giant, Mil Máscaras (one of the original “Big Three” luchadores along with El Santo and Blue Demon), Tiger Jeet Singh, Shohei Baba, Antonio Inoki, Karl Istaz, Ric Flair, Tiger Mask, Hulk Hogan, Bruiser Brody, and Great Kabuki (one of the first Japanese wrestlers to compete in the United States in the National Wrestling Alliance, or NWA).
<urn:uuid:191d8514-430e-41cb-ac7a-240278d4e5c7>
CC-MAIN-2021-43
http://comicartcommunity.com/comicart_news/real-life-superheroes-comic-books-and-pro-wrestling/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585504.90/warc/CC-MAIN-20211022084005-20211022114005-00070.warc.gz
en
0.943639
4,079
2.8125
3
Only two techniques can completely account for total plant water use and behaviour: the heat ratio method (HRM) and the heat field deformation (HFD) method. This conference proceeding outlines these techniques and provides several case studies where total water use and plant behaviour have been measured in Australia, North and South America, and Europe. There is increasing recognition in the forestry industry that water is a vital resource. Not only is water vital for tree growth, but increasingly catchment management authorities are required to account for every drop of water in the landscape. Trees transpire a large volume of water; however, in a multi-species forest certain species transpire more than others. Even in a monoculture plantation tree water use is not uniform and must be accurately measured. Over the last 20 years there has been increasing evidence showing that water movement in trees is complex. It is now recognised that water does not just move from soil to roots, stem, leaves and then atmosphere. Water can move upwards, downwards and laterally depending on where there is greatest demand, leaving the impression that plants exhibit “behaviour”. This article outlines the mechanism of water movement in plants and why simple principles of physics lead to plant “behaviour”. Known as hydraulic redistribution, it is an important aspect of tree water use that must be accounted for in total tree water use. Various case studies of hydraulic redistribution are discussed that highlight the importance of this phenomenon. A number of sap flow methods are available to measure total tree water use; however, this article emphasises that only two of the many methods available can account for hydraulic redistribution. The passage of water through soil, plants and into the atmosphere is based purely on principles of physics.
Plants can have some influence over the passage of water, for example by closing stomata during periods of moisture deficit to lower the transpiration rate; however, they are largely passive systems influenced by environmental conditions. A gradient in water potential is the physical mechanism that drives water movement in plants. Water potential is the amount of work, or energy, available when compared with a reference condition. The simplest analogy is a ball that is sitting at the crest of a hill. At this point the ball has a large potential to do work – i.e. roll down the hill; if the ball were placed at the bottom of the hill it has low potential to do work – i.e. there is little energy available for the ball to move up the hill. Similarly, water moves from areas of high potential to low potential; water always moves downhill and never uphill. Pure water has the largest potential to do work and is equivalent to a ball at the crest of a hill. At this point a value of zero is given. The SI unit for water potential is the pascal (Pa), therefore water at its reference point has a value of 0Pa (many other units are also used, interchangeably, such as bar, joules, relative humidity, and also kilopascal and megapascal – here we will use the megapascal, MPa). The gradient of water potential becomes more negative from the reference point of 0MPa. At a value of -0.001MPa there is a small gradient in water potential, the equivalent of rolling the ball down a slight decline. At a value of -100MPa there is a large gradient in water potential, the equivalent of rolling the ball off a sharp cliff. Values found in soil that are important to plants range from -0.033MPa (field capacity) to -1.5MPa (wilting point). Leaf water potential can be as low as -3.0MPa for a drought stressed plant. Air dry, or atmospheric, values are in the order of -100MPa (or 47.76% relative humidity).
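The pairing of -100MPa with 47.76% relative humidity quoted above follows from the standard thermodynamic relationship between air water potential and relative humidity. A minimal sketch, assuming a temperature of 20 °C (the article does not state one):

```python
import math

# Water potential of air from fractional relative humidity:
#   psi = (R * T / V_w) * ln(RH)
R = 8.314        # J mol^-1 K^-1, universal gas constant
T = 293.15       # K (20 degrees C, an assumed temperature)
V_W = 1.805e-5   # m^3 mol^-1, partial molar volume of liquid water

def air_water_potential_mpa(relative_humidity: float) -> float:
    """Water potential of air in MPa for a fractional relative humidity."""
    return (R * T / V_W) * math.log(relative_humidity) / 1e6  # Pa -> MPa

print(air_water_potential_mpa(0.4776))  # close to the -100 MPa quoted above
print(air_water_potential_mpa(1.0))     # saturated air sits at the 0 MPa reference
```

Note how steep the curve is: even air at 99% relative humidity sits near -1.4MPa, drier than wilting-point soil, which is what maintains the strong soil-to-atmosphere gradient described in the text.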
There is a strong water potential gradient through the soil-plant-atmosphere continuum from -1.5MPa to -3.0MPa to -100MPa (for very dry conditions). This is the physical principle that allows water movement through plants and ultimately for transpiration to take place. Over the last 20 years it has been increasingly recognised that not only does the water potential gradient allow moisture to move from soil to the atmosphere, but it allows water to move from any part of the plant to any other part of the plant. There is now a large amount of evidence documenting water movement down stems, across stems, from stems to roots, and across various parts of the root profile. The term is hydraulic redistribution and it is an extremely important mechanism that allows plants to cope with moisture deficits. Stem-mediated Hydraulic Redistribution Douglas-fir (Pseudotsuga menziesii) is a coniferous tree in the family Pinaceae that can tolerate low rainfall environments. Nadezhdina et al. (2009) were interested in how a tree growing in a low rainfall zone (560mm rainfall per annum) coped with moisture deficit. A representative tree from a mixed forest experimental stand (0.5ha) was selected for experimentation. The tree was 53 years old and 35.6cm in diameter at breast height. Sap flow instrumentation (heat field deformation, HFD, method) consisted of four sensors installed in the tree. Two sensors were installed in the stem (north face and south face) and two sensors installed in large roots at least 16cm from the trunk (northern root and southern root). Monitoring of sap flow began during dry conditions and then the researchers irrigated the southern side of the tree only. In this exploratory experiment, the researchers closely monitored sap flow at various locations around the tree as well as at various depths of sapwood within each location. Nadezhdina et al. found a strong pattern of hydraulic redistribution in this Douglas-fir. 
Water was transported from irrigated soil, through the root system on the southern side of the tree, towards the south facing stem. Water was then transported across the trunk to the north facing stem, then in reverse flow down to the root system on the northern side of the tree. The researchers did not have any instrumentation to account for water potential gradients in this process and it would have been interesting to see results from a stem psychrometer or soil water potential sensor. Nevertheless, they concluded that due to the water potential gradient, water moved from an irrigated portion of the tree to the non-irrigated portion. Foliage-mediated Hydraulic Redistribution The most common condition for plants is leaf water potential to be lower (more negative) than stem water potential – a condition necessary for the passage of moisture from roots to the atmosphere. However, there are certain environmental conditions that allow leaf water potential to be higher than stem water potential, allowing for the movement of moisture from leaves to the stem. Burgess and Dawson (2004) hypothesised that moisture may even travel from the atmosphere to the leaves, stem and roots; a remarkable condition of reverse sap flow where the soil-plant-atmosphere continuum challenges conventional theory. Sap flow instrumentation in this study was based on the heat ratio method (HRM) as it has the ability to detect low and reverse flow rates (Burgess et al. 2000). The researchers studied a number of trees; however, the installation for a single tree consisted of three sets of probes at breast height, two at approximately 50m (trees were 60 to 70m in height), and six sets of probes placed in three separate branches high in the tree (a probe on the lower side of the branch and another on the upper side). Burgess and Dawson found a number of interesting sap flow patterns.
Nocturnal sap flow was observed during a night of low relative humidity (between 20 and 40%) supporting the notion that redwood has porous stomata. On Day 1 a typical diurnal course of sap flow was observed with maximum transpiration rates around midday. On Day 2, however, there was heavy fog and transpiration ceased altogether. Instead, reverse sap flow was observed at branch, 50m and breast height sensors. The rate of reverse flow was as high as 7% of the previous day’s transpiration. The mechanism by which moisture enters the leaf is via the hyphae or hairs that extend from stomata that act as wicks to draw the water back in. The authors concluded that they observed a clear pattern of reverse sap flow and water uptake by the leaves. Water moved from the leaves, into branches, stem and, possibly (it was not measured), roots and soil. This observation is considered to be an adaptation to moisture deficits experienced by the coast redwood. Additionally, it is an important component of the water balance of this forest that needs to be taken into account in hydrological models. Root-mediated Hydraulic Redistribution The redistribution of moisture by roots is the most widely documented pattern of hydraulic redistribution. Initially, the pattern was discovered by using soil moisture sensors (Richards and Caldwell 1987; Caldwell and Richards 1989). Termed hydraulic lift, roots deep in the soil profile with access to moisture transport water to roots in the shallow soil profile where the soil is drier. The physics behind the process is a water potential gradient from moist to dry soil. Not only does hydraulic lift contribute to the water balance of the entire root system, but roots in shallow soil are able to maintain access to higher nutrient levels. Root hydraulic lift is only observed at night as the daytime water potential gradient is far stronger than any gradient that can be produced in the soil. 
As hydraulic lift became more widely documented, more interesting research questions were posed. For example, in certain parts of the Amazon Basin there is a strong wet-dry season. However, trees of the luxuriant rainforest rarely show signs of drought stress towards the end of the dry season. Oliveira et al. (2005) suspected hydraulic lift enabled trees to maintain a favourable water status during the dry season. Further, Oliveira et al. hypothesised that at the onset of the wet season soil moisture would be redistributed from wetter shallow soil to drier deep soil via the root system. This pattern of hydraulic redistribution would be particularly apparent as this study was conducted on heavy clay soil where the infiltration of rainfall would be slow. Sap flow instrumentation was again based on the heat ratio method (HRM) for its ability to measure reverse flow (Burgess et al. 2000). The experiment was carried out at the Floresta Nacional do Tapajós, Brazil, where control and treatment trees were established. The treatment plot consisted of a “rainout” where a shelter was erected above the soil surface to exclude rainfall. Soil was excavated down to 1m and sap flow sensors installed on all tap roots and up to four lateral roots. During the dry season, there was reverse nocturnal sap flow in the roots indicating moisture was moving from roots into the soil – a pattern consistent with hydraulic lift. With the onset of wet season rainfall there was positive nocturnal sap flow in the lateral roots yet there was still reverse flow in the tap root indicating moisture movement from wet top soil, into lateral roots, towards the tap root and down into the deeper soil profile. This pattern continued for approximately seven days into the wet season. In the treatment plot hydraulic lift was observed at all times. With changes in climate and rainfall patterns it is important for forestry to understand how plants respond to moisture gradients.
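The reverse-flow measurements in the studies above fall naturally out of the HRM's symmetric probe geometry: heat pulse velocity is computed from the ratio of the temperature rises downstream and upstream of the heater, so its sign flips when flow reverses. A sketch of the published heat-ratio formula (Burgess et al. 2001), with nominal diffusivity and probe spacing as assumed values, not figures from this article:

```python
import math

# Heat ratio method: Vh = (k / x) * ln(v1 / v2) * 3600  [cm per hour]
K = 2.5e-3   # cm^2 s^-1, assumed thermal diffusivity of fresh sapwood
X = 0.6      # cm, assumed spacing between heater and each temperature probe

def heat_pulse_velocity(v1: float, v2: float) -> float:
    """Heat pulse velocity (cm/h) from the downstream (v1) and upstream (v2)
    temperature rises recorded after the heat pulse."""
    return (K / X) * math.log(v1 / v2) * 3600

print(heat_pulse_velocity(1.2, 1.0) > 0)  # normal upward flow is positive
print(heat_pulse_velocity(0.9, 1.0) < 0)  # reverse flow registers as negative
print(heat_pulse_velocity(1.0, 1.0))      # equal rises: zero flow, no dead band
```

Because ln(v1/v2) passes smoothly through zero, the method has no blind spot at very low flow rates, which is exactly the property the hydraulic-lift studies relied on.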
Results from studies on hydraulic redistribution clearly demonstrate that any water balance model, particularly catchment models, must account for reverse sap flow. Knowledge of the total amount of water use by trees is still critically important. Phytoremediation and Groundwater Monitoring at Kamarooka Salinity is a major economic and environmental problem, particularly in Australia. In north central Victoria, north of Bendigo, salinity has had a large impact on the landscape. In 2003 the Northern United Forestry Group (NUFG) decided to undertake a project to reclaim barren land, restoring functional ecosystem processes and increasing agricultural production in the process (Figure 1). There was considerable interest in establishing salt tolerant trees in order to provide income from forestry projects and provide habitat for fauna. Sugar gum (Eucalyptus cladocalyx), flat-top yate (E. occidentalis), willow wattle (Acacia salicina) and Eumong (A. stenophylla) were planted at the saline site, Kamarooka. Fourteen monitoring bore holes were established under the trees and in non-forested areas to monitor groundwater. Throughout 2007 a 3m groundwater depression formed under the forested area. To ascertain whether the trees were the cause of the lowering of the groundwater it was essential to establish total tree water use. Sugar gum and flat-top yate were instrumented with sap flow sensors using the heat ratio method (HRM). Sensors were installed in mid-January 2008 and monitoring commenced. Between April and September 2008 average tree water use was 5 litres per day. On a plantation scale tree water use was approximately 25,000 litres per day per hectare. At the start of this period the groundwater was at a depth of approximately 4m and by March 2010 it had dropped to approximately 6m – in spite of a few heavy rainfall events.
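The Kamarooka figures can be cross-checked with two simple unit conversions; the equivalent-rainfall depth below is arithmetic from the article's own numbers, not a value it reports:

```python
PER_TREE_L_DAY = 5        # litres per tree per day (Apr-Sep 2008 average)
STAND_L_DAY_HA = 25_000   # litres per day per hectare, plantation scale

# Planting density implied by the two reported figures
stems_per_ha = STAND_L_DAY_HA / PER_TREE_L_DAY
print(stems_per_ha)  # 5,000 stems/ha implied

# Equivalent depth of water removed: 1 mm over 1 ha equals 10,000 litres
mm_per_day = STAND_L_DAY_HA / 10_000
print(mm_per_day)    # 2.5 mm/day of "rainfall" transpired by the stand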
Monitoring sap flow and total tree water use at Kamarooka has clearly demonstrated that trees can be used as a phytoremediation tool on saline land. Forestry can be established to provide income and environmental benefits. For more information please visit the NUFG website: http://nufg.org.au/index.html Establishing a Forest in a Desert: Antamina Mine, Peru One of the world’s largest copper and zinc mines is Antamina, located in the Andes of Peru. It is a multi-billion dollar investment located 4,300m above sea-level. Copper and zinc are gravitationally transported in slurry via a 300km long pipe to a seaport. At the seaport the copper and zinc are separated from the slurry. The mine operators then had a major problem to overcome. The leftover water could be dumped into the ocean at considerable environmental cost, a filtration plant could make the water fit for agriculture or human consumption at considerable economic cost, or a plantation could be established and irrigated with the leftover water. The trees would act as “biopumps”, transpiring the leftover water into the atmosphere and preventing through-drainage of contaminated water into the underlying aquifers that provide much of the potable water for the surrounding population. This last option was additionally attractive as the area around the seaport is desert and it would be an afforestation project (rather than displacing existing forest or agricultural land). A 174 acre forest was established with 190,000 individual trees consisting of eight species (Figure 2) and mine production continued. It quickly became obvious that tree transpiration was not uniform throughout the year and changed with season. In order to avoid flooding the forest during periods of low transpiration (and thereby risking through-drainage and pollution of aquifers) it was necessary to know how much water the trees transpired.
Moreover, the mine operators wanted to increase production, which entailed increasing both irrigation and forest area to keep effluent input in equilibrium with transpired output. The most efficient way to remove the extra leftover water was to plant the species that transpired the most. Sap flow sensors employing the heat ratio method (HRM) were installed on nine trees of three species: Acacia spp., Tamarix spp. and Algorrobo spp. The results over an 18-month period showed that Acacia, Tamarix and Algorrobo transpired approximately 35,000, 16,500 and 6,000 litres of water respectively. Quantifying total tree water use allowed more precise management decisions in terms of adherence to environmental regulations, and provided the data needed to expand the mine's capacity accurately while continuing to meet current, and potentially future, environmental regulations.
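To compare the three species as "biopumps", the reported 18-month totals can be converted to approximate mean daily rates. A small sketch of that conversion (the 548-day period length is an approximation of 18 months, not a figure from the study):

```python
# Convert reported 18-month transpiration totals (litres) into
# approximate mean daily rates for species comparison.

totals_l = {"Acacia": 35_000, "Tamarix": 16_500, "Algorrobo": 6_000}
period_days = 548  # ~18 months (approximation)

daily = {sp: round(t / period_days, 1) for sp, t in totals_l.items()}
for sp, rate in sorted(daily.items(), key=lambda kv: -kv[1]):
    print(f"{sp}: ~{rate} L/day")
```

On these rough numbers Acacia transpires roughly twice as much per day as Tamarix and about six times as much as Algorrobo, which is why it was the most efficient choice for removing the extra leftover water.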
Regulatory requirements for lead in public water systems

This article comes from our Winter 2016 Water & Wastewater Newsletter. The recent drinking water crisis in Flint, Michigan, and the hardship facing the Village of Sebring in Mahoning County, Ohio, serve as reminders of the challenges that public water systems face when attempting to implement complex regulations to protect public health and service the needs of the communities they serve, while balancing cost considerations and aging infrastructure. This article highlights the manner in which the Safe Drinking Water Act regulates contaminants, particularly lead, and briefly summarizes Ohio's regulation of lead in drinking water. Lead is undeniably a contaminant of great concern that presents a challenge for many operators of public water systems. According to the U.S. Environmental Protection Agency (EPA), many factors influence the manner and extent to which lead contaminates drinking water. These factors include the chemistry of the source water, the amount of lead the water comes into contact with, the temperature of the water, the amount of wear in the pipes transporting the water, the amount of time the water spends in the pipes, and the presence of protective scales or coatings inside the plumbing materials. In most cases, treatment options are available to prevent the corrosion of lead into drinking water and avoid replacement of all service lines containing lead.

Regulation of contaminants under the Safe Drinking Water Act

The Safe Drinking Water Act (SDWA) authorizes the EPA to set national health-based standards for drinking water. The agency has set standards for more than 90 contaminants, including lead. The SDWA standards apply to all public water systems (PWS).
A PWS is any system that supplies water for human consumption through constructed conveyances (such as a pipe, ditch or hose) to at least 15 service connections or regularly serves at least 25 individuals. For each contaminant, the EPA first sets a maximum contaminant level goal (MCLG). These are non-enforceable public health goals that represent the acceptable level of a contaminant. At or below this goal level, there would be no known or expected risk to human health. The EPA then sets an enforceable standard for each contaminant through either a numeric maximum contaminant level (MCL) or a treatment technique. An MCL is the highest level of a contaminant that is permitted in drinking water. A treatment technique is developed when it is not economically or technologically feasible to set a numeric standard. A treatment technique requires adherence to a process rather than a number. The EPA’s MCLG for lead is zero, as there is no safe exposure level for humans to lead. Rather than setting a numeric MCL for lead, the EPA set a treatment technique that requires PWSs to control the corrosiveness of the water. To implement the treatment technique, the EPA set an action level for lead of 0.015 mg/L. The action level is the concentration of lead in water that determines the treatment requirements that a PWS must complete. Ohio’s lead rules The EPA has delegated primary implementation authority of the SDWA in Ohio to the Ohio EPA. Ohio’s lead rules apply to all PWSs that serve at least 25 of the same residents either year-round (classified as “community water systems”) or for at least six months of the year (classified as “non-transient non-community water systems”). These types of PWSs include cities, mobile home parks, nursing homes, schools, hospitals or factories. In Ohio, a PWS exceeds the lead action level if the concentration of lead in more than 10 percent of tap water samples collected during any monitoring period exceeds 0.015 mg/L. 
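The exceedance rule described above is mechanical enough to sketch in code. A minimal illustration follows; the sample concentrations are invented for the example and are not real monitoring data.

```python
# Lead action-level test: a PWS exceeds the action level when more than
# 10% of tap samples in a monitoring period are above 0.015 mg/L.

ACTION_LEVEL_MG_L = 0.015

def exceeds_action_level(samples_mg_l):
    over = sum(1 for c in samples_mg_l if c > ACTION_LEVEL_MG_L)
    return over > 0.10 * len(samples_mg_l)

# Illustrative results: 3 of 10 samples (30%) above the action level.
samples = [0.002, 0.004, 0.016, 0.020, 0.003,
           0.001, 0.005, 0.002, 0.018, 0.004]
print(exceeds_action_level(samples))  # True
```

Note that exactly 10 percent of samples above 0.015 mg/L does not trigger an exceedance; the rule requires more than 10 percent.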
Monitoring and public notice requirements The first step in complying with Ohio’s lead rules is to sample for lead. Then, the PWS must use the sampling results to calculate whether the samples are below the lead action level. Every PWS in Ohio has a monitoring schedule, which includes the contaminants a PWS must sample for and the monitoring period during which to sample. Factors such as population size of the PWS and whether previous lead samples tested above the lead action level determine the required frequency of lead monitoring and the number of required sample sites. PWSs may be eligible for reduced monitoring schedules for lead after demonstrating multiple satisfactory sampling results across monitoring periods. Where to sample Lead samples must be taken from single family residences or buildings that contain lead service lines, lead pipes or copper pipes with lead solder installed after 1982 and that do not utilize water softeners. Samples must be collected after the water has stood motionless in the line for at least six hours. From residences, the samples must be collected from the cold water kitchen tap or the cold water bathroom sink tap. PWSs may allow residents to collect the tap samples upon providing proper instruction. From nonresidential buildings, samples must be collected from interior taps typically used for water consumption. Samples collected from outside spigots or mop sinks are not acceptable. What to do with the results PWSs must provide the Ohio EPA with detailed sampling information and results within ten days of the end of the monitoring period. Regardless of whether a PWS exceeds the lead action level, it must also provide a Lead Consumer Notice to all consumers from whose taps the samples were taken within 30 days of receiving the results. 
The Lead Consumer Notice must contain specific information, including the monitoring results, the MCL goal and lead action level, the effects of lead on human health, and the steps to take to reduce exposure to lead in drinking water. PWSs must issue the notice by mail or hand delivery. If the PWS is a school or daycare facility, parents or guardians must be notified by newsletter or email. The Ohio EPA also requires verification of the Lead Consumer Notice to be submitted to the Ohio EPA. What to do if you exceed the lead action level If a PWS exceeds the lead action level in the tap water samples, in addition to issuing a Lead Consumer Notice, it must sample the tap water of any customer who requests it. PWSs that exceed the lead action level must also provide written public education materials to consumers within 60 days of the end of that monitoring period. Public education materials must contain specific information including the health effects of lead, the sources of lead, the steps consumers can take to reduce exposure, and what the PWS is doing to reduce lead levels in homes and buildings in the area. If the PWS is a school, day care, nursing home or correctional institution, the parent, legal guardian or power of attorney is required to be directly notified. Verification of the public education materials must also be submitted to the Ohio EPA. PWSs with tap water samples that exceed the lead action level are also required to monitor the water for lead at the entry points to the distribution system. Large PWSs (and small and medium PWSs whose tap water samples exceed the lead action level) are required to monitor for other water quality parameters, in addition to lead, both at taps and at the entry points to the distribution system. 
Corrosion control treatment The treatment technique for lead requires PWSs to install and operate “optimal corrosion control treatment.” The goal of corrosion control treatment is to minimize lead concentrations in the water coming out of consumers’ taps, while still ensuring that the water is in compliance with all other primary drinking water regulations. Various corrosion control treatment steps may be employed, depending on the characteristics of the water. Not all PWSs must take additional steps, beyond monitoring, to achieve optimal corrosion control treatment. For example, small and medium PWSs are deemed to already have optimized corrosion control if the systems’ samples comply with the lead action level during two consecutive six-month monitoring periods. Generally, these PWSs then only need to monitor for lead once every three calendar years. However, these PWSs must notify Ohio EPA before making any change or modification in treatment or before changing the water source. PWSs that fail to meet the lead action level must conduct corrosion control studies and come up with a plan to provide for corrosion control. This plan must balance the effect of chemicals used for corrosion control treatment on other water quality treatment processes. Corrosion control treatments that PWSs may employ include the use of a phosphate inhibitor; the use of a silicate inhibitor; a pH and alkalinity adjustment; or a calcium hardness adjustment. The Director of Ohio EPA must approve the corrosion control plan, and follow-up monitoring is required. Source water treatment If a PWS has implemented optimal corrosion control treatment and still exceeds the lead action level, the Ohio EPA may require a PWS to implement source water treatment at the entry point of the water to the distribution system. Similar to the process required for corrosion control, a PWS must develop a treatment recommendation. 
Examples of source water treatment for lead include ion exchange, reverse osmosis, lime softening or coagulation/filtration. Following approval by the Ohio EPA and implementation of source water treatment, PWSs must comply with the maximum permissible lead concentration for the finished water entering the distribution system, as determined by the Ohio EPA. Service line replacement While configurations vary from system to system, water mains typically transport drinking water through the distribution system and connect to a smaller pipe. The smaller pipe transports water to the water meter at each residence or business and then through household pipes to the tap. Often, the PWS owns the distribution mains and water meter, and the property owner owns the household pipes leading from the water meter to the tap. Generally, only if optimal corrosion control treatment and/or source water treatment prove unsuccessful is a PWS required to replace service lines that contain lead. Ohio’s rules generally require a PWS in this instance to replace annually at least seven percent of the initial number of lead service lines in the distribution system. However, a PWS is only required to pay for replacement of the portions of the lead service lines that it owns. A PWS is not required to bear the cost of replacing privately-owned lead service lines. Regulatory changes are anticipated in light of the concerns with lead regulation highlighted by the lead contamination in Flint, Michigan. Recently, multiple bills have been introduced in the U.S. Congress to increase funding for grants and loans for lead reduction projects, increase funding for health programs to address lead exposure, and increase requirements for the EPA to notify the public when it identifies unsafe lead levels in a community’s drinking water. 
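The seven-percent annual floor implies a minimum replacement timeline. A small sketch of that arithmetic (the 10,000-line inventory is a hypothetical example, not a figure from the article):

```python
# At the minimum pace of 7% of the initial lead service line count per
# year, the whole inventory is replaced in ceil(1/0.07) = 15 years.

import math

def years_to_replace(initial_lines, annual_fraction=0.07):
    per_year = annual_fraction * initial_lines  # fixed share of the initial count
    return math.ceil(initial_lines / per_year)

print(years_to_replace(10_000))  # 15
```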
In Ohio, the Ohio EPA has indicated that it intends to strengthen its rules in order to drastically reduce the amount of time PWSs have when issuing Lead Consumer Notices and public education materials to consumers and when providing public notice verifications to the Ohio EPA. Each PWS (and its source water) is unique and requires independent evaluation. PWSs experiencing issues with lead should consult with a regulatory professional who can work with the PWS and the proper regulatory authority, such as Ohio EPA, to resolve the issue. With careful attention to monitoring and treatment options and prompt notification to consumers of monitoring results and treatment steps taken, PWSs should succeed in serving healthy drinking water and maintain the public trust while doing so. Ohio's lead rules can be found in Ohio Administrative Code Chapter 3745-81. For more information, see Ohio Adm. Code 3745-81-85 or Ohio EPA's Checklist for Completing Lead and Copper Sample Monitoring Requirements. This is for informational purposes only. It is not intended to be legal advice and does not create or imply an attorney-client relationship.
By GOPĀGRAHĀRAS are meant the fertile lands of the present Gupakār, between the north foot of the Takht hill and the Dal. The name Gupakār may, in fact, be the direct phonetic derivative of the term used by Kalhaṇa. Our surmise is supported by the reference which Kalhaṇa, in the verse immediately following, makes to the village BHŪKṢĪRAVĀṬIKĀ. This place is identified by the old glossator A with Buchivor, a small hamlet situated on the narrow strip of land at the rocky north-west foot of the Takht hill. The modern name is clearly derived from Kalhaṇa's form. Gopāditya is said to have removed to this confined and secluded spot Brahmans who had given offence by eating garlic. The combined mention of Gopādri, Gopāgrahāra and Bhūkṣīravāṭikā in Rājat. i. 341 sq. suggests that Kalhaṇa has reproduced here local traditions collected from the sites immediately adjoining the hill. Whether the connection of these localities with King Gopāditya's reign was based on historical fact, or only on an old popular etymology working upon the word Gopa found in the first two names, can no longer be decided. Continuing our route along the eastern shore of the Dal we come, at a distance of about one mile from Gupakār, to the large village of Thid, prettily situated amid vineyards and orchards. It is the THEDĀ of the Rājataraṅgiṇī, mentioned as one of the places which the pious King Saṁdhimat or Āryarāja adorned with Maṭhas, divine images, and Liṅgas. Abū-l-Faẓl speaks of Thid as "a delightful spot where seven springs unite; around them are stone buildings, memorials of by-gone times." The remains here alluded to can no longer be traced, but the seven springs (Saptapuṣkariṇī), which are also referred to in the Haracaritacintāmaṇi (iv. 40 sqq.), are still pointed out. The cluster of villages which we reach about one and a half miles beyond Thid, and which jointly bear the name Bron, can be safely identified with BHĪMĀDEVĪ, which Kalhaṇa notices along with Thedā.
The Nīlamata knows the sacred site of Bhīmādevī in conjunction with the Sureśvarī Tīrtha which we shall next visit, and in the Haracaritacintāmaṇi it is named with the seven springs of Thedā. The Tīrtha of Bhīmādevī is no longer known, but may be located with some probability at the fine spring near Dāmpār, marked now by a Muhammadan shrine.
Notes to the preceding: (1) Gupakār may go back to a form *Gupagār, with assimilation of g to the preceding tenuis. In Kś. the hardening of g to k is by no means unknown; see Dr. Grierson's remarks, Z.D.M.G., I., p. 3. *Gupagār could easily be traced back to Gopāgrahāra through Pr. forms like *Gupagrār. (2) See Rājat. ii. 135 note. (3) Āīn-i-Akb., ii. p. 361.
Tīrtha of Sureśvarī.
103. A sacred site of far greater fame and importance is that of the present village of Īsabar, which lies about two miles further north on the Dal shore and a little beyond the Mughal garden of Nishāt. The site was known in ancient times as Sureśvarīkṣetra ('the field of Sureśvarī'). It was sacred to Durgā-Sureśvarī, who is still worshipped on a high crag rising from the mountain range to the east of Īsabar village. The seat of the goddess is on a rugged rock some 3000 feet above the village, offering no possible room for any building. The numerous shrines erected in her honour were hence built on the gently sloping shore of the lake below. The Tīrtha of Sureśvarī is often referred to in Kalhaṇa's Chronicle and other Kaśmīrian texts as a spot of exceptional holiness. It was particularly sought by the pious as a place to die at. The pilgrimage to Sureśvarī is connected with visits to several sacred springs in and about Īsabar. One of them, Śatadhārā, is already mentioned by Kṣemendra. It is passed in a narrow gorge some 1500 feet below the rock of Sureśvarī. Īsabar derives its present name from the shrine of ĪŚEŚVARA which King Saṁdhimat-Āryarāja, according to the Rājataraṅgiṇī, erected in honour of his Guru Īśāna.
An earlier form, Isabror, which is found in an old gloss of the Chronicle and evidently was heard also by Abū-l-Faẓl, helps to connect Īsabar and Īśeśvara. Īsabar is still much frequented as a pilgrimage place. The chief attraction is a sacred spring known as Guptagaṅgā which fills an ancient stone-lined tank in the centre of the village. This conveniently accessible Tīrtha is the scene of a very popular pilgrimage on the Vaiśākhī day and has fairly obscured the importance of the mountain seat of Sureśvarī. A ruined mound immediately behind the tank is popularly believed to mark the site of the Īśeśvara shrine. Numerous remains of ancient buildings are found around the sacred springs and elsewhere in the village. They probably belong to the various other temples the erection of which is mentioned by Kalhaṇa at the site of Sureśvarī.
Ṣaḍarhadvana; Hārvan; Tripureśvara.
Passing round the foot of the ridge on which Sureśvarī is worshipped, we come to the small village of Hārvan, which the old glossator of the Rājataraṅgiṇī identifies with ṢAḌARHADVANA ('the wood of the six Arhats'). This place is mentioned by Kalhaṇa as the residence of the great Buddhist teacher Nāgārjuna. The name Hārvan may well be derived from Ṣaḍarhadvana, but in the absence of other evidence the identification cannot be considered as certain. On the hill-side south of the village I observed already in 1888 fragments of ornamented bricks. Since then remarkable remains of ancient brick-pavements have come to light on occasion of excavations made for the new Srinagar waterworks.
Notes: (1) Compare for Sureśvarī and the site of Īsabar, note v. 37. (2) -bạr is a modern contraction for -bror, from Skr. bhaṭṭāraka, which in Kaśmīr local names has often taken the place of its synonym -īśvara; comp., e.g., Skr. Vijayeśvara > Kś. Vijobror. (3) See Rājat. v. 37, 40 sq.; viii. 3365.
Proceeding further up the valley of the stream which comes from the Mār Sar lake, we reach, at a distance of about three miles from the Dal, the village of Triphar. Evidence I have discussed elsewhere makes it quite certain that it is the ancient TRIPUREŚVARA (Tripureśa). The latter is repeatedly mentioned as a site of great sanctity by Kalhaṇa as well as in the Nīlamata and some Māhātmyas. But it has long ago ceased to be a separate pilgrimage place. A little stream known as the Tripuragaṅgā near Triphar is, however, still visited as one of the stations on the Mahādeva pilgrimage. Kṣemendra in the colophon of his Daśāvatāracarita refers to the hill above Tripureśa as the place where he was wont to find repose and where he composed his work. In Zain-ul-'ābidīn's time Tripureśvara seems yet to have been a Tīrtha much frequented by mendicants. Tripureśvara too possessed its shrine of Jyeṣṭheśvara, and to this King Avantivarman retired on the approach of death. A legend related by the Sarvāvatāra connected the site of Tripureśvara with the defeat of the demon Tripura by Śiva and with the latter's worship on the neighbouring peak of Mahādeva. I have not been able to examine the site and am hence unable to state whether there are any ancient ruins near it. The whole mountain-ridge which stretches to the south of Triphar and along the Dal bore in ancient times the name of ŚRĪDVĀRA. On the opposite side of the valley rises the great peak of MAHĀDEVA to a height of over 13,000 feet. Numerous references to it in the Nīlamata, Sarvāvatāra, and other texts show that it was in old times, just as now, frequented as a Tīrtha. We may now again descend the valley towards the north shore of the Dal. On our way we pass, close to Hārvan, the village of Tsatea, where the convenience of modern worshippers has located a substitute for the ancient Tīrtha of the goddess Śāradā (see below § 127).
Note: (1) See Rājat. i. 173 note.
Leaving aside the famous garden of Shālimār, of which our old texts know nothing, we come to a marshy extension of the Dal known as Tēlabal. The stream which flows through it, and which forms a branch of the river coming from the Mār Sar, bore the old name of Tilaprastha.
Hiraṇyapura.
104. The road which takes us from Tēlabal to the mouth of the Sind Valley is the same which was followed by the pretender Bhikṣācara and his rebel allies on a march to Sureśvarī described in the Rājataraṅgiṇī. The narrow embankment on which they fought and defeated the royal troops leads across the Tēlabal marshes. At the south foot of the ridge which runs down to the opening of the Sind Valley, we find the village of Rạnyil, the ancient HIRAṆYAPURA. The place is said by Kalhaṇa to have been founded by King Hiraṇyākṣa. As it lies on the high-road from the Sind Valley to Srinagar, it is repeatedly mentioned also in connection with military operations directed from that side against the capital. The victorious Uccala, when marching upon Srinagar, had the Abhiṣeka ceremony performed en route by the Brahmans of Hiraṇyapura. It seems to have been a place of importance, since it figures in a fairy-tale related in the Kathāsaritsāgara as the capital of Kaśmīr. A spring a little to the south of the village is visited by the pilgrims to the Haramukuṭagaṅgā and bears in Māhātmyas the name of Hiraṇyākṣanāga.
Juṣkapura; Amareśvara.
From near Rạnyil several old water-courses radiate which carry the water of the Sind River to the villages lying between the Anchiār and the Dal lakes. One of these canals passes the village of Zukur. A tradition recorded already by General Cunningham identifies this place with the ancient JUṢKAPURA. Kalhaṇa names the place as a foundation of the Turuṣka (i.e. Kuṣana) King Juṣka, who also built a Vihāra there. The Muhammadan shrines and tombs of the village contain considerable remains of ancient buildings.
Notes to the preceding: (1) The first reference to this somewhat over-praised locality which I can find is in Abū-l-Faẓl, who mentions the waterfall or rather the cascades of 'Shālahmār'; see ii. p. 361. The Vitastā-, Īśālaya-, and Mahādeva-Māhātmyas, which are of very modern origin, show this fact also by their references to 'Salamūra' and the whimsical etymologies which they give for the name (Māraśālā, etc.). We might reasonably expect that Jonarāja and Śrīvara in their detailed accounts of the Dal would have mentioned the place if it had then claimed any importance. (2) See Rājat. v. 46 note; Śrīv. i. 421.
To the west of Juṣkapura and on the shore of the Anchiār lies the large village of Amburhör. It is the ancient AMAREŚVARA, often mentioned in the Rājataraṅgiṇī in connection with military operations to the north of Srinagar. This is easily accounted for by the fact that the place lay then, as now, on the high road connecting the Sind Valley with the capital. It took its name from a temple of Śiva Amareśvara which Sūryamatī, Ananta's queen, endowed with Agrahāras and a Maṭha. The ancient slabs and sculptured fragments which I found in 1895 in and around the Ziārat of Farrukhzād Ṣāhib may possibly have belonged to this temple. Continuing on the road towards Srinagar for about two miles further, we come to the large village of Vicār Nāg, prettily situated in extensive walnut groves. A fine Nāga near the village forms the object of a popular Yātrā in the month of Caitra. It is supposed to be an epiphany of the Ailāpattra Nāga, who is mentioned also in the Nīlamata. An earlier designation seems to be MUKTĀMŪLAKANĀGA, which is given to the locality by Śrīvara and in the Tīrthasaṁgraha. To the west of the village and near an inlet of the Anchiār are the ruins of three ancient temples, now converted into Ziārats and tombs.
Amṛtabhavana.
Only a quarter of a mile to the east of Vicār Nāg, and on the other side of the old canal called Lacham Kul (*Lakṣmīkulyā), stands the hamlet of Antabavan.
In my "Notes on Ou-k'ong's Account of Kaśmīr" I have proved that Antabavan derives its name from the ancient Vihāra of AMṚTABHAVANA which Amṛtaprabhā, a queen of Meghavāhana, is said to have erected. Ou-k'ong mentions the Vihāra by the name of Nyo-mi-t'o-po-wan, which represents a transcribed Prakrit form *Amṛtabhavana or Amitabhavana. An ancient mound with traces of a square enclosure around it, which is found between the canal and the hamlet, may possibly belong to the remains of this Vihāra.
Tīrtha of Sodara.
Proceeding to the east of Antabavan for about a mile, we come to the large village of Sudarabal, situated on a deep inlet of the Dal known as Sudarakhun. The name of the village and of the neighbouring portion of the lake make it very probable that we have to place here the sacred spring of SODARA. It formed the subject of an ancient legend related by
Notes: (1) See Rājat. vii. 183 note. (2) See Śrīv. iv. 65. On his authority the name Muktāmūlakanāga ought to have been shown on the map. (3) Compare for a view of these remains, COLE, Ancient Buildings, p. 31. (4) See Rājat. iii. 9 note, and Notes on Ou-k'ong, pp. 9 sqq. (5) See Rājat. i. 125-126 note. Kś. -bal in Sudarabal means merely 'place.'
President Barack Obama discusses the long-term potential of renewable energies and sees the global energy transition as irreversible. Now more than ever, the world needs to embrace the opportunity of clean energy and cooperate on its climate goals. The release of carbon dioxide (CO2) and other greenhouse gases (GHGs) due to human activity is increasing global average surface air temperatures, disrupting weather patterns, and acidifying the ocean (1). Left unchecked, the continued growth of GHG emissions could cause global average temperatures to increase by another 4°C or more by 2100 and by 1.5 to 2 times as much in many midcontinent and far northern locations (1). Although our understanding of the impacts of climate change is increasingly and disturbingly clear, there is still debate about the proper course for U.S. policy—a debate that is very much on display during the current presidential transition. But putting near-term politics aside, the mounting economic and scientific evidence leave me confident that trends toward a clean-energy economy that have emerged during my presidency will continue and that the economic opportunity for our country to harness that trend will only grow. This Policy Forum will focus on the four reasons I believe the trend toward clean energy is irreversible. Economies grow, emissions fall The United States is showing that GHG mitigation need not conflict with economic growth. Rather, it can boost efficiency, productivity, and innovation. Since 2008, the United States has experienced the first sustained period of rapid GHG emissions reductions and simultaneous economic growth on record. Specifically, CO2emissions from the energy sector fell by 9.5% from 2008 to 2015, while the economy grew by more than 10%. 
In this same period, the amount of energy consumed per dollar of real gross domestic product (GDP) fell by almost 11%, the amount of CO2 emitted per unit of energy consumed declined by 8%, and CO2 emitted per dollar of GDP declined by 18% (2). The importance of this trend cannot be overstated. This “decoupling” of energy sector emissions and economic growth should put to rest the argument that combatting climate change requires accepting lower growth or a lower standard of living. In fact, although this decoupling is most pronounced in the United States, evidence that economies can grow while emissions do not is emerging around the world. The International Energy Agency’s (IEA’s) preliminary estimate of energy-related CO2 emissions in 2015 reveals that emissions stayed flat compared with the year before, whereas the global economy grew (3). The IEA noted that “There have been only four periods in the past 40 years in which CO2 emission levels were flat or fell compared with the previous year, with three of those—the early 1980s, 1992, and 2009—being associated with global economic weakness. By contrast, the recent halt in emissions growth comes in a period of economic growth.” At the same time, evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term. Estimates of the economic damages from warming of 4°C over preindustrial levels range from 1% to 5% of global GDP each year by 2100 (4). One of the most frequently cited economic models pins the estimate of annual damages from warming of 4°C at ~4% of global GDP (4–6), which could lead to lost U.S. federal revenue of roughly $340 billion to $690 billion annually (7).
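The three intensity figures quoted for 2008-2015 are mutually consistent: CO2 per dollar of GDP is the product of energy per dollar of GDP and CO2 per unit of energy, so the percentage declines compound multiplicatively rather than adding. A quick check:

```python
# Kaya-style consistency check: (CO2/GDP) = (energy/GDP) * (CO2/energy),
# so an 11% fall in energy intensity and an 8% fall in the carbon
# intensity of energy compound to roughly the 18% fall in CO2 per
# dollar of GDP reported in the text.

energy_per_gdp_change = -0.11   # 2008-2015, from the text
co2_per_energy_change = -0.08   # 2008-2015, from the text

co2_per_gdp_change = (1 + energy_per_gdp_change) * (1 + co2_per_energy_change) - 1
print(f"{co2_per_gdp_change:.1%}")  # -18.1%
```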
Moreover, these estimates do not include the possibility of GHG increases triggering catastrophic events, such as the accelerated shrinkage of the Greenland and Antarctic ice sheets, drastic changes in ocean currents, or sizable releases of GHGs from previously frozen soils and sediments that rapidly accelerate warming. In addition, these estimates factor in economic damages but do not address the critical question of whether the underlying rate of economic growth (rather than just the level of GDP) is affected by climate change, so these studies could substantially understate the potential damage of climate change on the global macroeconomy (8, 9). As a result, it is becoming increasingly clear that, regardless of the inherent uncertainties in predicting future climate and weather patterns, the investments needed to reduce emissions—and to increase resilience and preparedness for the changes in climate that can no longer be avoided—will be modest in comparison with the benefits from avoided climate-change damages. This means, in the coming years, states, localities, and businesses will need to continue making these critical investments, in addition to taking common-sense steps to disclose climate risk to taxpayers, homeowners, shareholders, and customers. Global insurance and reinsurance businesses are already taking such steps as their analytical models reveal growing climate risk. Private-sector emissions reductions Beyond the macroeconomic case, businesses are coming to the conclusion that reducing emissions is not just good for the environment—it can also boost bottom lines, cut costs for consumers, and deliver returns for shareholders. Perhaps the most compelling example is energy efficiency. 
Government has played a role in encouraging this kind of investment and innovation: My Administration has put in place (i) fuel economy standards that are net beneficial and are projected to cut more than 8 billion tons of carbon pollution over the lifetime of new vehicles sold between 2012 and 2029 (10) and (ii) 44 appliance standards and new building codes that are projected to cut 2.4 billion tons of carbon pollution and save $550 billion for consumers by 2030 (11). But ultimately, these investments are being made by firms that decide to cut their energy waste in order to save money and invest in other areas of their businesses. For example, Alcoa has set a goal of reducing its GHG intensity 30% by 2020 from its 2005 baseline, and General Motors is working to reduce its energy intensity from facilities by 20% from its 2011 baseline over the same timeframe (12). Investments like these are contributing to what we are seeing take place across the economy: Total energy consumption in 2015 was 2.5% lower than it was in 2008, whereas the economy was 10% larger (2). This kind of corporate decision-making can save money, but it also has the potential to create jobs that pay well. A U.S. Department of Energy report released this week found that ~2.2 million Americans are currently employed in the design, installation, and manufacture of energy-efficiency products and services. This compares with the roughly 1.1 million Americans who are employed in the production of fossil fuels and their use for electric power generation (13). Policies that continue to encourage businesses to save money by cutting energy waste could pay a major employment dividend and are based on stronger economic logic than continuing the nearly $5 billion per year in federal fossil-fuel subsidies, a market distortion that should be corrected on its own or in the context of corporate tax reform (14). 
Market forces in the power sector The American electric-power sector—the largest source of GHG emissions in our economy—is being transformed, in large part, because of market dynamics. In 2008, natural gas made up ~21% of U.S. electricity generation. Today, it makes up ~33%, an increase due almost entirely to the shift from higher-emitting coal to lower-emitting natural gas, brought about primarily by the increased availability of low-cost gas due to new production techniques (2, 15). Because the cost of new electricity generation using natural gas is projected to remain low relative to coal, it is unlikely that utilities will change course and choose to build coal-fired power plants, which would be more expensive than natural gas plants, regardless of any near-term changes in federal policy. Although methane emissions from natural gas production are a serious concern, firms have an economic incentive over the long term to put in place waste-reducing measures consistent with standards my Administration has put in place, and states will continue making important progress toward addressing this issue, irrespective of near-term federal policy. Renewable electricity costs also fell dramatically between 2008 and 2015: the cost of electricity fell 41% for wind, 54% for rooftop solar photovoltaic (PV) installations, and 64% for utility-scale PV (16). According to Bloomberg New Energy Finance, 2015 was a record year for clean-energy investment, with those energy sources attracting twice as much global capital as fossil fuels (17). Public policy—ranging from Recovery Act investments to recent tax credit extensions—has played a crucial role, but technology advances and market forces will continue to drive renewable deployment. The levelized cost of electricity from new renewables like wind and solar in some parts of the United States is already lower than that for new coal generation, without counting subsidies for renewables (2). 
That is why American businesses are making the move toward renewable energy sources. Google, for example, announced last month that, in 2017, it plans to power 100% of its operations using renewable energy—in large part through large-scale, long-term contracts to buy renewable energy directly (18). Walmart, the nation’s largest retailer, has set a goal of getting 100% of its energy from renewables in the coming years (19). And economy-wide, solar and wind firms now employ more than 360,000 Americans, compared with around 160,000 Americans who work in coal electric generation and support (13). Beyond market forces, state-level policy will continue to drive clean-energy momentum. States representing 40% of the U.S. population are continuing to move ahead with clean-energy plans, and even outside of those states, clean energy is expanding. For example, wind power alone made up 12% of Texas’s electricity production in 2015 and, at certain points in 2015, that number was >40%, and wind provided 32% of Iowa’s total electricity generation in 2015, up from 8% in 2008 (a higher fraction than in any other state) (15, 20). Outside the United States, countries and their businesses are moving forward, seeking to reap benefits for their countries by being at the front of the clean-energy race. This has not always been the case. A short time ago, many believed that only a small number of advanced economies should be responsible for reducing GHG emissions and contributing to the fight against climate change. But nations agreed in Paris that all countries should put forward increasingly ambitious climate policies and be subject to consistent transparency and accountability requirements. This was a fundamental shift in the diplomatic landscape, which has already yielded substantial dividends. 
The Paris Agreement entered into force in less than a year, and, at the follow-up meeting this fall in Marrakesh, countries agreed that, with more than 110 countries representing more than 75% of global emissions having already joined the Paris Agreement, climate action “momentum is irreversible” (21). Although substantive action over decades will be required to realize the vision of Paris, analysis of countries’ individual contributions suggests that meeting medium-term respective targets and increasing their ambition in the years ahead—coupled with scaled-up investment in clean-energy technologies—could increase the international community’s probability of limiting warming to 2°C by as much as 50% (22). Were the United States to step away from Paris, it would lose its seat at the table to hold other countries to their commitments, demand transparency, and encourage ambition. This does not mean the next Administration needs to follow identical domestic policies to my Administration’s. There are multiple paths and mechanisms by which this country can achieve—efficiently and economically—the targets we embraced in the Paris Agreement. The Paris Agreement itself is based on a nationally determined structure whereby each country sets and updates its own commitments. Regardless of U.S. domestic policies, it would undermine our economic interests to walk away from the opportunity to hold countries representing two-thirds of global emissions—including China, India, Mexico, European Union members, and others—accountable. This should not be a partisan issue. It is good business and good economics to lead a technological revolution and define market trends. And it is smart planning to set long-term emission-reduction targets and give American companies, entrepreneurs, and investors certainty so they can invest and manufacture the emission-reducing technologies that we can use domestically and export to the rest of the world. 
That is why hundreds of major companies—including energy-related companies from ExxonMobil and Shell, to DuPont and Rio Tinto, to Berkshire Hathaway Energy, Calpine, and Pacific Gas and Electric Company—have supported the Paris process, and leading investors have committed $1 billion in patient, private capital to support clean-energy breakthroughs that could make even greater climate ambition possible. We have long known, on the basis of a massive scientific record, that the urgency of acting to mitigate climate change is real and cannot be ignored. In recent years, we have also seen that the economic case for action—and against inaction—is just as clear, the business case for clean energy is growing, and the trend toward a cleaner power sector can be sustained regardless of near-term federal policies. Despite the policy uncertainty that we face, I remain convinced that no country is better suited to confront the climate challenge and reap the economic benefits of a low-carbon future than the United States and that continued participation in the Paris process will yield great benefit for the American people, as well as the international community. Prudent U.S. policy over the next several decades would prioritize, among other actions, decarbonizing the U.S. energy system, storing carbon and reducing emissions within U.S. lands, and reducing non-CO2 emissions (23). Of course, one of the great advantages of our system of government is that each president is able to chart his or her own policy course. And President-elect Donald Trump will have the opportunity to do so. The latest science and economics provide a helpful guide for what the future may bring, in many cases independent of near-term policy choices, when it comes to combatting climate change and transitioning to a clean-energy economy. Our thanks to President Obama and Science.org for permission to republish this article.

References and Notes
- T. F. Stocker et al., in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, T. F. Stocker et al., Eds. (Cambridge Univ. Press, New York, 2013), pp. 33–115.
- Council of Economic Advisers, in “Economic report of the President” (Council of Economic Advisers, White House, Washington, DC, 2017), pp. 423–484; http://bit.ly/2ibrgt9.
- International Energy Agency, “World energy outlook 2016” (International Energy Agency, Paris, 2016).
- W. Nordhaus, The Climate Casino: Risk, Uncertainty, and Economics for a Warming World (Yale Univ. Press, New Haven, CT, 2013).
- The result for 4°C of warming cited here from DICE-2016R (in which this degree of warming is reached between 2095 and 2100 without further mitigation) is consistent with that reported from the DICE-2013R model in (5), Fig. 22, p. 140.
- U.S. Office of Management and Budget, Climate Change: The Fiscal Risks Facing the Federal Government (OMB, Washington, DC, 2016); http://bit.ly/2ibxJo1.
- M. Burke, S. M. Hsiang, E. Miguel, Nature 527, 235 (2015). doi:10.1038/nature15725; pmid:26503051.
- M. Dell, B. F. Jones, B. A. Olken, Am. Econ. J. Macroecon. 4, 66 (2012). doi:10.1257/mac.4.3.66.
- U.S. Environmental Protection Agency, U.S. Department of Transportation, “Greenhouse gas emissions and fuel efficiency standards for medium- and heavy-duty engines and vehicles—Phase 2: Final rule” (EPA and DOT, Washington, DC, 2016), table 5-40, pp. 5-5–5-42.
- DOE, Appliance and Equipment Standards Program (Office of Energy Efficiency and Renewable Energy, DOE, 2016); http://bit.ly/2iEHwebsite.
- The White House, “Fact Sheet: White House announces commitments to the American Business Act on Climate Pledge” (White House, Washington, DC, 2015); http://bit.ly/2iBxWHouse.
- BW Research Partnership, U.S. Energy and Employment Report (DOE, Washington, DC, 2017).
- U.S. Department of the Treasury, “United States—Progress report on fossil fuel subsidies” (Treasury, Washington, DC, 2014); www.treasury.gov.
- U.S. Energy Information Administration, “Monthly Energy Review, November 2016” (EIA, Washington, DC, 2015); http://bit.ly/2iQjPbD.
- DOE, Revolution…Now: The Future Arrives for Five Clean Energy Technologies—2016 Update (DOE, Washington, DC, 2016); http://bit.ly/2hTv1WG.
- A. McCrone, Ed., Clean Energy Investment: By the Numbers—End of Year 2015 (Bloomberg, New York, 2015); http://bloom.bg/2jaz4zG.
- U. Hölzle, “We’re set to reach 100% renewable energy—and it’s just the beginning” (Google, 2016); http://bit.ly/2hTEbSR.
- Walmart, Walmart’s Approach to Renewable Energy (Walmart, 2014); http://bit.ly/2j5A_PDF.
- R. Fares, “Texas sets all-time wind energy record” [blog], Sci. Am., 14 January 2016; http://bit.ly/2iBj9Jq.
- United Nations Framework Convention on Climate Change, Marrakech Action Proclamation for Our Climate and Sustainable Development (UNFCCC, 2016); http://bit.ly/2iQnUNFCCC.
- A. Fawcett et al., Science 350, 1168 (2015). doi:10.1126/science.aad5761; pmid:26612835.
- The White House, United States Mid-Century Strategy for Deep Decarbonization (White House, Washington, DC, 2016); http://bit.ly/2hRSWhiteHouse.

ACKNOWLEDGMENTS: B. Deese, J. Holdren, S. Murray, and D. Hornung contributed to the researching, drafting, and editing of this article.
From predicting a sales forecast to predicting the shortest route to a destination, Data Science has a wide range of applications across industries: engineering, marketing, sales, operations, supply chain, and more. You name it, and there is an application of data science. And the application of data science is growing exponentially! The situation is such that the demand for people skilled in data science is higher than the supply academia currently produces. Starting with this article, I will be writing a series of blog posts on how to solve a Data Science problem in real life and in data science competitions. While there could be different approaches to solving a problem, the broad structure of solving a Data Science problem remains more or less the same. The approach that I usually follow is outlined below.

Step 1: Identify the problem and know it well

In real-life scenarios: Identification of a problem and understanding the problem statement is one of the most critical steps in the entire process of solving a problem. One needs to do a high-level analysis of the data and talk to the relevant functions (marketing, operations, technology, product, etc.) in the organization to understand the problems and see how these problems can be validated and solved through data. To give a real-life example, I will briefly take you through a problem that I worked on recently: the customer retention analysis of an e-learning platform. This particular case is a classification problem where the target variable is binary, i.e., one needs to predict whether a user will be an active learner on the platform in the next ‘x’ days, based on her behavior and interactions on the platform over the last ‘y’ days. As I just mentioned, identification of the problem is one of the most critical steps. In this particular case, it was identifying that there is an issue with user retention on the platform.
And as an immediate actionable step, it is important to understand the underlying factors that are causing users to leave the platform (or become non-active learners). Now the question is how we do this.

In the case of Data Science challenges: The problem statement is generally well defined, and all you need to do is understand it clearly and come up with a suitable solution. If needed, additional primary and secondary research on the problem statement should be done, as it helps in coming up with a better solution. If needed, additional variables and cross features can be created based on subject expertise.

Step 2: Get the relevant data

In real-life scenarios: Once the problem is identified, data scientists need to talk to the relevant functions (marketing, operations, technology, product, etc.) in the organization to understand the possible trigger points of the problem and identify relevant data for the analysis. Once this is done, all the relevant data should be extracted from the database. Continuing my narration of the problem I recently worked on, I did a thorough audit of the platform, the user journey, and the actions users performed on the platform, with the help of the product and development teams. This audit gave me a thorough understanding of the database architecture and the potential data logs that were captured and could be considered for the analysis. An extensive list of data points (variables or features) was collated with the help of the relevant stakeholders in the organization. In essence, this step not only helps in understanding the DB architecture and the data extraction process; it also helps in identifying potential issues within the DB (if any), missing logs in the user journey that were not captured previously, and so on. This in turn helps the development team add the missing logs and enhance the architecture of the DB.
Now that we have done the data extraction, we can proceed with the data pre-processing step in order to prepare the data for the analysis.

Data Science challenges: In the case of Data Science challenges, a dataset is often provided.

Step 3: Perform exploratory data analysis

To begin with, data exploration is done to understand the patterns of each of the variables. Some basic plots such as histograms and box plots are analyzed to check whether there are any outliers, class imbalances, missingness, or anomalies in the dataset. Data exploration and data pre-processing are closely related and are often clubbed together.

Step 4: Pre-process the data

To get a reliable, reproducible, and unbiased analysis, certain pre-processing steps should be followed. In my recent study, I followed the steps below; these are some of the standard steps followed while performing any analysis:

- Data cleaning and treating missingness in the data: Data often comes with missing values, and it is always a struggle to get quality data.
- Standardization/normalization (if needed): Variables in a dataset often span widely different ranges; standardization/normalization brings them to a common scale, which is a prerequisite for several machine learning models.
- Outlier detection: It is important to know whether there are any anomalies in the dataset and treat them if required; otherwise you might end up with skewed results.
- Splitting the data into training and test sets for model training and evaluation:
  - Train dataset: Models are trained on the training dataset.
  - Test dataset: Once the model is built on the training dataset, it should be tested on the test data to check its performance.

The pre-processing step is common to both real-life data science problems and competitions alike.
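The standardization and train/test split steps above can be sketched in a few lines. This is a minimal, dependency-free Python illustration (the `minutes` values and the 80/20 split ratio are made-up assumptions, not data from the retention analysis; in practice you would use library helpers for this):

```python
import random

def standardize(values):
    """Z-score standardization: shift a feature to mean 0, std 1."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0  # guard against zero-variance features
    return [(v - mean) / std for v in values]

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle the rows reproducibly, then split into train/test sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

# Illustrative feature: minutes a user spent on the platform.
minutes = [5, 120, 43, 87, 10, 300, 65, 12, 240, 98]
scaled = standardize(minutes)
train, test = train_test_split(scaled)
print(len(train), len(test))  # 8 2
```

Fixing the shuffle seed keeps the split reproducible, which matters when you later compare models trained on the same partition.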
Now that we have pre-processed the data, we can move on to defining the model evaluation parameters and exploring the data further.

Step 5: Define model evaluation parameters

Arriving at the right parameters to assess a model is critical before performing the analysis. Based on the problem and the aspects of it you care most about, one needs to define the model evaluation parameters. Some of the widely used evaluation measures are listed below:

- Receiver Operating Characteristic (ROC): A visualization tool that plots the relationship between the true positive rate and the false positive rate of a binary classifier. ROC curves can be used to compare the performance of different models by measuring the area under the curve (AUC) of their plotted scores, which ranges from 0.0 to 1.0. The greater this area, the better the model separates the classes.
- Classification accuracy
- Confusion matrix
- Mean absolute error
- Mean squared error
- Precision and recall

The model performance evaluation should be done on the test dataset created during the pre-processing step; this test dataset should remain untouched during the entire model training process. Coming to the customer retention analysis that I worked on, my goal was to predict the users who would leave the platform or become non-active learners. In this specific case, I picked a model with a good true positive rate in its confusion matrix. Here, a true positive is a case in which the model predicts a positive result (i.e., the user leaves the platform or becomes a non-active learner) that matches the actual outcome. Let’s not worry about the process of picking the right model evaluation parameter; I will give a detailed explanation in the next articles in the series.

Data Science challenges: Often, the model evaluation parameters are given in the challenge.
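To make the confusion-matrix terms concrete, here is a small pure-Python sketch of accuracy, precision, and recall (the true positive rate discussed above). The label vectors are invented for illustration, with 1 meaning "user became a non-active learner"; in a real project these numbers would come from predictions on the held-out test set:

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives/negatives for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def evaluation_report(y_true, y_pred):
    """Summarize accuracy, precision, and recall from the counts."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

# 1 = user became a non-active learner, 0 = user stayed active.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(evaluation_report(y_true, y_pred))
```

A high recall is what the retention case above optimizes for: of the users who actually left, how many did the model catch.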
Step 7: Perform feature engineering

This step is performed in order to identify the important features to be used in the model (basically, we need to remove the redundant features, if any). Metrics such as AIC and BIC are used to identify redundant features, and there are packages such as stepAIC (forward and backward feature selection) in R that help in performing these steps. Algorithms such as Boruta are also helpful in understanding feature importance. In my case, I used Boruta to identify the important features required for applying a machine learning model. In general, feature engineering has the following steps:

- Transform features: A feature in the dataset might not have a linear relationship with the target variable; we would get to know this in the exploratory data analysis. I usually try to apply various transformations, such as inverse, log, polynomial, logit, and probit, and pick the one that most closely captures the relationship between the target variable and the feature.
- Create cross features or new relevant variables: We can create cross features based on domain knowledge. For example, if we were given a batsman's profile (say Sachin or Virat) with the data points name, number of matches played, and total runs scored, we can create a new cross feature called batting average = runs scored / matches played.

Once we run algorithms such as Boruta, we get the feature importance. Now that we know which features are important, we can proceed to the model building exercise.

Step 8: Build the model

Various machine learning models can be tried based on the problem statement. We can start fitting models (linear regression, logistic regression, random forest, neural networks, and so on) and enhance the fitted models further (through cross-validation, tuning the hyper-parameters, etc.).
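The batting-average cross feature described under Step 7 takes only a few lines to derive. This is an illustrative Python sketch using the source's own definition (runs scored / matches played); the specific records and numbers are hypothetical:

```python
# Hypothetical batsman records; the numbers are illustrative only.
batsmen = [
    {"name": "Sachin", "matches": 463, "runs": 18426},
    {"name": "Virat", "matches": 254, "runs": 12169},
]

def add_batting_average(records):
    """Derive the cross feature batting_average = runs / matches."""
    for r in records:
        r["batting_average"] = round(r["runs"] / r["matches"], 2)
    return records

for r in add_batting_average(batsmen):
    print(r["name"], r["batting_average"])
```

The same pattern generalizes to any domain-knowledge ratio or interaction feature: compute it once during preparation so every model sees it as an ordinary input column.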
Step 9: Perform model comparison

Now that we have built various models, it is extremely important to compare them and identify the best one based on the defined problem and the model evaluation parameters (defined in Step 5). In my example, I experimented with logistic regression, random forest, decision tree, neural networks, and extreme gradient boosting. Of these, extreme gradient boosting turned out to be the best model for the given data and the problem at hand.

Step 10: Communicate the result

Data visualization and proper interpretation of the models should be done in this step. This provides valuable insights that help the various teams in an organization make informed, data-driven decisions. The final visualization and communication should be intuitive enough that anyone can understand and interpret the results. Further, the end user who consumes the analysis should be able to turn it into actionable points that further the growth of the organization.

Well, this summarizes the steps for solving a data science problem.

Become a guide. Become a mentor. I welcome you to share your experience in data science: your learning journey, competitions, data science projects, and anything else related to Data Science. Your learnings could help a large number of aspiring data scientists! Interested? Submit here.
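The comparison in Step 9 ultimately reduces to ranking the candidate models by the evaluation parameter chosen in Step 5. A minimal sketch of that selection step follows; the scores are invented for illustration, and in practice each would come from evaluating a trained model on the untouched test set:

```python
# Hypothetical held-out recall scores for each candidate model.
scores = {
    "logistic_regression": 0.71,
    "random_forest": 0.78,
    "decision_tree": 0.69,
    "neural_network": 0.75,
    "extreme_gradient_boosting": 0.83,
}

def best_model(scores):
    """Return the name of the model with the highest metric."""
    return max(scores, key=scores.get)

# Rank all candidates, best first, then pick the winner.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
print(best_model(scores))  # extreme_gradient_boosting
```

Because every model is scored on the same untouched test partition and the same metric, this ranking is a fair, like-for-like comparison.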
The priorities of modern educators are crystal clear — and backward.

Nearly two centuries ago, the great French political analyst Alexis de Tocqueville observed in Democracy in America that religion was the “first” of this country’s political institutions. By this he meant that widespread, but tolerant and noncoercive, religious observance provided the foundation of American mores, which in turn were the precondition of the responsible exercise of self-government. Recognizing the beneficial effect of religion on mores, even intelligent atheists saw the encouragement of religion as serving the country’s, and therefore their own, long-term interests. In this, Tocqueville was echoing the observation George Washington made in his farewell address, to the effect that moral behavior, in most citizens, presupposed religious belief, which should therefore be encouraged by government (for instance, through periodic proclamations of thanksgiving to God for our bounty).

For most of America’s history, there was general agreement with Tocqueville’s and Washington’s sentiments. As late as 1952 the left-libertarian Supreme Court justice William O. Douglas, speaking on behalf of a 7–2 majority in the case of Zorach v. Clauson, which upheld the constitutionality of a “released-time” program enabling public-school students to be excused from class at their parents’ request in order to receive religious instruction at their respective houses of worship, observed that “we are a religious people whose institutions presuppose a Supreme Being. . . . When the state encourages religious instruction or cooperates with religious authorities by adjusting the schedule of public events to sectarian needs, it follows the best of our traditions” by respecting the people’s “religious character.”

This attitude was to change in subsequent decades at the levels of both judicial rulings and “elite” intellectual opinion. The public display of the Ten Commandments outside a courthouse was found to violate the Constitution’s establishment clause. In Lee v. Weisman (1992), the Court, by a 5–4 majority, ruled that the First Amendment proscribed a nondenominational benediction at a Rhode Island middle-school graduation, delivered by rotating clergy of various sects, on the ground that merely experiencing social pressure to stand for the benediction might cause a graduate to violate her conscience, thus once again constituting an unconstitutional establishment. (The invocation delivered at the 1989 commencement that provoked the suit, offered by a rabbi, thanked God “for the legacy of America where diversity is celebrated and the rights of minorities are protected,” while his benediction called for God’s blessing on the school’s staff, while exhorting all those present “to do justly, to love mercy, to walk humbly.”) As Justice Scalia observed in his dissent, the Court thereby proceeded to “lay waste” to “a tradition . . . as old as public-school graduation ceremonies themselves, and that is a component of an even more longstanding American tradition of nonsectarian prayer . . . at public celebrations generally,” ceremonies in harmony with the appeal to God in the Declaration of Independence, with Washington’s first inaugural address, and with the custom of opening the Court’s own sessions with the invocation “God save the United States and this honorable court.”

Since 1976 a nonprofit group, the Freedom from Religion Foundation (FFRF), boasting 32,000 members, has worked tirelessly to expand the protection of the American people from any intimation of public encouragement of religion. In one of its most recent victories, the FFRF induced the government of Ashburnham, Mass., to remove from the playground of its public library “a turning game” depicting the story of Noah’s ark, which a Foundation spokesman termed a “vengeful” story, whose placement aimed at “young children” it found especially “troublesome.” “Enlightened parents today,” the representative observed, regard the story of the Flood as “barbarous” — all the more so because “many Americans believe it is literally true.”

But even as the FFRF and sympathetic members of the judiciary work to protect young people from dangerous intimations of religious dogma, support has grown for another kind of public “education” ostensibly concerned to promote their well-being: so-called comprehensive sexuality education. Advertised as needed to combat the spread of sexually transmitted diseases and unwanted teen pregnancies, the movement is promoted by such prominent institutions as the Centers for Disease Control, Planned Parenthood, and the World Health Organization. “RRR: Rights, Respect, and Responsibility,” or 3R, is a curriculum created by two former Planned Parenthood employees and available from Advocates for Youth. The “comprehensive sex education” program has just been adopted by the school committee of Worcester, Mass.
A summary of the program, provided by the nonprofit group Family Watch International, suggests that the scope, and underlying intent, of so-called comprehensive sex-ed programs extend far beyond those objectives. Rather, running throughout the curriculum is a program for challenging “traditional” gender norms. It encourages children as young as 10 or 11, entering puberty (at a time when feelings of ambivalence are normal), to reconsider whether their “real” gender is different from the one they were (biologically) “assigned” at birth; in the seventh grade, students are taught about their “right to express their gender as it makes most sense to them.” As part of that lesson, students are instructed to try to explain to a hypothetical extraterrestrial visitor “what a ‘boy’ and ‘girl’ are using commonly held stereotypes about gender.” In the ninth grade, continuing the theme of the arbitrariness of gender “assignments,” they are invited to consider the situation of a person who rejects the female gender assigned at birth: “You hate all of the boxes that society puts people in and identify as genderqueer. You work hard to have a gender-nonconforming appearance and style. You love gender-bending and you feel like with Sydney [another student with whom you are invited to role-play] you have finally met someone who truly ‘gets you.’”

As the foregoing excerpts show, few of these lessons have anything to do with the stated purposes for adopting a sex-education curriculum: preventing pregnancy or STDs. Instead, the aim is to encourage students to rethink their gender identity and sexual orientation, to challenge traditional sources of moral authority, and to regard sexual activity, as early as the tenth grade, as a “right” free of parental interference. Nor is the curriculum consistent even in its professed support of students’ own choices: in the seventh-grade curriculum, students are to be led through what is described as a “forced choice activity assessing their views about homophobia in their schools,” enabling them to “be the change.”

But of course, even in this “enlightened” age, public-school programming cannot (yet) be expected to keep pace with the more outré instruction offered at private schools such as Manhattan’s Columbia Grammar and Preparatory School, which this past May added to the curriculum a fourth R, “raunch,” as reported in the New York Post. Juniors at the $47,000-a-year school showed up for a “health and sexuality” workshop, expecting, as one student put it, that it would just “be about condoms or birth control.” Instead, they were made to attend “something called ‘Pornography Literacy: An Intersectional Focus on Mainstream Porn,’” taught by the director of health and wellness at the Dalton School, another elite prep school. Included in the slide presentation and lecture were lessons on how porn takes care of “male vulnerabilities”; statistics supposedly showing that “straight women have far fewer orgasms than gay men or women”; and illustrations of various porn genres such as “incest-themed,” consensual or “vanilla,” “barely legal,” and “kink and BDSM” (including “waterboard electro” torture porn).

Several years ago the astute social critic Mary Eberstadt noted a striking reversal in social attitudes concerning eating and sex between the 1950s and today. In the 1950s, Americans tended to be far from finicky about the foods they ate (e.g., TV dinners), while being much more particular about with whom, and under what circumstances, they had sex.
Now, she observed, popular attitudes, at least among the enlightened, have reversed: there are few limits on acceptable sexual practices and partners, even as Americans grow ever more finicky about their eating (organic, vegan, local). A similar reversal, it would seem, has occurred in our attitudes toward the moral education of our youth, and toward the general standards held up to citizens generally, with respect to matters religious and sexual. On the one hand, the nature of religious liberty has been reinterpreted, from protecting against coercion to shielding vulnerable youth (and adults) from any public display of support for religion, lest they feel "offended" or "pressured." At the same time, not only have general sexual attitudes been liberalized over recent decades; a sizable body of "experts" wishes to take charge of transforming children's views of sexuality and "gender," with little input or active consent from their parents. The Worcester Telegram and Gazette noted that, at the school-committee public meeting where the 3Rs curriculum was adopted, a roughly equal number of citizens spoke on each side of the issue, with opponents including a significant number of racial- and ethnic-minority members as well as clergy. But the committee's vote simply disregarded all objections from parents protesting that the public schools had no right to engage in trying to transform their kids' sexual practices or gender "identities" in the name of "tolerance." A majority of the school committee simply dismissed these concerns. Maybe Tocqueville, Washington, and Douglas were on to something.
Whom would you rather have supervising our children's moral development: parents, members of the clergy, and statesmen who understand the dependence of political liberty on widespread religious belief and basic morality, or self-styled progressive "experts" who rely, as the producers of the 3Rs curriculum emphasize, on the "tenets of social learning theory" and "social cognitive theory"?
Festivals in Ancient Egypt

The ancient Egyptian festivals were all based on religion, much like the majority of our modern feasts. The ancient Egyptian gods were visible to all: the supreme god Ra, the Sun, emerged from the underworld every day, while animal deities surrounded the ancient Egyptians with all their mighty powers. Egyptians awaited the festivals with bated breath every year to connect with their beloved divinities in the most intimate way possible, visiting temples all at once, presenting offerings, dancing and singing, and praying for their dreams and desires to come true. Although no texts survive from the Old Kingdom (about 2686-2181 BC) describing the procedures followed in any of these festivals, we still have monuments that teach us a great deal about them. We can learn about the Heb-Sed (Jubilee celebration) from the scale and style of King Djoser's open-courtyard complex, while in the New Kingdom, remarkable details of ancient Egyptian festivities are documented on the walls of temples and sacred sanctuaries. Many festivals known as "Heb" were held throughout the year to pay gratitude to the gods and request divine graces. The Egyptians would present sacrifices and offerings and celebrate the "divine" might at these festivals, but their true purpose was for the Egyptians to see the sons of the gods, the kings, with their own eyes and to maintain the belief that the world is run by the gods' will as interpreted by the priests and implemented by the king. Festivals in ancient Egypt were simply expressions of the divine in human existence, and as such, they established a life pattern for the Egyptians.
The ancient Egyptian calendar is divided into 12 months of exactly 30 days, and the year was divided into three seasons: Akhet, the season of flooding; Peret, the season of irrigation and growth; and Shemu, the season of harvesting. Five extra days (epagomenal days) were added, each celebrated with its own special festival. A procession of a god along a particular path, such as the one seen at Karnak temple, was part of ancient Egyptian festivals. They would hold a celebration at the start of each year and again at the end to emphasize the concept of life's everlasting, cyclical nature. Throughout the year, Egypt hosted a plethora of events and festivals.

Festivals on the epagomenal days (epagomenae)

The calendar year for ancient Egyptians consisted of 12 months of 30 days each, leaving five additional days, or "epagomenal days," at the end of each year. The sky goddess Nut was said to have given birth to her offspring Osiris, Horus, Seth, Isis, and Nephthys on these days, which were honored as "days out of time."
- Day 1: Osiris' birthday Festival
- Day 2: Horus' birthday Festival (Hormas)
- Day 3: Seth's birthday Festival
- Day 4: Isis' birthday Festival
- Day 5: Nephthys' birthday Festival

5 Major Ancient Egyptian Festivals

Birthday of the God Horus on the second epagomenal day (Hormas, and later Christmas): one of the most important ancient Egyptian festivals. Every ancient Egyptian temple, even those erected by the Greek Ptolemies, contains a particular chapel dedicated to the divine birth of Horus; the hieroglyphic term for this chapel is "ma-mse," which translates as "birth chapel." Every year, the ancient Egyptians celebrated the birth of Horus, honouring the miraculous birth of the saviour who represents the struggle between good and evil and who maintains the delicate balance of existence.
- Ancient Egyptians' New Year Day (Wepet-Renpet Festival): The New Year's Day ceremony in ancient Egypt was called "The Opening of the Year." Because the celebration depended on the Nile River's flood, it was a kind of movable feast. It commemorated Osiris' death and rebirth, as well as the renewal and rebirth of the land and people. It is solidly attested as beginning in the Early Dynastic Period of Egypt (c. 3150 – c. 2613 BCE) and is clear proof of the Osiris cult's prominence in the period. This event included eating and drinking, as did most others, and the celebration could extend for days, depending on the time period. Osiris' death was commemorated with solemn ceremonies, as well as singing and dancing to celebrate his rebirth. The Lamentations of Isis and Nephthys, a call-and-response poem, was recited at the beginning to summon Osiris to his feast.
- The Opet Festival (Wedding of Amun & Mut): The Opet festival took place in Akhet, the second month of the Egyptian calendar. It is the most significant festival in Egyptian history, and also the longest celebration in the Theban festival calendar, lasting anywhere from 11 to 15 or even 20 days. At Thebes, the king was revitalized by the god Amun as part of the celebration. The celebration would begin with the journey of the Theban god Amun from the Karnak temple to the temple of Luxor, where he would be married to the goddess Mut in a holy ceremony. After the divine wedding ritual took place at Luxor temple, Amun-Re of Karnak would relocate to Luxor temple and oversee the re-creation of the universe on an annual basis. The heavenly journey continued in the company of his wife, the goddess Mut, and they returned from Luxor Temple to Karnak Temple, where they announced the birth of the newly born deity Khonsu to the people. The king, too, was a part of this union and had a role in the rebirth of this heavenly force.
- The Festival of the Dead (Wag festival): This festival is dedicated to the death of Osiris and the honouring of the spirits of the departed as they travel through the afterlife. This celebration was held in conjunction with the Wepet-Renpet, although its date shifted according to the lunar calendar. Like Wepet-Renpet, it is one of the oldest holidays celebrated by the Egyptians, occurring for the first time during the Old Kingdom. During this event, people would build little paper boats and place them on graves facing west to signify Osiris' death, and they would also float paper shrines on the waters of the Nile for the same reason.
- Sacred Marriage of Hathor: It all started on the 18th of the tenth month, Paoni, when the figure of the goddess Hathor was taken from her sanctuary at Dendera to sail upriver to Horus' temple at Edfu. She and her followers arrived in Edfu on the new moon day at the end of summer. Horus left his temple and greeted his spouse on the water on the anniversary of his victory over Seth. The divine couple's arrival at the temple was marked by the Opening of the Mouth ceremony and the Offering of the First Fruits. This odd mix of funerary and harvest rites is presumably due to Horus' connection with Osiris, the deity of both. The couple spent the night in the Birth House. The next day's celebrations were different. The Festival of Behdet consisted of ceremonies to assure the people of Horus' reign and full authority. Visits to the necropolis and memorial services were among the events. It was said that Horus the Behdetite had retaken the Upper and Lower Egyptian crowns, commemorated by sacrificing an animal and a goat. "Praise to you, Ra, praises to you, Khepri, in all these lovely names. I saw you slay the monster and ascend beautifully." His adversaries were symbolically stomped underfoot, and their names were written on papyrus for everyone to see.
After the enemy was defeated, the celebrants enjoyed a night of delight. Presumably, this element of the ceremony was a signal to the priests, priestesses, king, queen, and most commoners to do the same. One of the main motivations for the celebration was presumably for mortals to "drink before the god" and "spend the night gaily." After two weeks of fun and games, the goddess Hathor returned to Dendera.

Other Ancient Egyptian Festivals

The Heb-Sed (Jubilee Festival)
This is a very specialized celebration, observed by the king every thirty years of his reign to confirm that he remained in complete conformity with the gods' wishes. As a sign of his authority over the country and his capacity to conquer other countries and enhance Egypt's influence, wealth, and strength, the king was also expected to run around an enclosed course to prove he was fit and to shoot arrows toward the four cardinal directions. Festivals drew people closer to the divine, brought the past and present together, and paved the path for the future, or simply provided opportunities for people to unwind and enjoy themselves.

Tekh Festival: The Feast of Drunkenness
This celebration was held in honor of Hathor ("The Lady of Drunkenness") and commemorated the moment when alcohol saved mankind from extinction. Ra, according to legend, had grown tired of people's incessant cruelty and foolishness and had dispatched Sekhmet to destroy them. She threw herself into her work with zeal, ripping people apart and drinking their blood. Ra was content with the devastation until the other gods reminded him that if he really intended to teach mankind a lesson, he should halt it before there was no one left to learn from it. Ra then commanded Tenenet, the goddess of beer, to dye a huge quantity of the beverage crimson and deliver it to Dendera, directly in Sekhmet's path of destruction. She discovered it and drank it all, believing it to be blood. She then fell asleep and awoke as the compassionate and benevolent Hathor.
Worshippers became intoxicated, slept, and were then awoken by drummers "to communicate with the goddess Mut [who was intimately associated with Hathor]" in the Hall of Drunkenness. Alcohol would lower inhibitions and prejudices, allowing them to experience the goddess more deeply as they awoke to the holy drums.

The Sokar Festival
In Egypt's Early Dynastic Period (c. 3150 – c. 2613 BCE), Sokar was an agricultural god whose attributes were subsequently absorbed by Osiris. In the Old Kingdom, the Sokar Festival was combined with the somber Khoiak Festival of Osiris, which commemorated his death. It began as a solemn event, but it evolved to incorporate Osiris' resurrection and was celebrated for over a month during the Late Period of Ancient Egypt (525-332 BCE). During the ceremonies, people planted Osiris Gardens and crops to commemorate the deity: as the plants sprang from the soil, they represented Osiris' rebirth from the grave. Planting crops during the event most likely dates back to Sokar's early worship.

Bast Festival / Bastet Festival
Another prominent event was the festival of the goddess Bastet at her cult center of Bubastis. It commemorated the birth of Bastet, the cat goddess who was the protector of women, children, and women's secrets, as well as the guardian of hearth and home. According to Herodotus, Bastet's celebration was the most extravagant and well-attended in Egypt. The Egyptologist Geraldine Pinch, quoting Herodotus, writes: "During the yearly festival in Bubastis, women were liberated from all restrictions. They drank, danced, made music, and displayed their genitals to commemorate the goddess's festival" (116). The women's "lifting of the skirts," as reported by Herodotus, showed the liberation from usual restraints seen during festivals, but in this case it also had to do with fertility.
Although Herodotus claims that over 700,000 people attended the celebration, there is little question that the goddess was one of the most popular in Egypt among both sexes, so the figure might well be accurate. The event included dancing, singing, and drinking in honor of Bastet, who was thanked for her gifts and asked for future blessings.

The Nehebkau Festival
Nehebkau was the deity who, at birth, joined the ka (soul) to the khat (body) and, after death, bound the ka to the ba (the soul's wandering aspect). As the people celebrated rebirth and renewal, the festival marked Osiris' resurrection and the restoration of his ka. In many ways, the celebration was comparable to the Wepet-Renpet New Year's Festival.

The Min Festival
From the Predynastic Period in Egypt (c. 6000 – c. 3150 BCE) onward, Min was the deity of fertility, vigor, and reproduction. He is generally depicted as a man with an erect penis and a flail in his hand. The Min Festival is thought to have begun in some form in the Early Dynastic Period, although it is best documented in the New Kingdom and afterward. The statue of Min was carried out of the temple by priests in a procession that featured holy singers and dancers, much as at the Opet Festival. When the procession reached the king, he would ceremonially cut the first sheaf of grain to represent his relationship to the gods, the land, and the people, and then offer the grain to the deity. In expectation of a happy reign that would bring fertility to the land and people, the celebration honored both the monarch and the deity.

You will have the opportunity to learn about ancient Egyptian history by touring beautiful temples, tombs, pyramids, and other monuments with an Egyptologist tour guide from the trusted travel agency Egypt Fun Tours. Travel to Egypt and take a Nile river cruise through the Nile valley to see everything the country has to offer.
As viewers enter Carolina Caycedo’s solo exhibition at the Museum of Contemporary Art (MCA) Chicago, they are greeted by a sculptural ofrenda, or offering, that hangs in absolute stillness from the ceiling. Composed of vibrantly colored fishing nets stacked to form a conical tent or skirt, the sculpture Limen (2019) welcomes viewers with the scent of fresh flowers that hang almost at their feet.
Reminiscent of the Mexican marigolds seen in Día de los Muertos altars, red, yellow, and orange flowers rest on a wooden gold-panning bowl suspended from the sculpture, evoking the greed of colonial enterprise. The Spanish legend of El Dorado, a New World land so rich and resplendent that kings would cover their bodies in gold dust, inspired countless European expeditions in search of a city of gold that never materialized, but that led to the exploration, mapping, and resource extraction of much of South America. As a counterpoint, the sculpture is meant to be placed at the entrance for protection. Limen, Latin for “threshold,” introduces the audience to Caycedo’s thematic interests—namely, how an Indigenous and woman-centered worldview can resist, and perhaps even restore, the destruction wrought by extractive economies. Caycedo (who was born in 1978 in London and lives and works in Los Angeles) is better known to international audiences, but the midcareer retrospective organized by the MCA’s Carla Acevedo-Yates provides an in-depth overview of her practice for American audiences. The twenty-year survey features Caycedo’s videos, artist books, sculptures, textiles, and photographs. Given how much of her work speaks to environmental concerns, Chicago is a particularly significant location. On the one hand, the city is home to a large Latinx community—approximately a third of its residents—and on the other, the region will soon be profoundly impacted by climate change. Chicagoland, and the Midwest more generally, is home to the largest body of freshwater lakes in the country, which might become subject to commodification, and their resources will surely attract climate migrants and refugees in the years to come. The protection of our waterways is central to most of the work on display. 
Much as Limen does with its fishing net construction, Caycedo’s intricate and precarious sculptures from the Cosmotarrayas series speak to the way traditional fishing villages have resisted the privatization of their rivers. The cast fishing net, or atarraya in Spanish, gestures to sustainable ecosystems that are under threat as hydroelectric dams continue to proliferate. The delicate fishing net sculptures that hang from the MCA’s high ceilings weave together the stories of these communities and how they believe in water as a common good. In addition to Caycedo’s sculptures, her artist books occupy a central place in the show. The Serpent River Book (2017) spreads its folds over a snake-like table. The seventy-two-page accordion contains archival images, maps, poetry, drawings, and texts that examine the life of rivers and water protectors. In her video installation Spaniards Named Her Magdalena, but Natives Call Her Yuma (2013), Caycedo dwells on the controversial El Quimbo Dam. The two-channel projection, shown in a separate room over a reflective pool, sets footage of the Magdalena River in Colombia against images of waterways in Germany and crowds of protestors being controlled by German police. These disparate images in two seemingly random geographies, some of which were filmed during the artist’s residency in Berlin, reflect the forces of control regulating nature and our bodies. Throughout the video, Caycedo’s voice can be heard narrating stories about the river in a hushed whisper. She recalls childhood memories of being frightened by the grandness of this body of water, seeing her uncle dive without fear, slowly making her way across and feeling how her body became one with the water as the current helped her reach the shore. She also discusses meeting with a Native elder from one of the Indigenous communities resisting the dam, who told her that damming was the equivalent of tying your veins or plugging your anus. 
This attention to Indigenous knowledge permeates Caycedo’s works. The artist urges viewers to see what critic Macarena Gómez-Barris calls “submerged perspectives” in her book The Extractive Zone: Social Ecologies and Decolonial Perspectives. These are framed as local knowledges that resist the “colonial extractive gaze by seeing the river as a place of subtle yet staggering social and ecological sustenance rather than merely as moving water to be harnessed for electricity” (Duke University Press, 2017, 15). Gómez-Barris’s theory gets at the heart of the exhibition, and indeed helps to clarify its title. The “submerged perspective” demands undoing the Western binaries of nature versus culture, human versus nonhuman, so viewers can begin to sense other ways of knowing, or the view From the Bottom of the River. The exhibition is accompanied by the first major publication on the artist, a true testament to the MCA’s Ascendant Artist initiative. The bilingual, full-color catalog contains three thoughtful essays on Caycedo’s oeuvre. The first, by curator Acevedo-Yates, discusses the artist’s methodology as “spiritual fieldwork.” She provides a detailed account of Caycedo’s working process, particularly with the series Be Dammed, which drove her to work and live alongside marginalized Indigenous communities whose spiritual philosophies and land stewardship fueled much of her art. The project, importantly, has brought international attention to the political struggles of these key witnesses. The Lucas Museum’s chief curator, Pilar Tompkins Rivas, authored the second essay, which examines the participatory aspects of “geochoreographies,” a name Caycedo uses for public actions and performances in which collectives enact symbolic resistance using their bodies as a medium. 
Tompkins Rivas notes how this repertoire of moving a social body, and bridging the gap between art and life, has its roots in the Brazilian Neo-Concrete Movement, as seen in the work of Hélio Oiticica and Lygia Pape. Caycedo’s Feminist Histories: Artists after 2000 (2019)—a floor-to-ceiling tapestry composed of clothing embroidered with the names of significant artists, also on display in the MCA exhibition—makes clear that the artist chooses to identify with a matriarchal lineage that includes Pape, along with figures like Fanny Sanín, Judith Baca, and Tania Bruguera. The matriarchal genealogy outlined in the tapestry likewise extends to dozens of environmental activists, such as the late Berta Cáceres and Winona LaDuke, whose portraits the artist drew carefully on the surface of a large banner. In a world that still stubbornly rejects the histories of these radical women, Caycedo revels in their unstoppable and heroic actions with the slogan Ni Dios, Ni Patrón, Ni Marido (Neither God, nor boss, nor husband) written on a nearby campaign banner. The showstopper of the MCA’s survey is a video piece titled Apparitions (2018), which unfortunately remains underexplored in the catalog. Commissioned as a collaboration between the Vincent Price Art Museum (formerly directed by Tompkins Rivas) and the Huntington Library and Botanical Gardens, the video installation epitomizes Caycedo’s geochoreography as dancers perform within the lavish halls of the institution to embody the ancestors wronged and forgotten by the Eurocentric impulse of Enlightenment science. Viewers watch their mesmerizing movements in a small projection room, which was consistently filled to capacity during my visit. The visual spectacle is clearly indebted to Caycedo’s collaborator Marina Osthoff Magalhães and her striking choreography. 
Magalhães sets her dancers alight; they roam the grounds of the Huntington as if preparing to exorcise its demons or awaken the river goddess Oshun, a Yoruba deity who survived the Middle Passage. The third essay, by Venezuelan filmmaker David Hernández Palmar, provides a South American peer’s perspective on the social commitments of Caycedo’s art. Hernández Palmar has personally participated in Caycedo’s direct actions and geochoreography trainings in Colombia, where they have mobilized communities for the defense of their territories and rivers. He discusses how Caycedo coordinates public actions to encourage large numbers of protestors to invoke everyday gestures such as the casting of nets in the river. Hernández Palmar describes further how these forms of art and civil disobedience nurture historical memory for these communities and foster ongoing dialogues on political agency. In addition to viewing this important and timely survey on Caycedo, museum visitors through July could also see Acevedo-Yates’s solo exhibition of Puerto Rican artist Omar Velázquez (b. 1984) in the third-floor galleries. Velázquez, a musician and painter who splits his time between the island and Chicago, draws on the tropical landscapes and music histories of Puerto Rico to create enigmatic and surreal large-scale paintings and string instruments. Human faces fill the verdant fields of his mountainscapes. Birds balance precariously over plein air still lifes. Velázquez plays with intensity and hue so much that at times it is difficult to discern foreground from background. But although the colors are warm and inviting, a sense of anxiety pervades the work, with small reminders of our recent troubles in a discarded mask or Lysol bottle. With these two concurrent shows, the MCA (hopefully) renewed its commitment to develop exhibitions that address the ongoing effects of the Anthropocene and that center the voices of BIPOC artists who remind us that another future is possible.
Assistant Professor of Art History, Department of Art, Art History, and Design, University of Notre Dame
Testosterone is the hormone associated with masculinity, just as estrogen is with femininity. But did you know that even the male body produces estrogen, and that it is an equally important hormone that needs to be balanced for good health? Although known as a female hormone, estrogen is required in smaller quantities in men as well, for a variety of functions. There are many reasons why your estrogen levels may spike, leading to a hormonal imbalance that can result in serious health problems such as obesity, erectile dysfunction, and even infertility. There are many ways to get your hormones back in balance, and your diet plays a major role. Certain foods you can include in your diet help lower your estrogen levels, while other foods raise them, so you need to avoid the latter. Read on to learn about the importance of balancing your estrogen, the foods you should eat, and the foods you should avoid to lower your estrogen levels.

Estrogen Dominance in Men

Estrogen and testosterone are two hormones that are naturally produced in both men and women. These hormones are chemical messengers of the body, and they play a significant role in fertility, libido, mood, health, and several other functions. Teenage boys and young men in their twenties have high testosterone and low estrogen levels. But with aging, testosterone decreases and estrogen increases. This results in unpleasant symptoms like gynecomastia (enlarged breasts), reduced muscle mass, fatigue, water retention, bloating, erectile dysfunction, and low libido. Excessive estrogen levels can also lead to an increase in body fat or obesity, contributing to high lipids and diabetes. Prostate and urinary problems experienced by middle-aged men are also associated with estrogen dominance. There are several factors that lead to estrogen dominance and hormonal imbalance in men.
Such factors include aging, excessive consumption of alcohol, obesity, and exposure to estrogen from external sources. High estrogen can also be the result of medications you are taking. A rise in estrogen levels can also be due to unknown factors, and it starts to affect various functions of the body, especially reproductive health. To avoid such health issues, you should learn to balance your estrogen levels.

15 Foods For Lowering Estrogen in Men

Making some changes in your diet is a great way to manage your estrogen level. Specifically, there are certain foods that can help reduce the estrogen level in your body and some foods that raise it. First let's discuss some estrogen-blocking foods that you can add to your diet, and then the ones you need to avoid in order to lower your estrogen levels and restore hormonal balance. Whole foods made of soy contain a high amount of plant estrogen known as phytoestrogen. These phytoestrogens mimic estrogen and crowd out your body's own estrogens. Thus, by consuming phytoestrogens, you can reduce the estrogen levels in your body. Another class of compounds in soy, isoflavones, has anti-estrogen properties. The isoflavones present in soy-based products bind to estrogen receptors, weakening the estrogen. According to a scientific study conducted in 2006, plant estrogens found in soy can block the effects of estrogen and also provide a protective effect against cancer. Lower estrogen levels are associated with a reduced risk of prostate cancer in men. A one-cup serving of soy-based food per day is enough to experience its health benefits. Go for unprocessed edamame, soybeans, or unsweetened soy milk. You can add soy milk to your soups or cook your vegetables in it. Summary: Phytoestrogens present in soy help remove excess estrogen from your body. Soy also contains isoflavones that block estrogen activity. Adding soy to your daily diet can also reduce the risk of prostate cancer in men.
Cruciferous vegetables, such as cauliflower, arugula, cabbage, bok choy, radish, and Brussels sprouts, contain high amounts of phytochemicals that help block estrogen in your body. Indole-3-carbinol is a compound present in large amounts in these cruciferous vegetables, which helps in the conversion of stronger estrogen to a less active and weaker version. It supports estrogen metabolism and prevents estrogen buildup in your body. It also reduces free radical formation. Cruciferous vegetables also contain bioactive compounds called isoflavones, which bind to estrogen, reducing its effect. Scientific studies have also revealed that these isoflavones prevent your body from converting testosterone to estrogen. These studies state that cruciferous vegetables reduce the risk of prostate cancer in men. Aim to have at least two bowls of such cruciferous vegetables every day. In some of these vegetables, the benefits of the phytochemicals are greater when cooked, and in others, when eaten raw. Make sure you add both types of cruciferous vegetables to your diet. Summary: Phytochemicals and isoflavones present in cruciferous vegetables are estrogen-blocking agents. A compound called indole-3-carbinol, found in high amounts in these vegetables, makes estrogen less active. Flax seeds are filled with micronutrients called polyphenols that are known to reduce estrogen levels. Flax seeds are rich sources of lignans, a type of polyphenol found in plants. These lignans act as phytoestrogens that bind with estrogen to reduce its effect and sometimes even block the estrogen. According to Oregon State University, flax seeds contain a high amount of polyphenols, which results in reduced estrogen in the bloodstream. A scientific study conducted in 2004 stated that flax seeds contain phytoestrogens, which alter the metabolism of estrogen, thus helping in the prevention of chronic diseases.
Though flax seeds reduce estrogen in some people, the composition of phytoestrogen is similar to that of estrogen, so it may sometimes mimic the symptoms of estrogen predominance. If you experience such symptoms, consult your doctor or dietician. Flax seeds are easily accessible and can be added to your diet. Add them to your everyday cooking to reduce estrogen levels. Sprinkle two tablespoons of ground flaxseed on your salad or add them to your favorite smoothies.

Summary: Polyphenols present in flax seeds reduce the estrogen concentration in your blood. Lignans present in these seeds block estrogens and mitigate their effects. Flax seeds also contain phytoestrogens that not only lower estrogen levels but also help prevent chronic diseases.

Phytochemicals present in mushrooms are known for their medicinal effects. These phytochemicals block the enzyme aromatase, which converts androgens into estrogen, from producing estrogen. By including mushrooms in your diet, you can prevent the production of new estrogen in your body. The anti-estrogen property of mushrooms can also reduce the risk of cancer. There are different kinds of mushrooms available, such as white button, baby button, shiitake, portobello, and crimini, which can help block the production of estrogen in your body. Make sure you select organic mushrooms for your diet. Mushrooms also contain a good amount of vitamin D. A healthy salad can be made with raw mushrooms and other veggies; add your favorite sauce to make this healthy snack delicious. A daily serving the size of your thumb is enough to experience its health benefits.

Summary: Mushrooms contain phytochemicals that block the enzyme aromatase. This prevents the production of estrogen, leading to reduced estrogen levels, and also reduces the risk of cancer.

Another food that helps lower estrogen is red grapes.
The skin of the red grape contains a chemical known as resveratrol, while the seeds contain proanthocyanidins. Both of these chemicals block the production of estrogen. To get all their benefits, you should eat red grapes with the skin and seeds; choosing seedless grapes will not help much with your estrogen issues. A study conducted by Northwestern University revealed that the health benefits of red wine are associated with this estrogen-blocking property of red grapes. It confirms that the resveratrol present abundantly in the skin of red grapes helps reduce estrogen and also reduces the occurrence of heart disease. Add red grapes to your diet; they are easy to clean and consume. Eat them alone or add them to a salad with other vegetables to make it delicious. Also, try to choose organic fruits free from added chemicals.

Summary: The skin of red grapes contains resveratrol and the seeds contain proanthocyanidins, both known for their estrogen-blocking properties. For better results, eat grapes with the skin and seeds.

Fiber is the main ingredient of all plant-based foods, and it carries some major health benefits. Foods rich in fiber, such as unprocessed whole grains and fruits, reduce the concentration of serum estrogen in your body. It is recommended to have 10 grams of fiber per day, and this can be achieved by eating fibrous vegetables and fruits. So make sure to add fiber-containing foods such as oats, wheat, corn, rice, rye, millet, avocados, and berries to your daily diet. Adding more fiber to your diet will not only lower your estrogen levels but also greatly benefit your overall health. A high-fiber diet is associated with various health benefits, including reduced risks of heart disease, cancer, and diabetes.

Summary: Fiber-filled foods can reduce the serum estrogen levels in your body. It is recommended to have 10 grams of fiber daily.

Green tea is known for offering many health benefits.
It has antioxidant properties. Green tea is an abundant source of polyphenols, which inhibit aromatase, the enzyme that converts androgens into estrogen. By inhibiting the formation of estrogen, green tea reduces the estrogen level in your body and helps you overcome the symptoms of estrogen dominance. A scientific study suggests that intake of green tea modifies estrogen metabolism, significantly reduces estrogen concentration, and thus reduces the risk of cancer. Two to three cups of green tea every day are enough to reap most of its health benefits.

Summary: Polyphenols present in green tea inhibit the aromatase enzyme, which reduces the formation of estrogen. Having green tea every day can help you lower estrogen levels and brings other health benefits as well.

Pomegranates have impressive anti-inflammatory effects and are a great source of nutrition. When it comes to estrogen, it is believed that pomegranates change the way your body responds to estrogen. Polyphenols present in pomegranates block the activity of the enzyme aromatase, which helps in the synthesis of estrogen; by blocking this enzyme, pomegranates reduce the estrogen level. During a cancer study, scientists found that pomegranates exhibit anti-aromatase activity. It is also believed that pomegranates greatly help in reducing the risk of cancer. Pomegranates also provide fiber.

Summary: The nutritious pomegranate contains polyphenols that block estrogen production. Its anti-aromatase activity may help reduce the risk of cancer.

Curcumin is the main chemical present in turmeric. It is well known for its medicinal effects and is used abundantly by Ayurvedic practitioners. Turmeric is used in the Indian Ayurvedic healing system for its anti-inflammatory and antioxidant properties. A study conducted in 2013 revealed that curcumin can affect estrogen levels.
Mainly, the anti-inflammatory effect of turmeric improves the metabolism of estrogen in your body. Turmeric also supports the liver, thus helping remove excess estrogen from your body. This detoxification ability of turmeric helps balance your hormones. Turmeric is an Ayurvedic spice that can easily be added to any of your dishes. Drink warm milk with a pinch of turmeric added to support your overall health. Always look for organic, non-irradiated, and USDA-certified turmeric to avoid toxic adulterants.

Summary: Turmeric reduces the estrogen level by increasing its metabolism. It strengthens your liver, helping it remove excess estrogen and other toxins from your body.

Kale is the king of leafy vegetables and is known for its superpowers when it comes to health. It is considered one of the healthiest foods on the planet, as it has many beneficial compounds that boost your health. What interests us here, though, is that kale is quite high in phytochemicals that reduce estrogen levels. This anti-estrogen property also helps reduce the risk of cancer, as phytochemicals may prevent cancer cells from multiplying. Kale belongs to the cabbage family. There are different varieties of kale, such as those with green or purple leaves and those with smooth edges or curly shapes. Kale is a very low-calorie food that is densely packed with nutrients. It is rich in nutrients and antioxidants such as omega-3 fatty acids, vitamin C, and beta-carotene, along with various flavonoids and polyphenols.

Summary: Kale is a leafy vegetable that's rich in phytochemicals, which help reduce estrogen levels. It also offers plenty of omega-3 fatty acids, vitamin C, and beta-carotene.

5 Foods to Avoid to Lower Estrogen Levels

The 10 foods mentioned above should be included in your diet to reduce your estrogen levels. Given below are certain foods that may increase your estrogen levels and are best avoided while on an anti-estrogen diet.
Alcohol consumption does your body no good and results in many health consequences. Drinking alcohol raises the estrogen concentration in your blood, which also increases the risk of cancer. Researchers have claimed that chronic alcoholism leads to hypogonadism, infertility, and testicular atrophy, which are all signs of increased estrogen in your body. If you are experiencing any such symptoms, you should stop or reduce your alcohol intake and seek medical help.

Summary: Alcohol is known for many health consequences; it may cause hypogonadism and infertility by increasing estrogen levels.

Dairy and Meat

All animal products tend to have some amount of estrogen in them. Many antibiotics and hormones are used by the dairy and meat industry; female animals are given high amounts of estrogen so that they produce more milk. Eating these products can increase estrogen levels in your body. A 2013 study revealed that estrogen levels are higher in meat-eaters, who also had an increased risk of cancer. If you need to reduce your estrogen levels, switch from cow's milk to soy milk or other healthy sources of protein. Be careful about the meat you buy, and check labels carefully for added preservatives.

Summary: The dairy and meat industry today uses many antibiotics and hormones that can lead to estrogen dominance in your body.

Legumes such as Chickpeas

Legumes like chickpeas, red beans, and green peas are otherwise considered healthy but contain more estrogen than you might think. Hummus made of chickpeas can worsen your estrogenic symptoms. Reduce the consumption of these legumes to balance your estrogen levels. A study conducted to evaluate the effects of legumes on estrogen activity showed that several legumes are associated with high levels of estrogen activity.

Summary: Though legumes are a healthy addition to your diet, certain types of legumes may increase your estrogen activity.
It is better to avoid them if you are suffering from symptoms of estrogen dominance.

Energy Drinks and Caffeine

When you drink excessive caffeine or sugary drinks, your body cannot focus on eliminating excess estrogen. These drinks exhaust your adrenal glands, leading to increased estrogen levels that accumulate in your body. Replace them with herbal teas or healthy smoothies, which can support your liver's detoxification process. Using natural health products is key to overcoming estrogen dominance in your body. Instead of using added sugar, use natural sweeteners such as honey, dates, or maple syrup, but in small quantities.

Summary: Excessive caffeine may reduce the elimination of estrogen from your body, resulting in an increased concentration of estrogen.

A fungal toxin known as zearalenone may be present in some grains, such as wheat, maize, and rice, and it can promote estrogen production. A 2014 study from Brazil noted that more than 32% of the 5,000 cereal samples tested were contaminated with it. If you have estrogen dominance, it may be better to limit your grain intake, as it is difficult to ensure that a product is zearalenone-free.

Summary: Zearalenone is a fungal toxin present in grains such as rice, wheat, and maize. It encourages the production of estrogen in your body, so it's best to limit your consumption of grains if you are estrogen dominant.

Other Lifestyle Factors That Help Balance Estrogen Levels

It is believed that with proper nutrition and a healthy lifestyle, you can help lower your estrogen levels. Your diet, liver health, inflammation, and environmental factors all play a crucial role in your estrogen level. A few lifestyle changes and the right diet can help you balance your estrogen levels.

- Consume clean, healthy food and enough water to support your liver health. The liver helps remove toxins from your body and is essential for preventing estrogen dominance.
- Stress alters your body's metabolism, resulting in inflammation, obesity, and estrogen dominance. Reduce stress in your life by practicing meditation and mindfulness.
- Many chemicals present in processed food mimic estrogen, resulting in symptoms of estrogen dominance. These chemicals cause toxicity and can also be carcinogenic. Try to avoid them and opt for natural, organic food.
- Sleep well; ideally, 8 hours of undisturbed sleep every night is recommended.
- Make lifestyle changes like exercising or working out to get rid of extra fat. Exercising to remove excess body fat helps reduce estrogen levels, as estrogen is produced not just by the adrenal glands but by your fat cells as well.

The Final Note

Some people think that estrogen is a female hormone, but estrogen plays an essential role in men too. Generally, women have higher levels of estrogen and men have higher levels of testosterone, but for your body to work smoothly, you need a balance between these hormones. They help in the development of sexual functions and reproductive organs, but too much estrogen causes many side effects: enlarged breasts, infertility, and erectile dysfunction are the main symptoms. If you are experiencing any changes due to this hormonal imbalance, do not fear; a few simple lifestyle changes and the right diet can help you feel your best again. Consuming foods that block estrogen, along with healthy habits like exercise and de-stressing practices, can greatly help you reduce your estrogen levels. By following the anti-estrogen diet mentioned above, you can easily overcome the symptoms of estrogen dominance in your body. These foods can be included in a low-fat, high-fiber diet that also helps you reduce weight and improve your overall health.
Have you ever wondered why you do the things you do? What drives you to pursue a certain career path? Why do you invest in relationships? What motivates you to achieve mastery in your field of work or study? The answer is: needs. Needs are the driving force behind our every action and decision. Whether we're talking about our personal, social, or professional life, there's always a core psychological need that prompts us to take action and achieve the life we believe will bring us happiness and fulfillment. But did you know there's a psychological theory that seeks to explain human motivation and the quest for happiness by looking at our core needs? And we're not talking about basic needs (food, water, shelter); we're talking about the psychological needs that shape our personality and decisions. When I first read about Self-Determination Theory, I felt like I had discovered a lost gem of psychology. In my opinion, this theory offers a simple and elegant blueprint for authentic happiness.

What You Will Learn
- What is a Need?
- Needs Are the Pathway to Authentic Happiness
- Self-determination Theory and Psychological Needs
- Ways to Fulfill Your Need for Competence
- Ways to Fulfill Your Need for Connection
- Ways to Fulfill Your Need for Autonomy

What is a Need?

In essence, a need refers to something that is required or wanted. In a way, a "need" is one of those concepts that we're all familiar with even though it's difficult to put into words. It's something we all share and know on a personal level, a universal human feature that defines our existence and purpose. Needs are behind every goal we set, every decision we make, and every action we take.
We invest in our skills because we need to feel competent, we hang out with people because we need to feel connected, and we move out of our parents' home because we crave autonomy and freedom. Psychologists believe our psychological needs hold the key to emotional well-being, life satisfaction, and success. Many of the emotional difficulties we struggle with have something to do with unfulfilled needs. But self-determination theory doesn't focus on the effects of unfulfilled needs; rather, it focuses on the amazing potential we can reach once we dedicate our lives to fulfilling our core psychological needs. For example, one paper suggests that the fulfillment of basic psychological needs (competence, connection, and autonomy) can improve students' subjective well-being. So, what exactly happens when you decide to pursue your needs? How will your life change once you give up on chasing other people's dreams and prioritize your needs above all else?

Needs Are the Pathway to Authentic Happiness

For many of us, happiness and life satisfaction are the ultimate goals. But each person has his or her own definition of happiness. Each of us knows exactly what a satisfying life should look like. Some strive for professional success, while others are looking for the comfort of a healthy family. Some wish to be the visionaries of their time, others want to be the best parents, and some want to achieve both. The point is, happiness comes in many shapes and sizes, but according to self-determination theory, there's one guaranteed way to achieve it: by fulfilling three fundamental psychological needs. But fulfilling these needs isn't a one-time job; it's a lifelong journey. In other words, our core psychological needs are the driving force behind every project, relationship, or goal that we choose to pursue. And once the fulfillment of these goals satisfies our core needs, we experience true happiness.
The fact that we can be in control of our lives and pursue whichever goals we believe are right for us places happiness into our own hands – and that's empowering! Long story short, needs are the pathway to authentic happiness because they're powerful enough to inspire and motivate us, but flexible enough to allow us to find a personal version of happiness.

Self-determination Theory and Psychological Needs

Self-determination theory revolves around three fundamental needs: competence, connection, and autonomy. According to its founders, Richard Ryan and Edward Deci, human beings achieve their true potential when they fulfill these three fundamental needs. In other words, the need for competence, connection, and autonomy motivates us to change, adapt, and grow. Even though we are often motivated by external factors (money, status, prizes), our universal desire for growth is what inspires us from within. And growth can only be achieved through our core psychological needs. What started as a theory of human motivation ended up becoming a recipe for authentic fulfillment and happiness. But as you can probably imagine, becoming self-determined takes work and a shift in mindset. People who are high in self-determination believe they are in control of their actions and decisions, which makes them proactive. In other words, they take risks, own their mistakes, and are confident in their ability to create the future they envision. They know that failure is part of growth, and they don't allow it to put an end to their journey toward personal and professional success. From an organizational perspective, research suggests that self-determination theory provides a framework for promoting autonomous motivation, performance, and wellness. In short, self-determination gives you control over your life and puts you in charge of finding authentic happiness.
Let's take a closer look at the fundamental needs we should pursue to become self-determined individuals:

The need for competence refers to our abilities and skillset. Each of us strives to gain mastery in a given field of work or study; to become good at something and deliver actual results. And what happens when we invest in our skillset? We gain the confidence we need to put our skills to good use and achieve our goals. We become competent and motivated to pursue happiness and create the life we've always dreamed of having. In short, competence provides the tools we need to achieve personal and professional growth.

We know for a fact that humans are social creatures that thrive in groups. One of the reasons why we climbed to the top of the food chain is that we were smart enough to collaborate and evolve as a group. As a result, each of us experiences a profound need for connection. We all wish to form attachments and experience that pleasant sense of belonging. Whether we're talking about friendships, romantic relationships, or business partnerships, every bond we forge with another human is motivated by our need for connection.

The need for autonomy reflects our desire for freedom. The kind of freedom that makes us feel in control of our actions, decisions, and behaviors. Knowing that you have control over who you are and who you want to become is a powerful feeling that cultivates optimism and motivates us to pursue our goals. When you feel like you have autonomy over your happiness and well-being, you gain a sense of clarity. In other words, you know exactly which path will take you to a happier life. Long story short, each of us has an innate desire (or need) to be free and explore all sorts of possibilities. It's part of the reason why we've grown and developed as a society.

Ways to Fulfill Your Need for Competence

Invest in your skills

Investing in your skills is one of the fundamental ways in which you can satisfy your need for competence.
By taking the time to sharpen your skills or develop new ones, you make the first step toward becoming a competent individual. And I'm not just talking about work or school. The idea is to get to a point where you feel competent at something. For example, you can satisfy your need for competence by being a good cook for your family, even though you're not a chef. But before you can prove your competence, you must be patient and determined enough to do the hard work: to read, study, exercise, and train. Just because you've sharpened your skills doesn't mean you'll suddenly feel competent. As I said earlier, competence is built on both theory and practice. But putting your skills to use involves a certain amount of risk, and that can generate anxiety. It's one of the reasons why you might refuse an opportunity even though you might be competent enough to handle it. The only way to know – and succeed in satisfying your need for competence – is by putting your skills to the test.

Learn how to handle failure

So, what happens when you test your skills and realize you might not be competent enough? I think knowing how to handle failure is just as important as having the courage to take risks. Just because you failed at something you thought you were good at doesn't mean you can label yourself as incompetent. Keep in mind that fulfilling your need for competence is about learning, trying, failing, and repeating this cycle until you succeed. And that's when you'll experience authentic happiness.

Ways to Fulfill Your Need for Connection

Empathy is one of the foundations of authentic human connections and healthy relationships. This ability helps you understand what the person in front of you is going through, thus allowing you to come up with an appropriate reaction. When it comes to fulfilling your need for connection, you need empathy if you wish to forge meaningful interactions with the people around you.
Next time you have a conversation with a friend or family member, try to look beyond words and discover the emotion that the other person is looking to share with you.

Be a good listener

Being a good listener means being an empathic listener. In other words, you listen because you wish to understand, not just to have something to say when it's your turn to speak. If you offer an empathetic ear or a shoulder to cry on, others will trust you enough to open up and invite you into their worlds. And that's the moment when you can establish a real connection with them. There are many ways in which we can fulfill our need for connection, from strengthening the relationships you already enjoy to cultivating new ones. Curiosity plays a significant role when it comes to expanding your social circles and satisfying your need for connection. It's what prompts you to engage in conversations when you're at a party. Furthermore, curiosity motivates you to ask the right questions, not because you're looking for something specific, but because you're interested in knowing the person in front of you.

Ways to Fulfill Your Need for Autonomy

Do what you're passionate about

As I said before, autonomy means freedom: the freedom to do whatever you're passionate about (as long as it doesn't negatively impact others!). The simplest way to fulfill your need for autonomy is by pursuing a job, hobby, or activity that you're genuinely interested in. When you invest your time and energy in something that you're passionate about, good results will soon follow. That will give you a sense of confidence and control that motivates you to do more and become more.

Don't be afraid to explore

Once again, curiosity proves to be a valuable tool when it comes to pursuing our core psychological needs and, ultimately, a happier life. By exploring new opportunities – both in your personal and professional life – you test your boundaries and limits.
In other words, you understand yourself better by understanding what is within your control and what's not.

Make yourself a priority

Lastly, always remember to make yourself a priority, and not just for the sake of satisfying your need for autonomy. Putting yourself first helps you prioritize your needs and goals above everything else. Only when you have fulfilled your core needs will you be mentally and emotionally strong enough to help others. Autonomy gives you the power to shape your future and discover your version of a happy life. Long story short, self-determination theory offers an interesting perspective on human motivation and personal development. It's an elegant system that revolves around our three fundamental needs: competence, connection, and autonomy. If you focus your personal and professional endeavors on fulfilling these needs, you can make happiness last for a lifetime. For more tips on how to cultivate a happier life, check out Happier Human: 53 Science-Backed Habits to Increase Your Happiness. Remember, even though we all share three core psychological needs, each of us has our own way of pursuing them. That means there's a different path to happiness for every person on this planet. Have you found yours?

Alexander Draghici is a licensed Clinical Psychologist, CBT practitioner, and content writer for various mental health websites. His work focuses mainly on strategies designed to help people manage and prevent two of the most common emotional problems: anxiety and depression.
Greening Deserts visited the climate camp Klimacamp Leipziger Land in Pödelwitz and exchanged ideas with many awesome people about climate change, coal, the environment, environmental protection, conservation, nature, system change, and a lot of alternatives and solutions for a fast and efficient coal exit. It's not just possible but urgent to save the environment, animals, humans, plants, and all life forms from extinction, especially in the affected regions. Coal mining, coal burning, and coal-fired power generation cause massive contamination of air, soils, and water. The air pollution travels around the world and, like global warming, affects all humans and nations. Poor people and communities suffer the most, because the coal industries destroy their environments as well, even if they are on the other side of the world. This is not just unfair; it is against all ethical and moral principles. It is also a crime against humanity and violates many Fundamental Rights and Human Rights! Where is the justice and the true rule of law? Another massive problem with coal is radiation. The Swiss environmental network and BUND Germany published important articles and scientific reports about this issue: "Coal mining produces radioactive excavated material, mine water and radioactive particulate matter that is released into the environment. Coal transport with uncovered railway wagons also contributes to further distribution. If the coal is burned, the radioactive substances with the ashes get into the environment. Although filters in large-scale plants reduce the amount of radioactive ash by 99.5 percent, certain radioactive isotopes – for example, radon, lead and polonium – are still released into the environment. They become gaseous during combustion and can therefore hardly be removed from the exhaust air. The filter dusts must be safely stored as highly hazardous waste; for example, in a repository.
This is because when burning coal, the radioactive substances accumulate in the ashes: if the burned coal contains an ash content of five percent, then at the end of the combustion the concentration of radioactive substances is massively increased." Thorium and uranium are other radioactive waste products brought up by coal mining and concentrated by coal burning; the radioactive contamination is immense. But there is not just bad news. We have seen many good developments in so many fields. Two important things we discussed: first, all the good alternatives and innovative solutions should be brought together (assembled or compiled) in one overview, so that everyone can understand them and work with them better. Second, Greening Deserts reworked the concept for the greening and research camps; it would be possible to establish a permanent climate camp in each bigger mining landscape. It would be a great platform for climate researchers and also for other scientists. The coal branch could then see and understand that the potential of a fast and effective coal exit is enormous. More and better-paid jobs could be created, and the profits would be many times higher. It is really complete nonsense to keep on with coal mining, and not just for the reasons stated here. Keep the coal in the ground and make peace with yourself and humanity. In a free discussion round we exchanged good ideas for the panel with some kids. One idea was to establish conservation, climate, and environmental protection as a fixed subject in schools, at least once a week. Another is to restore and recultivate old natural German landscapes which were very important for the water cycle and for balancing the climate, environment, and natural processes. We want to restore, for example, old wetlands like those destroyed by coal mining companies or other responsible parties in the region of Leipzig.
The region of Leipzig was once a landscape of moorland and alluvial or floodplain forest; now it is mostly dryland or artificial lakeland. We need to create many more natural habitats and reserves there, with more ancient plants and trees like bald cypresses and pin oaks. You must know that wetlands are and were important for carbon capture and storage, a process known as carbon sequestration, holding up to 50 times as much carbon by area as rainforests! https://www.theguardian.com/environment/2017/feb/03/scientists-hope-wetland-carbon-storage-experiment-is-everyones-cup-of-tea There is much more serious scientific evidence that the air pollution caused by coal burning and coal mining kills humans, not just near coal-fired power plants or in coal power nations like China, the USA, Russia, Poland, and Germany; all other nations are affected as well. The killing of people caused by coal combustion, gasification, and mining is not better than any other genocide, even if it is passive and happens over a long time. After the climate camp in the lignite mining region in Saxony near Leipzig, the Klimacamp in the Rhineland started today and will run until the 22nd of August. The Climate Games Basel in Switzerland are still running. The climate camps are serious events with very diverse programs. They are not just about climate change and global warming, but also about cultural, economic, ecological, educational, social, scientific, and many more important issues! Take a look at the websites for more details. Don't ignore or misunderstand these events and movements. They inform about and share important climate and environmental themes, especially in relation to conservation, environmental protection, human-made climate change, and pollution. The main goal is to stop or block the coal mining, coal burning, and coal-fired power generation which is responsible for so many deaths and the massive destruction of our environment and nature.
Support all the great movements and organisations working for a fast and effective coal exit! Clean air and a healthy environment are Human Rights, too. People, wake up, finally – especially those responsible! We need to establish environmental awareness and sustainability in so many fields and areas. It is never too late to do so. There is a good Chinese proverb: “The best time to plant a tree was 20 years ago. The second best time is now.” Reduce and stop the worldwide ecocide and genocide by environmental pollution! Houston, we have a problem! Human-made climate change, global warming and air pollution (currently 9 million deaths a year) are killing millions of humans yearly. Around a million die from coal plant emissions (coal burning and coal mining) and the radiation of coal-fired plants and coal mines. Is this not mass murder or genocide? An interesting question. Human Rights organisations and international lawyers (bodies, courts, institutes and universities) for environmental rights, climate justice, business, health and Human Rights are now analysing and monitoring those responsible (key persons and companies). Special anti-corruption divisions have been informed and criminal investigations will follow. Big thanks to the law schools of Harvard and Washington University and all the other universities with Human Rights departments that are working on these issues, too. To all the politicians and responsible parties, like the Coal Commission or Coal Exit Commission: finally start to act, work transparently and present your solutions for replacing dirty coal quickly and efficiently – as fast as possible, before more humans die from the air pollution! You are all responsible, too.
Current members of the German Coal Commission:
Commission leaders – Stanislaw Tillich (CDU, former state premier of lignite mining state Saxony), Matthias Platzeck (SPD, former state premier of lignite mining state Brandenburg), Barbara Praetorius (climate economist, former deputy director at Agora Energiewende) and Ronald Pofalla (CDU, former Chief of the Chancellery, now board member at Deutsche Bahn)
Representatives of 8 federal ministries: economy & energy (BMWi, also hosts the commission's secretariat), environment (BMU), internal affairs (BMI, includes the department for construction), labour (BMAS), transport (BMVI), finances (BMF), agriculture (BMEL) and education & research (BMBF)
Representatives of 6 federal states: North Rhine-Westphalia (NRW), Saxony, Brandenburg, Saxony-Anhalt, Lower Saxony and Saarland
Three members of parliament (without voting rights): Andreas Lämmel (CDU), Andreas Lenz (CSU) and Matthias Miersch (SPD)
Source: Clean Energy Wire
If you look at all the pictures of the mines and open heaps (stockpiles), you may wonder why those responsible don't cover them. For years, the coal industry has released tons of toxic and radioactive coal particulates, toxic substances and pollutants into the environment (air, soil and the water cycle). Some of the hazardous substances are arsenic, lead, mercury, cadmium, chromium, selenium, aluminium, antimony, barium, beryllium, boron, chlorine, cobalt, manganese, molybdenum, nickel, thallium, vanadium and zinc. We demand immediate coverage of the tailings with tarpaulins; the coal transport (assembly lines, dump trucks, transporters, trains, etc.) must also be covered – because even the storage and transport are dangerous, should be treated as dangerous goods like nuclear waste, and must be legally regulated! It is really no great effort: tarpaulins or durable foils do not cost much – it could be done in a few days.
This would at least temporarily hold back a lot of the fine dust that is whirled up, especially by strong winds in spring and autumn. Politicians and business leaders need to respond to this issue as quickly as possible and act accordingly, especially to avoid further illnesses, deaths and negative long-term effects (cancer and other serious diseases). All opencast mining regions will continue to be extensively scanned and recorded by satellites. It would be good if DLR, ESA and NASA finally made the complete scientific data (especially with regard to air pollution and pollutants) available to researchers and the public. We have been calling for open access to such important satellite data for years. Over the last years the Greening Deserts founder has reported issues like those explained here, and in all the articles and pages, to the responsible authorities and institutions many times. The future greening camps and research camps will be set up outside of contaminated areas, as in the opencast mining region of Leipzig – maybe near the lakes in the post-mining area, close to the other open-pit mines. We will also do research on the detection and neutralisation of radioactivity and radioactive particles. Together with nuclear experts and scientists from nations like America, Canada, China, India, France, Japan, Korea, Ukraine and Russia we can make it happen. With innovative methods and techniques in this area, all nuclear waste could be neutralised in future. We strongly reject the current insecure use of nuclear energy and nuclear weapons, but there is nothing wrong with the safe use of nuclear power in certain areas (research, medicine, space, etc.). It is similar with 'clean coal' technology: if it should be ready in 15-20 years, new, really clean power plants could be built, but without open-pit mining and the consequences of environmental degradation and destruction. Ever thought about underground drone mining?
All truly sustainable and clean technologies in these areas need to be developed, and until then humanity should focus fully on renewable and clean technologies (cleantech) and sustainable energy and resources (renewables).
Nigeria, the most populous African nation, is a multi-religious, multi-ethnic and multi-cultural society, where adherents of Islam, Christianity and African Traditional Religion co-exist and are guaranteed the legal protection to practice and manifest the teachings of their respective religions. It is in recognition of this fact that the Nigerian Government, through the Federal Ministry of Education, approved the study of Islamic studies and Christian studies for Muslim and Christian learners respectively, in both public and private basic and secondary schools in Nigeria. Given the presence of Muslim students of diverse backgrounds in both Nigerian public and private schools, and the imperative need for Nigerian educators to cope with issues of diversity, the idea of preparing a guide for educators on some Islamic practices that affect daily school life was conceived. The objective of this guide is to proffer practical steps to educators on how they can accommodate the religious needs of Muslim students in daily school life. This guide, therefore, aims to provide principals, head-teachers and other teaching and non-teaching staff with information on basic and religiously-mandated practices of Muslim students, in five core aspects, that should, at a minimum, be accommodated. They include: food and drink consumption, modesty & gender issues, curriculum & textbooks, acts of worship and Muslim holidays. The guide has as its primary target public and private basic and secondary schools in Nigeria, excluding private conventional Islamic schools (PCIS), otherwise referred to as “Muslim or Islamic schools”, where Islamic education (teaching & practice) is a top priority.
A GLIMPSE OF THE AFRICAN HERITAGE
The African Heritage, as described by Prof.
Ali Mazrui, comprises the three main influences that shaped the African view, namely: the spiritual and cultural influence of Islam spreading from the east, the colonial and imperialist legacy of the West, and Africa's own indigenous legacy. These three legacies have come to be known as the Triple African Heritage in African history, discourse on African civilization and African studies. The concept has found its way into other fields of study. So, in religion, it is represented by the idea of Religious Pluralism or Plurality of Religions, i.e. Islam, Christianity and African Traditional Religion. In legal parlance, it is called Legal Pluralism, with Common law, Civil law, Islamic law and African Customary law as components. Legal pluralism defines the legal systems in African countries like Nigeria and Kenya. Although the co-existence of the three systems of law reflects the concept of legal pluralism, the term “African Heritage Laws” is more appropriate to describe and contextualize the legal traditions of the African people. In the field of education, the Triple African Heritage is reflected by the concept of Balanced Education, which is defined as the total aggregate of functional learning in each of the three systems of education, to wit: traditional education, faith-based education (Christian or Islamic) and conventional (Western) education. And finally, in language, the linguistic representation of the concept is connected to Trilingualism or Multilingualism. The former is more suitable, as a sizeable African population can speak three languages: two from among Arabic, English and French, and one indigenous language. The import of the foregoing for educators is to remind ourselves of the peculiarities of African society and the indispensable need to entrench the values of religious tolerance and accommodation, while keeping our collective heritage in mind.
LEGAL PROTECTION OF RELIGIOUS RIGHTS IN EDUCATIONAL INSTITUTIONS IN NIGERIA
The teaching and practice of religion are part of the fundamental right to freedom of thought, conscience and religion, guaranteed under the 1999 Nigerian Constitution (as amended). Section 38 (1) (2) of the Constitution provides: “Every person shall be entitled to freedom of thought, conscience and religion, including freedom to change his religion or belief, and freedom, either alone or in community with others, and in public or in private, to manifest and propagate his religion or belief in worship, teaching, practice and observance.” “No person attending any place of education shall be required to receive religious instruction or to take part in or attend any religious ceremony or observance if such instruction, ceremony or observance relates to a religion other than his own or a religion not approved by his parent or guardian.” (Emphasis Added) The right to receive religious instruction of one's choice, or that approved by one's parent or guardian, also includes the right of educators to encourage and allow Muslim students to observe their religiously-mandated practices, in the same way that Christian students are encouraged and allowed to say the Lord's Prayer (Our Father Who Art in Heaven), the closing prayer in school (Now the Day is Over) and the prayer before meals (Bless this food, O Lord, for Christ's sake). It is pertinent to note that the details of Islamic religious practices are defined only in the context of Islam; hence, such practices do not admit interpretation from just any Muslim, let alone a non-Muslim. Thus, it would be wrong for either a non-Muslim staff member or a Muslim staff member without sound knowledge of pristine Islam to determine what qualifies as virtue or vice in Islam.
FIVE CORE ASPECTS OF ISLAMIC PRACTICES IN DAILY SCHOOL LIFE
1. FOOD AND DRINK CONSUMPTION
Muslims are careful about transgressing the limits set by Allah.
Those limits are His prohibitions. Thus, under Islamic law, the consumption of certain foods and drinks is forbidden (Haram), such as pork, pork by-products or derivatives, and alcohol. In contrast, permissible foods and drinks, known as Halal, are lawful. In addition, certain laid-down procedures are followed in the slaughter and preparation of meat. Other examples of prohibited items include hot dogs containing pork and food ingredients containing alcohol, such as vanilla extract. Muslim parents should ensure that lunch items or food made available for consumption at the school's cafeterias are Halal. The school management can likewise assure Muslim parents that foods and drinks in the school do not contain prohibited items as listed above, in addition to stating the hygiene conditions under which they are prepared.
2. MODESTY & GENDER ISSUES
Islam prescribes that both men and women behave and dress modestly. Muslims believe that an emphasis on modesty encourages society to value individuals for their wisdom, skills and contribution to the community, rather than for physical attractiveness. There are a number of ways in which Muslims express such teachings. Men and boys are always to be covered from the navel to the knee. When in public, Muslim women wear loose-fitting, non-revealing clothing, known as Hijab or Khimar. This attire, which may vary in style, includes the cape or mini Hijab. Following judicial endorsements in contested cases over the use of Hijab in Nigerian educational institutions, at the Court of Appeal (Ilorin and Lagos Divisions) and the High Court of Osun State, the wearing of Hijab in schools has continued to receive Government approval at both federal and state levels. Thus, in Circular No. SAF.27/S.196/II dated 10th December, 2019, the Federal Ministry of Education, through the office of the Director of the Basic and Secondary Education Department, approved the use of mini Hijab for Muslim female students at Federal Unity Colleges.
It is also on record that Ekiti and Lagos States, through Circular Nos. EK/SSG/01/375 and ED/DISTVI/CCST/HI/14/I/63 dated 12th December, 2013 and 13th November, 2018, respectively, had granted approval allowing Muslim female students to wear mini Hijab on school uniforms in both primary and secondary schools of their respective states. In addition to the above, examination bodies in Nigeria such as the Joint Admissions and Matriculation Board (JAMB), the West African Examinations Council (WAEC), the National Business and Technical Examinations Board (NABTEB) and the National Examinations Council (NECO) allow the use of Hijab for registration and examination purposes. The school management may, however, regulate the wearing of multi-coloured Hijabs by introducing a school-made Hijab that matches the colour of the school uniform. In the event that the wearing of Hijab in schools leads to mocking or scornful remarks by non-Muslim students, it is the duty of teachers to prevent them from pulling or removing a Muslim student's Hijab, as this may lead to disputes between the parents of the students concerned, their teachers and the school management, and, if escalated, may give rise to a religious crisis.
Adolescence and Gender Relations
Puberty is a major turning point in the life of a Muslim. For those who have reached puberty, Islam prescribes certain parameters for relations between the genders. For example, many Muslims are reluctant to shake hands with the opposite gender, even with classmates, teachers, or school managers (proprietors, principals and head teachers). This should not be taken as an insult or disrespect but as a sign of personal modesty.
Participation in School Social Programmes
Muslims may raise religious objections to school-organized social programmes, such as cultural dances or end-of-session parties, where students are grouped to participate in choreography, cultural or hip-hop dances which, more often than not, reveal sensitive parts of the female body. Muslim students should not be pressured to participate or penalized for not taking part in such activities, as this offends the Muslim ethic of personal modesty. For physical education activities, it is advised that school management fashion alternative clothing for Muslim students. The alternative clothing may take the form of knee-length shorts for boys and full tracksuits for girls. Muslim students should not be forced to participate in mixed-gender swimming or other sporting exercises, for this violates their religious convictions.
3. CURRICULUM & TEXTBOOKS
Educational Policy, Curriculum & Textbooks
Many Muslims in Nigeria feel their faith has been treated with bias in educational policies, curricula and textbooks. This was the primary motive for the establishment of private conventional Islamic schools (PCIS) by Muslims in Nigeria. Although the availability of more accurate and balanced instructional material is gradually increasing, the continued use of outdated and biased materials paints Islam with an unworthy appellation and Muslims as enemies. Such divisive attribution has, worrisomely, contributed to incidents of harassment against Muslim students by their classmates, teachers and even the management of some schools. For instance, in a number of cases, Muslim students have been mocked and humiliated by their own teachers and bullied by their mates for wearing Hijab. School boards or governing councils should, in the light of the foregoing, review policies and programmes from time to time with a view to eliminating all forms of bias against Muslims.
Textbooks that contribute to religious prejudice are harmful to budding young minds, who are future leaders. Books that lack reliable information should be removed from the list of recommended texts, as they are likely to misrepresent basic Islamic tenets. An example is the definition of “Allah” as a particular Muslim god rather than the only deity worthy of worship. Qualified Muslim educators should participate in the textbook selection process, particularly for texts on social studies and short stories. Sex education material presented in schools is another matter sensitive to Muslim families. In Islam, individuals become religiously responsible for their deeds when they reach puberty. Islam puts great emphasis on modesty, chastity and morality, and there is a specific set of teachings with regard to human development and its related issues. Muslim parents should have the option to remove their children from sex education classes. Textbooks containing pornographic or obscene content should be removed from the list of recommended texts for students.
4. ACTS OF WORSHIP
Islam prescribes for Muslims the performance of five daily obligatory prayers. Two such prayers are observed after the sun reaches its zenith (Zuhr) and approximately two hours after that (‘Asr), and both may fall within regular school hours. Accordingly, Muslim students, in the spirit of religious tolerance and accommodation, should be allowed to observe the Zuhr and ‘Asr prayers. It usually takes less than 15 minutes to accomplish this religious obligation. Before each prayer, Muslims are required to wash their faces, hands and feet with clean water. This washing is normally performed in designated facilities where the students have access to pure, uncontaminated water. A wash-hand basin can serve the purpose to some extent. Since these facilities are presumed to be in place, Muslim students should then be encouraged and allowed to fulfill their religious obligation.
Prayer Space and Its Observance
During the act of worship, which includes specific recitations from the Qur'an, the Muslim will stand, bow and touch their forehead to the ground. Worship may be performed in any quiet, clean room. During the prayer, the worshipper faces the direction of the Qiblah in Makkah. Total privacy for worshippers is not absolutely required; however, others should not walk in front of the worshipper, nor interrupt or distract them, directly or indirectly, by voice. When a Muslim prays, he or she is fully engaged and may not respond to a conversation. Students and teachers should not take offense if the worshipper does not answer their call during the prayer. However, in case of an emergency, the Muslim will respond to an announcement by stopping the prayer immediately.
Organizing Prayers & Islamic Programmes through Extra-curricular Activities
Muslim students, under the watch of Muslim teachers who provide leadership and guidance, can have a Muslim association for the purpose of organizing Islamic programmes. Happily, a good number of public and private schools have the Muslim Students' Society (MSS) as one of their approved clubs and societies. Jum‘ah is the Friday prayer, observed congregationally. It is preceded by a short sermon (Khutbah). It is an obligation that must be fulfilled. Jum‘ah may last for about thirty minutes or more and takes place at the mosque during the midday prayer (i.e. the time of Zuhr). It can, however, be organized within a designated space on the school premises, as is the practice in some public and private schools. The month of Ramadan, the ninth month of the Islamic lunar calendar, is the period when Muslims are obliged to fast. The Ramadan fast is one of the five pillars of Islam (the other pillars are the declaration of faith, the daily prayers, paying obligatory alms and the pilgrimage to Makkah). Observing the Ramadan fast means abstinence from eating, drinking and immoral conduct from dawn to sunset.
The dates of this fast change each year, as the lunar year is about 11 days shorter than the solar year. Ramadan is also a period to empathize with those who are less fortunate and to appreciate what one has. Fasting is prescribed from when children reach the age of puberty; however, many Muslim families allow their younger children to practice fasting. Muslim students observing the Ramadan fast should be allowed to go to the school library instead of the cafeteria during lunch. They should also be excused from strenuous physical activities. The school management may also turn the diversity in the classroom to educational advantage by inviting a Muslim student to explain the practice of the Ramadan fast. This will help the Muslim student avoid a feeling of discomfort about not having lunch with his or her classmates during the month. In the same manner, a non-Muslim student will appreciate the reason for the ‘sudden’ abstinence from eating and drinking by his classmates. By creating such an avenue, the school helps to support parents and communities in their efforts to teach beneficial values. Such an approach is also an important preparation for students on how to deal with diversity.
5. MUSLIM HOLIDAYS
There are several days in the Muslim calendar with special religious significance, but the major celebrations common to all Muslims are the two Eid days (festive holidays). The first Eid day is celebrated on the day after the month of Ramadan (the month of fasting). The second is celebrated on the tenth day of Dhul Hijjah, the twelfth Islamic month. The festivities include congregational prayer, gatherings with family, friends and well-wishers, gifts, and mild entertainment, especially for children and for women (among themselves). Celebrating Eid in Nigeria requires that Muslims take at least two days off from school, as each Eid day is declared a public holiday in the country by the Federal Government.
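Because the Islamic lunar year is about 11 days shorter than the solar year, as noted above, Ramadan and the Eid holidays fall roughly 11 days earlier on the Gregorian calendar each year. The drift can be sketched with simple date arithmetic; this is only an illustration of the approximate shift (the 354-day figure is a rough average), since actual dates are fixed by the lunar calendar and moon sighting, not by this calculation.

```python
from datetime import date, timedelta

# A lunar year is ~354 days, about 11 days shorter than the 365-day solar year.
LUNAR_YEAR_DAYS = 354

def approximate_next_eids(first_eid: date, years: int) -> list:
    """Roughly project future Eid dates by adding one lunar year at a time.

    Illustrative only: real dates depend on moon sighting and official
    announcements, and the lunar year alternates between 354 and 355 days.
    """
    dates = [first_eid]
    for _ in range(years):
        dates.append(dates[-1] + timedelta(days=LUNAR_YEAR_DAYS))
    return dates

# For example, an Eid falling in early May one year lands in late April
# of the following Gregorian year.
```

This is why school calendars cannot pin the Eid holidays to fixed Gregorian dates and must be adjusted each session.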
Muslims would like to see Eid receive recognition in some missionary schools, where Muslims constitute a minority of the student population. These schools should be accommodating and adopt a live-and-let-live approach by observing the Muslim holidays as public holidays – official breaks that require educational institutions to close down. This Educators' Guide draws on An Educator's Guide to Islamic Religious Practices by the Council on American-Islamic Relations (CAIR). The author would therefore like to thank the body for the innovative idea that has proven to be a potent tool in the promotion of mutual understanding, the avoidance of religious crises and the elimination of tensions in public schools in the United States of America. It is hoped that Nigeria's public and private conventional schools offering basic and secondary education will appreciate this innovation, in the light of our Triple African Heritage and in the spirit of live and let live. Idris Alao holds a Bachelor of Arts Education Degree in Islamic Religious Studies Education from the prestigious citadel of academic excellence, the Lagos State University. He is an Edupreneur and Professional Teacher, registered with the Teachers Registration Council of Nigeria (TRCN). He bagged an LL.B Degree in Common and Islamic Law from the much-sought-after University of Ilorin, Ilorin, and was called to the Nigerian Bar as a Barrister and Solicitor of the Supreme Court of Nigeria. He possesses expertise in practical approaches to achieving a robust combination of Conventional, Arabic and Islamic education. He is also a Certified Translator in the Arabic-English-Arabic language cluster and a member of the Nigerian Association of Educational Administration and Planning (NAEAP), the Commonwealth Council for Educational Administration and Management (CCEAM), the Nigerian Institute of Translators & Interpreters (NITI) and the Nigerian Bar Association (NBA). [email protected]
Menopause occurs when a woman hasn't menstruated in 12 consecutive months and can no longer become pregnant naturally. It usually begins between the ages of 45 and 55, but can develop before or after this age range. Menopause can cause uncomfortable symptoms, such as hot flashes and weight gain. For most women, medical treatment isn't needed for menopause. Read on to learn what you need to know about menopause. Most women first begin developing menopause symptoms about four years before their last period. Symptoms often continue until about four years after a woman's last period. A small number of women experience menopause symptoms for up to a decade before menopause actually occurs, and 1 in 10 women experience menopausal symptoms for 12 years following their last period. The median age for menopause is 51, though it may occur on average up to two years earlier for Black and Latina women. More studies are needed to understand the onset of menopause for women of color. There are many factors that help determine when you'll begin menopause, including genetics and ovary health. Perimenopause occurs before menopause. Perimenopause is a time when your hormones begin to change in preparation for menopause. It can last anywhere from a few months to several years. Many women begin perimenopause at some point after their mid-40s. Other women skip perimenopause and enter menopause suddenly. About 1 percent of women begin menopause before the age of 40, which is called premature menopause or primary ovarian insufficiency. About 5 percent of women undergo menopause between the ages of 40 and 45. This is referred to as early menopause.
Perimenopause vs. menopause vs. postmenopause
During perimenopause, menstrual periods become irregular. Your periods may be late, or you may completely skip one or more periods. Menstrual flow may also become heavier or lighter. Menopause is defined as a lack of menstruation for one full year.
Postmenopause refers to the years after menopause has occurred. Every woman's menopause experience is unique. Symptoms are usually more severe when menopause occurs suddenly or over a shorter period of time. Aside from menstruation changes, the symptoms of perimenopause, menopause, and postmenopause are generally the same. The most common early signs of perimenopause are:
- less frequent menstruation
- heavier or lighter periods than you normally experience
- vasomotor symptoms, including hot flashes, night sweats, and flushing
An estimated 75 percent of women experience hot flashes with menopause. Other common symptoms of menopause include:
- vaginal dryness
- weight gain
- difficulty concentrating
- memory problems
- reduced libido, or sex drive
- dry skin, mouth, and eyes
- increased urination
- sore or tender breasts
- racing heart
- urinary tract infections (UTIs)
- reduced muscle mass
- painful or stiff joints
- reduced bone mass
- less full breasts
- hair thinning or loss
- increased hair growth on other areas of the body, such as the face, neck, chest, and upper back
Common complications of menopause include:
Menopause is a natural process that occurs as the ovaries age and produce lower levels of reproductive hormones. The body begins to undergo several changes in response to lower levels of hormones such as estrogen and progesterone. One of the most notable changes is the loss of active ovarian follicles. Ovarian follicles are the structures that produce and release eggs from the ovary wall, allowing menstruation and fertility. Most women first notice the frequency of their periods becoming less consistent, as the flow becomes heavier and longer. This usually occurs at some point in the mid-to-late 40s. By the age of 52, most U.S. women have undergone menopause. In some cases, menopause is induced, or caused by injury or surgical removal of the ovaries and related pelvic structures.
Common causes of induced menopause include:
- bilateral oophorectomy, or surgical removal of the ovaries
- ovarian ablation, or the shutdown of ovary function, which may be done by hormone therapy, surgery, or radiotherapy techniques in women with estrogen receptor-positive tumors
- pelvic radiation
- pelvic injuries that severely damage or destroy the ovaries
It's worth talking with your healthcare provider if you're experiencing troublesome or disabling menopause symptoms, or if you're experiencing menopause symptoms at 45 years of age or younger. A new blood test known as the PicoAMH Elisa diagnostic test was recently approved by the Food and Drug Administration (FDA). This new test may be helpful to women who show symptoms of perimenopause, which can also have adverse health impacts. Early menopause is associated with a higher risk of osteoporosis and fracture, heart disease, cognitive changes, vaginal changes and loss of libido, and mood changes. Consistently elevated FSH blood levels of 30 mIU/mL or higher, combined with a lack of menstruation for one consecutive year, are usually confirmation of menopause. Saliva tests and over-the-counter (OTC) urine tests are also available, but they're unreliable and expensive. During perimenopause, FSH and estrogen levels fluctuate daily, so most healthcare providers will diagnose this condition based on symptoms, medical history, and menstrual information. Depending on your symptoms and health history, your healthcare provider may also order additional blood tests to help rule out other underlying conditions that may be responsible for your symptoms. You may need treatment if your symptoms are severe or affect your quality of life.
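The confirmation rule stated above has two parts: consistently elevated FSH (30 mIU/mL or higher) and 12 consecutive months without menstruation. Purely as an illustration of how the two stated criteria combine (the function name and input shapes are invented for this sketch; it is in no way a clinical tool), the rule can be written as a simple check:

```python
# Thresholds taken directly from the article's stated criteria.
FSH_THRESHOLD_MIU_PER_ML = 30.0   # "consistently elevated" FSH level
MONTHS_WITHOUT_MENSTRUATION = 12  # one consecutive year

def meets_stated_criteria(fsh_readings_miu_ml, months_amenorrhea):
    """Return True only if both of the article's criteria are satisfied.

    fsh_readings_miu_ml: recent FSH readings; every reading must be at or
    above the threshold, since the text specifies *consistently* elevated
    levels, not a single high value.
    """
    consistently_elevated = (
        len(fsh_readings_miu_ml) > 0
        and all(r >= FSH_THRESHOLD_MIU_PER_ML for r in fsh_readings_miu_ml)
    )
    return consistently_elevated and months_amenorrhea >= MONTHS_WITHOUT_MENSTRUATION
```

Note that, as the article goes on to say, FSH fluctuates daily during perimenopause, which is why providers rely on symptoms and history rather than a single reading.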
Hormone therapy may be an effective treatment in women under the age of 60, or within 10 years of menopause onset, for the reduction or management of:
- hot flashes
- night sweats
- vaginal atrophy
Other medications may be used to treat more specific menopause symptoms, like hair loss and vaginal dryness. Additional medications sometimes used for menopause symptoms include:
- topical minoxidil 5 percent, used once daily for hair thinning and loss
- antidandruff shampoos, commonly ketoconazole 2 percent and zinc pyrithione 1 percent, used for hair loss
- eflornithine hydrochloride topical cream for unwanted hair growth
- selective serotonin reuptake inhibitors (SSRIs), commonly paroxetine 7.5 milligrams, for hot flashes, anxiety, and depression
- nonhormonal vaginal moisturizers and lubricants
- low-dose estrogen-based vaginal lubricants in the form of a cream, ring, or tablet
- ospemifene for vaginal dryness and painful intercourse
- prophylactic antibiotics for recurrent UTIs
- sleep medications for insomnia
- denosumab, teriparatide, raloxifene, or calcitonin for postmenopausal osteoporosis
There are several ways to reduce minor-to-moderate menopause symptoms naturally, using home remedies, lifestyle changes, and alternative treatments. Here are some at-home tips for managing menopause symptoms:
Keeping cool and staying comfortable
Dress in loose, layered clothing, especially during the nighttime and during warm or unpredictable weather. This can help you manage hot flashes. Keeping your bedroom cool and avoiding heavy blankets at night can also help reduce your chances of night sweats. If you regularly have night sweats, consider using a waterproof sheet under your bedding to protect your mattress. You can also carry a portable fan to help cool you down if you're feeling flushed.
Exercising and managing your weight

Regular exercise can help:
- increase energy
- promote a better night's sleep
- improve mood
- promote your general well-being

Communicating your needs

Talk to a therapist or psychologist about any feelings of depression, anxiety, sadness, isolation, insomnia, and identity changes. You should also try talking to your family members, loved ones, or friends about feelings of anxiety, mood changes, or depression so that they know your needs.

Supplementing your diet

Take calcium, vitamin D, and magnesium supplements to help reduce your risk for osteoporosis and improve energy levels and sleep. Talk to your doctor about supplements that can help you with your individual health needs.

Practicing relaxation techniques

Practice relaxation and breathing techniques, such as:

Taking care of your skin

Apply moisturizers daily to reduce skin dryness. You should also avoid excessive bathing or swimming, which can dry out or irritate your skin.

Managing sleeping issues

Use OTC sleep medications to temporarily manage your insomnia, or consider discussing natural sleep aids with your doctor. Talk to your doctor if you regularly have trouble sleeping so they can help you manage it and get a better night's rest.

Quitting smoking and limiting alcohol use

You should also limit your alcohol intake to reduce worsening symptoms. Heavy drinking during menopause may increase your risk of health concerns.

Some limited studies have supported the use of herbal remedies for menopausal symptoms caused by estrogen deficiency. Natural supplements and nutrients that may help limit menopause symptoms include:
- vitamin E
- flax seed

There are also claims that black cohosh may improve some symptoms, such as hot flashes and night sweats, though the evidence for this is mixed.

Menopause is the natural cessation, or stopping, of a woman's menstrual cycle, and marks the end of fertility. Most women experience menopause by the age of 52, but pelvic or ovarian damage may cause sudden menopause earlier in life.
Genetics or underlying conditions may also lead to early onset of menopause. Many women experience menopause symptoms in the few years before menopause, most commonly hot flashes, night sweats, and flushing. Symptoms can continue for four or more years after menopause. You may benefit from treatment, such as hormone therapy, if your symptoms are severe or affect your quality of life. Generally, menopause symptoms can be managed or reduced using natural remedies and lifestyle adjustments.
Long after the fighting stops, war continues to affect the health of soldiers, civilians and the environment. For some people, the physical and mental damage caused by war lasts a lifetime. Medical teams have had to develop methods to help them adjust to living with disability and illness. But sometimes innovations in military medicine result in better ways to treat an injury or advance fields of medicine, such as plastic surgery, psychiatry and emergency medicine. The wartime experience of surgeons who dealt with numerous limb injuries contributed to the growth of orthopaedic surgery (the branch of surgery concerned with the musculoskeletal system) in the first decades of the 20th century. In the past, most soldiers with serious wounds would have died, if not from their wounds then from infections. As military medicine improved, more and more soldiers survived. But many war veterans were left to cope with long-term physical and mental medical conditions. The loss of a limb was one of the earliest and most visible disabilities for war veterans. Although rates of amputation declined with improved surgical techniques and the introduction of antisepsis in the 19th century, the sheer scale of industrial warfare in the First World War (1914–18) resulted in large numbers of amputees. Specialist rehabilitation centres such as Queen Mary's Hospital in Roehampton were set up to fit veterans with prosthetic limbs and help them with physical rehabilitation and social support. After the Second World War (1939–45), faster and better treatment meant that more soldiers with serious neck and spinal injuries survived. But irreparably damaged nerves left many permanently paralysed with paraplegia (impairment in the legs) or quadriplegia (impairment in all four limbs).
In September 1943, the government asked the spinal injuries specialist Dr Ludwig Guttmann to establish the National Spinal Injuries Centre at Stoke Mandeville Hospital, the UK's first specialist unit for treating spinal injuries. It became a leading centre for neurosurgery. After surgery, the long process of rehabilitation began. Guttmann believed that sport was a major part of rehabilitation. Sport helped veterans build up physical strength and self-confidence. Ludwig Guttmann organised the first Stoke Mandeville Games for disabled patients on 28 July 1948, the same day as the start of the London 1948 Summer Olympics. His Games are often regarded as the forerunners to the modern Paralympic Games.

War and mental health

Veterans with mental health conditions resulting from their wartime experience often needed continuing treatment and support after the war. Both World Wars impacted the fields of psychology and psychiatry, as specialists were called upon to treat soldiers suffering from debilitating stress and trauma. Special units were set up to receive soldiers experiencing mental trauma; some centres were near the war zone so that soldiers could return to the front once they recovered. More serious cases were sent back to military hospitals in the UK. Successive wars have had their own ways of describing and dealing with mental health conditions resulting from war:

Shell shock: 'blame the soldier not the situation'

The term 'shell shock' was coined in the First World War. At first, doctors thought that it was a physical illness resulting from the effects of sustained shelling. Many soldiers who survived an explosion had no visible injuries but exhibited symptoms that could be attributed to spinal or nerve damage. The range of symptoms ascribed to shell shock included tinnitus, amnesia, headaches, dizziness, tremors and hypersensitivity to noise. Shell shock could also manifest as helplessness, panic, fear, flight or an inability to reason, sleep, walk or talk.
The young men who signed up to fight in 1914 had little preparation or support for dealing with the stress and trauma of modern warfare. Some refused to fight and were mistakenly accused of cowardice. During the First World War, 309 British soldiers were executed, many of whom are now believed to have had mental health conditions at the time. When soldiers who had never been exposed to shelling began to develop the symptoms of shell shock, the phenomenon was re-characterised as a range of mental rather than physical conditions and collectively called war neuroses. The specific diagnosis often depended on who you were. The walking wounded and officers tended to be diagnosed with neurasthenia or nervous breakdown. Other cases of debilitating nervous symptoms were regarded as a consequence of inherited weakness or degeneration. The soldier was blamed, not the situation. Shell shock was poorly understood, medically and psychologically, and the official response was often unsympathetic. Soldiers were suspected of feigning symptoms and accused of malingering to avoid fighting. For those who were discharged or returned after the war, some treatments were available. For example, neurologists at the Royal Army Medical Corps hospitals at Netley and Seale Hayne tried a range of therapies such as hypnosis, electrotherapy and psychotherapy. Those who failed to respond or receive adequate help could end up in general asylums after the war, while many others returned to their homes to suffer in isolation.

Private Meek's Recovery

This short video was filmed at the military hospital at Netley during the First World War. It shows Private Meek, a 28-year-old soldier, during his two-and-a-half-year journey to recovery from shell shock. [1 min. 42 secs.] This is an extract from a longer promotional film directed by neurologist Major Arthur Hurst. Patients such as Private Meek participated in reconstructions in order to demonstrate the results of their treatment.
Battle fatigue: 'every man has his breaking point'

Battle fatigue or combat stress reaction (CSR) was a term used in the Second World War to describe a range of behaviours resulting from the stress of battle. The most common symptoms were fatigue, slower reaction times, indecision, disconnection from one's surroundings and the inability to prioritise. Battle fatigue was usually a short-term condition but could develop into something more serious. Men and women diagnosed with battle fatigue were removed from the front line for rest and recovery. Treatment was not very effective, and 40% of medical discharges from the military during the war were for psychiatric reasons. Military psychiatrists were more sympathetic towards troops in the Second World War than in the First World War. The slogan 'every man has his breaking point' was used to warn people about the danger of stress. The idea that anyone could succumb to stress reduced the stigma surrounding battle fatigue, and helped traumatised soldiers to be accepted when they returned home. The focus shifted from the 'weak or inadequate' soldier to the traumatic situation.

Post-traumatic stress disorder (PTSD)

Post-traumatic stress disorder (PTSD) was the term developed soon after the Vietnam War and codified by the American Psychiatric Association in 1980 to explain the mental and psychological effects of war on soldiers. PTSD is a mental health disorder that can develop after the experience of either a single traumatic event or recurring traumatic experiences. It can affect a person's social interactions, their ability to work, or other important areas of their lives. Symptoms may include disturbing thoughts, feelings, mental or physical distress, unexpected changes in behaviour and an increase in the fight-or-flight response. The diagnosis of PTSD now covers a wide range of traumatic events such as accidents, assault, natural disasters and exposure to violence.
The risk of developing PTSD after a traumatic event varies by trauma type, but military service and becoming a war refugee both increase the risk of developing PTSD.

Not all wounds are visible

This film was produced in collaboration with a small group of British military veterans from recent conflicts, who have all been diagnosed with post-traumatic stress disorder (PTSD). They hope to give an insight into the experience of living with this condition, which they believe is poorly understood. It seems that every combat situation has its own particular mental pressures: shell shock, war neuroses, battle fatigue, combat stress reaction and PTSD are all time-specific terms for a variety of psychological symptoms that can result from war. Some are short-term, while others are more long-lasting and need continuing therapeutic support.

Wartime medical innovation

There is a lot of debate about how much war and medicine have influenced each other. Sometimes war adds to medical knowledge by drawing attention to a particular injury, such as the loss of a limb. Military medicine has also influenced how medicine is done. Triage, the system of prioritising multiple casualties, has been adopted for all emergency medicine ever since the First World War. War has also created new roles and opportunities in medicine. The First World War saw a huge increase in the number of female nurses and male orderlies working in field hospitals near the front line. And with most men sent to fight, the War Office called on women to drive ambulances and female surgeons to perform surgery both in the war zone and at home, giving them a chance to prove their competence. Military medicine has also influenced society in unexpected ways. The military was one of the first organisations to use physical and psychological assessment tests on new recruits.
Concerns about the poor performance of troops in the South African War prompted questions about the national efficiency and 'racial health' of the population. This eventually led to medical inspections for new recruits in the First World War. Towards the end of the war, a physiologist working for the military, Group Captain Martin Flack, used his knowledge of respiratory and circulatory physiology to create tests for Royal Air Force recruits. His tests were designed to select the recruits that were most suitable for training as pilots. Psychological evaluation was added to the assessment process in the Second World War. The idea of assessing and evaluating applicants gradually filtered into civilian recruitment and is now used for all kinds of jobs.

Suggestions for further research

- M R Howard, 'British medical services at the Battle of Waterloo', British Medical Journal, 24 December 1988 (pdf ejournal)
- H Graeme Anderson, 'The medical and surgical aspects of aviation' [with chapters on applied physiology of aviation, by Martin Flack], Hodder & Stoughton, 1919 [ebook]
- J Laffin, 'Combat Surgeons', Wiltshire: Sutton, 1999
- J Scruton, 'Stoke Mandeville: Road to the Paralympics', Aylesbury: The Peterhouse Press, 1998
How a COVID-19 vaccine could change travel for good

Video above: Traveling increases chance of COVID-19 transmission, infectious disease expert

It was the good news that gave the world hope. On Nov. 9, it was announced that one of the candidates for a COVID-19 vaccine, made by Pfizer and BioNTech, was over 90% effective in preventing volunteers from contracting the virus. The beleaguered travel industry immediately got a boost, with airline and cruise company share prices rallying, and tour operators seeing upticks in searches and bookings for 2021. Finally, it feels as if vacations might be in our future. But will travel post-vaccine go back to how things were, or has your vacation been irrevocably changed? For starters, it'll be a while before we know the answer to that, according to travel specialist Dr. Felicity Nicholson, lead doctor at Trailfinders Travel Clinic in the UK. "I think it's just a matter of time before things come back to some degree of normality, but it'll take quite a long time," she said. "At the moment, travel is way down the pecking order of vaccination." Nicholson said that countries will first be looking to vaccinate the vulnerable, then health care workers, before making inroads into the general population. That's not to mention the practical issues around the transportation and storage of the Pfizer vaccination, meaning that if it's the one that wins the race, it could take even longer to distribute. "We should be encouraged but understand it's unlikely to be as rapid as governments are suggesting," she said. "If they can find a way to transport it properly (it needs to be stored at minus 70 degrees Celsius, or minus 94 F), it could be early next year before things start to get going. Countries whose economies are based on tourism will be desperate to get people back and moving, but most people (in the travel industry) aren't hopeful that things will really pick up until the fall of 2021," Nicholson added.
And don't assume that once a vaccination program starts rolling out, you can jump on the next plane, whether or not you've had it. Nicholson reckons that proof of vaccination might become advisory, or even mandatory, for destinations. An international certificate of vaccination or prophylaxis (ICVP) — which travelers must carry to enter certain countries which mandate a yellow fever vaccination, or to exit those with high polio risk — could be the next addition to your travel kit. "I think we'll have a formal certificate, either online or on paper, showing that you've been vaccinated at a recognized, accredited clinic, as we do for yellow fever," Nicholson said. "It'll be the destination demanding it — and that could be everyone. Most countries where there's a vulnerable or older population will certainly be demanding proof because we know how devastating the disease can be."

Making up for lost time

So, you've had your vaccine, and are carrying your certificate — what's next? Well, you might be off on the trip of a lifetime, according to tour operators. John Bevan, CEO of Dnata Travel Group, which owns brands Travelbag, Travel Republic and Netflights as well as trade brand Gold Medal, says that there's been a noticeable uptick in bookings since news of the vaccine was announced. The average booking value increased by about 20% this week, Bevan said, compared to pre-COVID times. "People didn't get a vacation this year, so they're treating themselves," he said. "They're booking higher category rooms, and we're seeing more family groups, too." Netflights just took a booking for a group of 19 people to go to Dubai for Easter 2021. Tom Marchant, co-founder of luxury tour operator Black Tomato, agrees. "People have desperately missed the chance to travel, and want something to look forward to," he said.
"They're saying, 'That first trip, I'm going to make it special.'" The demand for something out of the ordinary is so strong that in October the company launched a new lineup of once-in-a-lifetime trips, Journeys to Come — anything from seeing the solar eclipse in Patagonia to swimming with whales under the midnight sun in Iceland. "We wanted to create something to make people say, 'That'll get me through these challenging times,'" Marchant said. Bevan said his brands have also seen triple-digit growth in trips to the U.S. for next year, from May onwards. The Maldives and the UAE are other popular destinations for Europeans wanting to escape next year. He earmarks Dubai in particular as a destination that's working hard to get tourists safely back, and also predicts the Caribbean will do very well. However, he thinks Australia and New Zealand will be off-limits until the last quarter of 2021. Marchant said his clients are starting to look towards Asia — although he thinks that the typical country-hopping trip through Southeast Asia will be off the cards for a while, because of the bureaucracy of testing and certificates at every border or on every flight. "Instead of hopping around, I think people will just go to a couple of places and really immerse themselves, and I think that's really positive," he said. "There'll be a shift in how people enjoy places — it won't be just box-ticking anymore." For the same reason, he thinks that multiple weekend breaks will be replaced by longer, two-week trips.

Bucket list safaris

However, it's not all plain sailing yet. According to Nigel Vere Nicoll, president of the African Travel and Tourism Association, the trade body for travel to sub-Saharan Africa, the biggest problem with travel in 2021 won't have anything to do with a vaccine — it'll be to do with flight availability. This is particularly the case for this part of the continent, which has just three main international hubs: Addis Ababa, Nairobi and Johannesburg.
South African Airways, based at the latter, are currently not flying, while Kenya Airways is hoping for a cash injection from the government. Ethiopian Airlines, however, is expanding. "From there, you have to get an extra flight and domestic airlines have cut back," Vere Nicoll said. "And airlines won't increase flights unless they're sure there's enough business. It'll take time but we have to support them. The vaccine is a very, very exciting step — the first brick in rebuilding everything — but I can't see it rolling out until the middle of next year." For what it's worth, he doesn't think African countries — which have emerged relatively unscathed by the pandemic — will mandate the vaccine for travelers. Safari destinations have been particularly hard hit by the collapse of tourism, with poaching on the rise in national parks, and economic devastation for those working in lodges. And "grossly unfair" travel bans from the likes of the UK government — who impose a two-week quarantine on travelers coming from any African country, most of which have seen under 1,000 deaths from the virus, compared to the UK's 50,000 — haven't helped. And yet, Vere Nicoll said that the future could be bright for those looking for the holiday of a lifetime. "The Great Migration was better this year than it has been for years, and there are great initiatives going on — people have used this time to get tourism ready for when we come," he said. And, of course, a safari trip is largely outdoors.

Chomping at the bit to get to Europe

Are there any destinations which have been so marred by the virus that we won't want to go there for a while? Despite the high number of COVID-19 deaths in the U.S., Bevan's data shows that visitors are keen to get there — he thinks that could be optimism regarding the Biden administration's pledge to curb the virus.
But he warns that Europe, which has been in the center of the pandemic, may not be so attractive to travelers from countries who've controlled it better. However, Tom Jenkins, CEO of the European Tour Operators Association (ETOA), disagrees. "The response to being told you can't do something is to want to do it, so if you've not been to Europe for a year, you'll want to go to Europe," he said. "You'll never see it this empty, you'll never see prices this competitive, you'll never have this experience again. There's real latent demand." Jenkins said that tour operators are already looking at a relatively good year, with plenty of trips postponed from 2020 to 2021, and search engine data showing big interest in travel to Europe from other continents. And with numbers not expected to recover until 2022, the continent will be emptier than it has been in many of our lifetimes. However, Jenkins warns that "there's no momentum in the market" — nobody traveling to Europe and inspiring people to follow them. Post-vaccine, it'll all hinge on the airlines to lay on flights, and the destinations making sure they're ready to go. "Cities bounce back fairly quickly but it may not be that straightforward," he said. Even with a potential vaccine, Bevan thinks that the travel experience itself will have changed — particularly at the airport, where he thinks airlines will move to a largely touchless experience. On board, he thinks the COVID-induced rule of deplaning row by row will continue — and that's a great thing. "I flew on EasyJet to Greece in August and it was immaculate — they made us stay seated till the row in front had got off, and there wasn't that horrendous bunfight. It was so calming," he said. And at the other end, Bevan thinks the restrictions on buffets will stay "till people feel more comfortable." He predicts the same for personal space. "I think we'll be more careful for a long time," he said. 
"I can't see us hugging or shaking hands with people we don't know for quite a while."

Flexibility is here to stay

One good thing to come out of the pandemic? Flexibility. Many deals on offer for 2021 are fully flexible, and it looks like that will continue, at least in the short to medium term. "The industry has handled the refunds (from earlier in the pandemic) with various degrees of effectiveness, and I think the consumer is going to be far more mindful of what they're booking and what they expect," Marchant said. "Suppliers should be able to offer flexibility, and the customer will expect transparency." Under a new policy, Black Tomato is offering a full refund up to 30 days before departure on most new bookings — and although Marchant doesn't know how long that'll last, he said, "I don't see it as a flash in the pan." Bevan agrees, and reckons flexibility is how the industry will recover. For the traveler, he said, the flexibility that airlines are currently offering means that there's "not a huge amount of risk" for those wanting to book. His only caveat — he advises would-be travelers to book as soon as they see a deal with flexible terms, because airline capacity will still be low in 2021.

A wakeup call for us all

Other upsides might emerge from the pandemic, too. Nicholson thinks that the resources poured into the vaccine effort will benefit the fight against other diseases — and predicts better vaccinations for viruses including Ebola. And she thinks travelers' own attitudes towards health while on the road will improve. "People are much more aware of infectious diseases now," she said, adding that, before the pandemic, the number of travelers who booked a pre-trip consultation was pretty low. "Before, they might have gone abroad without consulting anyone. (If the vaccine is mandatory) they'll have to come in for a consultation and we can talk to them about other risks in that destination.
"In western countries, we tend to be cavalier, but perhaps people will respect how serious viruses can be now. "Everyone's had a wake-up call and learned about virology, and that can only help."
The science of biology is undergoing a historic transformation, from one based on observation to one based on creation, and UCSF is in the forefront of driving that change. The move to a New Biology promises to accelerate an era of astounding discovery and achievement, in which science will not only cure many diseases and offer new therapies, but will also provide new breakthroughs in energy, agriculture, the environment and other fields in which biology plays a role. New Biology follows in the footsteps of earlier revolutions in its sister sciences of physics in the 16th and 17th centuries and chemistry in the 19th. New telescopes allowed astronomers to move physics from observation to analysis, ultimately enabling Newton to confirm the truth of his universal principles. Similarly, the development of the periodic table of elements in the 1860s helped establish the principles of chemical structure, and the growth of synthetic chemistry that followed helped propel the Industrial Revolution. Advances in technology and new discovery are now leading to New Biology, both through the mapping of the human genome and in the use of increasingly powerful microscopes and other instruments. Instead of merely describing what exists, New Biology explores what is possible, leading to broader, more systematic applications. “How we think of the role of biology is changing,” said Wendell Lim, PhD, professor in UCSF’s Department of Cellular and Molecular Pharmacology and investigator with the Howard Hughes Medical Institute. “We’ve got so much data from the genomic and the proteomic revolutions that we can start to see how biological systems work together.” “New Biology has two main streams,” Lim says. 
"We are working to understand biology at a deeper, mechanistic level, and to apply biology to solve a broader swath of problems." Lim and Keith Yamamoto, PhD, UCSF's vice chancellor of research and executive vice dean in the UCSF School of Medicine, have been leading the push for New Biology. Yamamoto testified in Congress to gain support for additional funding, and he and Lim co-authored a National Academy of Sciences report as part of the Committee on a New Biology for the 21st Century. "Biology is at an inflection point, poised on the brink of major advances that could address urgent societal problems," Yamamoto told Congress. He described four areas of "urgent need — food, energy, the environment and health" — and said biological research could help bring new advances in each. "It no longer makes sense to talk about biomedical research as if it is unrelated to biofuel or agricultural research; advances made in any of these areas are directly applicable in the others, and all rely on the same foundational technologies and sciences." UCSF researchers are already applying the principles of New Biology in their work. In one key aspect of New Biology, "we need to be effective at bringing people from different fields together, breaking down barriers and creating a culture of cooperation," Lim says. "UCSF has always been a place that is historically not dominated by departments. Turf over ideas doesn't exist here. It's the perfect environment to be open to thinking about using different approaches to solving different classes of problems." The Team Challenge organized by the Cell Propulsion Lab (an NIH-sponsored Nanomedicine Development Center at UCSF) in 2009 was a good example of New Biology in action. In that exercise, Dan Fletcher, a bioengineering professor at UC Berkeley, joined with a team of UCSF and UC Berkeley scientists from different disciplines to conceptualize how to create a vesicle that could deliver therapies to cells.
“If you had a blank sheet of paper, and the ability to put together any components you wanted, what would you want?” Fletcher asked. That notion was put to bright people with diverse backgrounds, such as cell biology, pharmacology, bioengineering and chemistry, from UCSF, UC Berkeley and Lawrence Berkeley Laboratory. “It’s an attractive idea to engineer a new process and find the defining rules of a system, like past engineers and physicists have done for other systems,” says Jessica Walter, PhD, a biology/biophysics postdoc who participated in the vesicle challenge. Walter remembers the inspiring nature of the project. “You could see ideas that at first sounded totally insane, but when people took them to their logical limits, they got something that might be feasible,” Walter says. “It’s counter-intuitive, but crazy ideas could become practical.” For instance, researcher Aynur Tasdemir, a former postdoc in the Lim lab, proposed a “kamikaze cell,” Walter says, and “everybody laughed at the idea.” But they went ahead and brainstormed, and actually figured out a way it might make sense to give the vesicles something toxic, send them somewhere such as a cancer cell, and then have them release their payload. Jason Park, an MD/PhD graduate student in the Cell Propulsion Lab, continues to pursue this approach. “We thought 20 years ago, we could attack cancer with a magic bullet, like radiation or chemotherapy,” Walter says. “But determining which cells are bad or good requires more computation than a single marker. It’s the kind of problem where an engineer might come in handy.” Fletcher says the tools that have developed in the intervening years have made this kind of thinking possible. “Rebuilding parts of cellular processes to harness them as therapeutics is not something that was realistic years ago,” he says. 
"Now it has become a real opportunity, because we have new technology to control the assembly of new materials, together with increased knowledge of what the molecules do and how they do it." Cancer was the target in the spring of 2011, when Wallace Marshall, PhD, an associate professor of biophysics and biochemistry at UCSF, organized a meeting of cancer biologists and physicists. Recognizing the complex ways cancer operates, Marshall considered the notion that many problems that arise in cancer biology are similar to those faced by physicists in understanding the behavior of complex systems. His symposium studied whether the approaches used for understanding physical systems — conceptual, experimental, and computational — might provide useful insights into the behavior of cancer cells and tumors. "The basic idea is to try to put some more general principles into biology to make it more of an engineering discipline than just a collection of facts," Marshall says. "I'm an engineer by training so that works for me. I'm trying to figure out how cells solve their own engineering problems. If a cell wants to change its structure, how does it do that?" One sure sign that science is heading in this direction, Marshall says, is that students arrive at UCSF "wanting to do this." When he was a student, he had a "weird double major" of electrical engineering and biochemistry, with the goal of finding out, "how do I build things inside of cells?" Now universities are encouraging this sort of cross-fertilization, and he says it's essential for moving science forward. Talk to Marshall and others in the field, and a theme emerges — a search for the big picture, for the same sort of principles underlying biology that Newton found when he studied physics 400 years ago.
Some examples:

* Zev Gartner, PhD, an assistant professor in pharmaceutical chemistry, is studying the body’s building blocks – from molecules to cells to organs – to better understand biological processes relating to tissue structure and its breakdown during disease. “At its core, we are trying to understand the way different systems and modules fit together in the complex task of maintaining homeostasis” (the body’s ability to remain stable), Gartner says. “We’re not looking at, ‘How does this one little piece work?’ It’s only recently become possible to think about things in this way.”

* Michael Fischbach, PhD, an assistant professor in bioengineering and therapeutic sciences, also works with the principles of modularity, but his lab’s approach is to build things and then study how they work. “When we build something, we have the potential to create something that we can actually understand in all of its complexity,” Fischbach says. And then, when scientists “perturb,” or disrupt, the system, they can see the results of that single action reverberate throughout. “Think of synthetic ecology,” Fischbach says. “How do we construct a community of bacterial cells that I can put into the gut of a human being and get them to perform functions that are beneficial to the host? How is it that a community of hundreds of thousands of bacteria interacts? How are they structured physically? How do they alter one another’s behavior? And how does that play a role in how microbes interact with the host? That’s a great example of where you can take the lessons from old-fashioned ecology, and the new-fashioned studies that have revealed a wide range of organisms, and try to construct synthetic communities of bacteria to study.”

* Hana El-Samad, PhD, an assistant professor in biochemistry and biophysics, is, like Marshall, an engineer by training who is deep into the search for sweeping biological principles.
Instead of studying cruise controls and autopilots and other human-engineered systems, El-Samad is studying the “homeostatic feedback systems that nature has evolved.”

“There are so many similarities between complex biological systems and the technological systems we were so successful at designing.”

“The challenge,” she says, “is that there are also differences between engineering and biological systems. In engineering, people can build a laptop to such precise specifications that millions will roll off an assembly line, and each will perform in exactly the same fashion. Natural systems don’t work that way, instead exhibiting stochastic, or unpredictable, behavior. Cells could all be cloned from each other, and yet each behaves differently. But now scientists have the ability to run 1 million tests on those cells, get a distribution of outcomes, and quantify probability. That’s a good first step towards finding the principles that biological systems use to tune their fidelity and precision.”

“We’ve gotten very good at collecting data we can analyze,” El-Samad says. “But we don’t know how to extract principles out of the data. Once we know those laws, the sky’s the limit.”
The successful beating-off of such a large enemy force kept the S-boats well clear of the east coast for a while. As the year ended, Coastal Forces were very much on the attack, harrying enemy shipping from the north – where the Norwegian 54th and now the British 58th MTB Flotilla under Gemmel operated from Lerwick against the Norwegian coast, to encourage the Germans in their belief that the Allies were planning a large-scale invasion of Norway – to the south where the MTBs and MGBs of Plymouth Command were now fighting regularly in the Channel Islands and off the coast of Brittany. One of the long-standing complaints of the German S-boat crews had been that although their boats were faster than most of those of the British, they suffered from inferior armament. During the winter of 1943/4, however, a number of S-boats were rearmed with 40mm in place of their 20mm guns, which brought an aggressive new spirit amongst the German forces. In the past they had always avoided contact with their opposite numbers whenever possible, not from any lack of bravery or determination, but acting on German Naval Command policy. Unless they were defending their own convoys as escorts, their primary targets were Allied merchant ships, using either torpedoes or mines, and not the small craft of Coastal Forces which they usually hoped to avoid by their superior speed. These tactics had become less and less successful as Coastal Forces developed interception techniques to force the S-boats into combat, and on such occasions the German craft usually found themselves outgunned and at a distinct disadvantage. Now, with heavier guns, the S-boats showed less reluctance to engage in a direct confrontation and the time came, on the night of 14/15 February 1944, when they actually sought out and hunted a group of British MTBs. The events of the night began when a group of six S-boats crossed the North Sea with the intention of laying mines off the east coast. 
They were picked up by shore radar at 23.07 and driven off by the Harwich-based corvettes Mallard and Shearwater, which were on patrol. As they sped away, the S-boats were seen to jettison their mines. Meanwhile, five MTBs under Lieutenant Derek Leaf DSC had been sent earlier to the south end of Brown Ridge to try to intercept the enemy boats on their home run. The MTBs were 71½-foot BPB craft, able to stand up to long spells at high speed, but even so, they were too late: the enemy were already ahead of them. So Leaf decided to make for Ijmuiden, to be waiting on their doorstep when they returned to base. Approaching the Dutch coast, however, the MTBs came upon an enemy flak ship and two trawlers. A combined attack was made, in which the flak ship was torpedoed and sunk by MTB 455 (Lieutenant M.V. Round RNZNVR), while Leaf’s boat, MTB 444, repeatedly hit one of the trawlers with gunfire and left it burning. In coming in to make another attack, Leaf ran straight towards another enemy ship which he did not see until the last minute. The MTB was heavily hit both above and below the waterline. Leaf, his Petty Officer and two ratings were killed and two others wounded. This was not realized at the time by the other boats, however, and when three of them regrouped and 444 and 455 could not be seen, Lieutenant C.A. Burk RCNVR, commanding 439, took over as Senior Officer of the unit and set off to search for the missing boats. Almost immediately, Burk had the nasty shock of discovering by radar that six S-boats were shadowing his unit 1,000 yards off on the port quarter, an almost unheard of occurrence. The enemy craft were allowed to close to 600 yards, at which point further radar contacts, probably more S-boats, were picked up ahead. Burk decided to attack the shadowing boats first, rather than all groups at once. The unit altered course to port, increased to full speed and crossed the bows of the leading S-boat at 100 yards. 
Fire was opened at this and the second boat in line. Both were hit and the leader silenced and left stopped with a fire burning aft. During this engagement the MTBs were repeatedly hit by small-arms fire. Burk then turned to attack the second group of six S-boats, but during this manoeuvre MTB 441 (Lieutenant W. Fesq RANVR) lost contact with the others. While trying to rejoin them he came across two boats which he thought were MTBs but which, after challenges were flashed, turned out to be S-boats. Fire was exchanged and 441 broke away. There were so many radar echoes at this time that Fesq had no means of telling which were friendly and which were enemy craft, so he turned and headed back to base. The other two boats meanwhile found themselves outnumbered by no less than seventeen S-boats. Fire was exchanged while running at high speed, but the MTBs sustained little damage and only three men were slightly wounded. Eventually they broke off and set off for base, having already established W/T contact with 441 and 455, which were also returning and not in need of help. No contact could be made with 444 as the wireless on Leaf’s boat had been put out of action. What happened after Leaf was mortally wounded was described by Sub Lieutenant P.P. Bains, the first officer of 444 who took over command: As all the electrical equipment had been put out of action, I decided it was useless to try to regain contact with the remainder of the unit and so steered a north-westerly course to avoid further enemy boats until 04.15, when I altered for base and increased speed to 30 knots. Smoke and a distinct smell of burning was coming from the W/T compartment (where the telegraphist had been one of those killed; the others were the helmsman and Oerlikon and pom-pom gunners). This was drenched with Pyrene as the source could not be discovered but the smell and smoke persisted all the way back to Lowestoft. 
A serious leak in the forward mess-deck was discovered, and as soon as the hands could be spared, a chain of buckets was formed. This managed to keep the water down below danger level. There had also been a fire in the engine room which had been put out by the motor mechanic and stokers. The loss of Derek Leaf, one of the most brilliant of the MTB leaders, was a serious blow to Coastal Forces. It had been he, as Senior Officer of the 3rd MTB Flotilla, who had devised the successful tactics of attacking trawlers from astern as a means of avoiding detection by their hydrophones, which appeared to operate best forward of the beam. Indeed, it was these tactics that had resulted in success on his last attack. During the three years of night fighting by Coastal Forces, it had been the North Sea which commanded the lion’s share of operations. Now it was the turn of the English Channel to come into prominence with the greatest operation of them all, the Normandy landings, in which Coastal Forces had many important roles to play. As the invasion was to be launched principally by Portsmouth Command, in March a Captain, Coastal Forces, Channel, was appointed (Captain P.V. McLaughlin) to the staff of the Commander-in-Chief, Portsmouth, to take charge of all MTB and ML operations (MGBs were no longer designated separately). Such an appointment was long overdue and came more than a year after the similar appointment in Nore Command which had achieved such good results. While Captain McLaughlin and his small staff, which included such experienced flotilla commanders as Christopher Dreyer and Peter Scott, made detailed plans for the part that Coastal Forces were to play in the invasion, American PT boats made their first appearance in the Channel, brought over originally at the urgent request of the Office of Strategic Services to land and pick up agents on the French coast. 
This led to the re-commissioning of Squadron 2, which had previously been wound up in the Solomons at the end of 1943. The first of the Higgins boats, under Lieutenant Commander John Bulkeley, arrived at Dartmouth in April. They were fitted with special navigational equipment to aid them in locating specific points on the French coast, and their officers and men trained in launching and rowing special four-oared boats, constructed with padded sides and muffled rowlocks, so that they could land men and equipment on a beach swiftly and silently on the darkest nights. The first of these cloak-and-dagger operations took place on the night of 19 May, when PT 71 landed agents with equipment on a beach within 500 yards of German sentries. They continued up until November. The crews never knew the identity of their passengers and never once made contact with the enemy, which was as intended. To take part in the invasion itself, further PTs were shipped across: Squadron 34 (Lieutenant Allen H. Harris), Squadron 35 (Lieutenant Commander Richard Davis Jr) and Squadron 30 (Lieutenant Robert L. Searles). Bulkeley was appointed as task group commander of all PT operations. The main job of the British and American craft was to help defend the flanks of the spearhead attack on the shores of the Baie de la Seine and maintain guard over the subsequent flow of cross-Channel traffic. The most likely attacks were expected to come from destroyers, torpedo boats and minesweepers, of which the Germans still had large forces based in the Low Countries and on the Atlantic coast of France, and from S-boats based along the coast from Cherbourg to Holland. 
In the weeks before the invasion, ten flotillas of MTBs and MLs laid nearly 3,000 mines unobtrusively in areas close to the French coast, while at the same time other MTBs carried out their usual anti-S-boat patrols, and the MLs prepared for their wide range of tasks which were to include minesweeping, duties as escorts and navigational leaders, and shepherding in the landing craft. Knowing that an invasion was imminent, although not its date or location, the Germans were preparing their own plans. The S-boats played an important part in these and Petersen, as commander of all S-boats in the Channel and North Sea, with his headquarters at Scheveningen, Holland, was involved in a direct battle of wits with McLaughlin and his staff at Portsmouth. In order to hamper the Allied preparations, Petersen increased his patrols until large numbers of S-boats were at sea every night. Their biggest success came in the early hours of 28 April. A force of six S-boats from the 5th and 9th Flotillas had set sail from Cherbourg the evening before to attack an Allied convoy reported to be in the vicinity of Portland Bill. By the time the S-boats arrived they found they had missed the convoy, which had passed out of the danger area. The German craft were preparing to return home when, to their amazement, they came across a convoy of eight American tank landing ships sailing sedately at only 3½ knots in line ahead across Lyme Bay, off the Dorset coast, with only a corvette as escort, way ahead of the convoy and not guarding its flank. It seemed too good to be true. The S-boats raced into the attack before the Americans knew what had hit them. As the LSTs, packed with men and equipment, scattered in confusion, the S-boats sank two of them with torpedoes and severely damaged a third. The gunners on the other landing craft began wildly firing their machine-guns, often hitting friendly craft. 
By the time the corvette Azalea realized something was wrong and had turned about, the S-boats had sped away, completely unscathed, leaving a death toll of 441 military and 197 naval servicemen, which increased to a total of 749 over the following weeks as more bodies were recovered from the water or floated on to the shore. News of the disaster came as a shock to General Eisenhower and his commanders who were planning for the great invasion of Europe only five weeks away. The American landing craft were in fact taking part in an exercise to practise amphibious landings on the beach at nearby Slapton Sands, chosen because of its similarity to the beaches of Normandy. If a few small German boats could slip through at night, apparently undetected, and create such havoc amongst just eight landing craft, what might they not do against a target of thousands when the real invasion took place? If nothing else, the event once again proved the vital importance of coastal waters, both in offence and defence, and the value of small, well-armed boats which were difficult to detect at night. It was a lesson the Royal Navy had learned the hard way earlier in the North Sea and English Channel but a danger underestimated by the Americans – although the US Navy in the Pacific would have told a different story. Plans were put in hand to strengthen the forces defending the Normandy invasion fleet, including the deployment of more British and American motor gunboats. The Royal Air Force began a series of bombing raids against S-boat bases which severely reduced their numbers. And a news blackout was imposed on the fiasco to avoid a loss of morale among the American troops waiting to take part in the invasion, many of them as inexperienced in combat as those who had tragically lost their lives in Lyme Bay. But in reality, such S-boat successes as Lyme Bay were exceptional. 
As Kapitänleutnant Rudolph Petersen summed up at the time: ‘Owing to the superior radar, strong escorts and air patrols of the enemy, and the German dependence on good visibility (for their boats still lacked radar), each success must be paid for by many fruitless attacks.’ And as the Allies pieced together the events of that night, it became apparent that it was not so much a German success as a chapter of Allied errors. The destroyer Scimitar should have been part of the escort, but had been in a collision with one of the landing craft the night before and had put in to Plymouth for repairs. The destroyer Saladin was intended to replace her, but through an oversight had not reached the convoy. Shore radar contact with the S-boats had in fact been made and Azalea warned two hours before the attack took place, but still the corvette allowed the convoy to proceed slowly right into the enemy’s path without any evasive action. Although the Azalea was under the orders of US Navy officers, it was her British captain who was censured for not taking more effective measures to defend the convoy. The heavy loss of life included men who had jumped from their sinking or damaged craft and drowned because there were too few life rafts, they had not been instructed properly in the use of life vests, and, in the case of the troops, they were encumbered by their heavy equipment and the helmets they were still wearing. As stated in Captain Roskill’s Official History of the War at Sea: The first five months of 1944 marked a very important stage in the development of our maritime control over the narrow waters; for it was then that we gradually established a sufficient ascendancy to ensure that, when the invasion fleets set sail for France, the Germans would not be in a position to molest them seriously. 
The degree of success accomplished could not, of course, be judged until the expedition actually sailed; but by the end of May there were solid grounds for believing that, even though the passage would undoubtedly be contested with all the means available to the enemy, his worst efforts would not suffice to frustrate our purpose. Such was the measure of the accomplishment of the astonishingly varied forces of little ships and aircraft which had so long fought to gain control of our coastal waters, and to deny a similar measure of control to the enemy.

As D-Day approached, so the work of Coastal Forces increased. Now it was not only a matter of laying mines to protect the flanks of the 15-mile-wide path of the invasion fleet across the Channel, but every effort had to be made to prevent S-boats from mining this path or the convoy routes of the invasion forces gathering in harbours along the south coast. There was a momentary alarm when, during an exercise on the night of 18/19 May in which MTBs were to act the part of S-boats to test the defences against these, two real S-boats approached the outer patrols. They were chased off, however, by two SGBs.

It is outside the scope of this book to describe the complex plans for D-Day in detail. Very briefly, Operation Neptune, which was the naval part of the overall invasion, Operation Overlord, called for two great task forces to make landings on either side of a line dividing Seine Bay. To the east was the British area, under Rear Admiral Sir Philip Vian, where three divisions of the British Second Army were to land at three points, ‘Sword’, ‘Juno’ and ‘Gold’, on a 30-mile front between the River Orne and the harbour of Port-en-Bessin. To the west was the American area under Rear Admiral A.G. Kirk, where the US First Army was to make two landings, ‘Omaha’ and ‘Utah’, on a 20-mile front. Two follow-up forces were to come in immediately behind the main assaults: Force L, commanded by Rear Admiral W.E. Parry, and Force B, commanded by Commodore C. D. Edgar.

Out of the total of 1,213 warships allocated to the assault phase of the operation, 495 were coastal craft, including SGBs, MTBs, PTs, MLs and HDMLs. With the Eastern Task Force there were ninety craft, including thirty American. With the Western Task Force there were 113, including eighty-one American. It was in the latter area that the SGBs and most of the PTs were to operate. A further 292 craft came under Home Commands, amongst which were thirteen Dutch, eight French and three Norwegian. The landing craft of various types which were to take part in the initial phase totalled 4,126.

D-Day was originally scheduled for 5 June. As instructed, a group of three PTs, which were to be among the spearhead forces, set out on the 4th to rendezvous with minesweepers off the Isle of Wight and began the crossing towards Seine Bay. Only after they left was the belated notice received that D-Day had been postponed until the 6th because of the bad weather forecast. The PTs were all set to make a landing on their own, a day ahead of time, with consequences in revealing to the Germans the location of the invasion that hardly bear imagining. Luckily they were intercepted by a patrolling destroyer when halfway across the Channel and sent back to Portland.

There was great anxiety and tension throughout that day, 5 June. It seemed impossible that the enemy could still be unaware of the Allied plans, considering the sheer size of the operation and the fact that the concentration of shipping of every kind imaginable in the Solent and Spithead was so great that scarcely an empty berth remained in those wide stretches of sheltered water. But there was no sign of enemy activity. As darkness fell on the waiting, darkened ships it seemed, incredible as it was, that the greatest invasion armada the world had ever seen might after all achieve that element of surprise that counted for so much.
Models of Intervention

Addiction – or, as it is now clinically defined by the American Psychiatric Association, substance use disorder – is a chronic disorder in which an individual continues to use a substance despite ongoing negative consequences associated with its use (this article will use the terms addiction and substance use disorder interchangeably). From the outset, many individuals with substance use disorders who decide they need to stop believe that they can control their behavior on their own; however, many find that they are unable to successfully address their addiction without some form of intervention or outside help.

This article attempts to outline the major types of treatment interventions for addiction and to suggest the type of individual who may be suited to each. It is offered as a general guideline to help individuals find a treatment program or style of treatment suited to their needs. Even though the goal is to outline empirically validated treatment programs, no claims regarding specific successes or failures for any individual are intended or implied. Success in treatment for substance use disorders depends on a number of factors relating both to the individual in treatment and to the program itself.

General Issues in Treatment for Substance Use Disorders

There are some general issues gleaned from the research that should be considered regardless of the specific treatment program a person chooses:

- The length of time in treatment is predictive of the outcome of treatment (people who remain in treatment longer tend to be more successful).
- Relapse appears to be the rule rather than the exception; individuals need to learn from their relapses and move forward.
- Dropouts from treatment programs are also the rule rather than the exception; individuals who drop out can always return to treatment, and individuals in treatment should not be discouraged when others drop out.

The realities of relapse and dropout should not be taken as excuses to relapse or to quit treatment; rather, they reflect the fact that no one who enters recovery has a flawless recovery. Relapse and temporarily leaving treatment are common experiences, and they should be used to help the person learn and grow toward their long-term goals.

A popular model of recovery from substance abuse and addiction, termed the Developmental Model, outlines six stages that individuals with substance use disorders move through on the way to long-term recovery. This model is a useful guideline for the process individuals may experience regardless of the treatment program they choose:

- The Transition Stage is the period when individuals realize that they are unable to use their drug of choice in a recreational or “normal” manner.
- The Stabilization Stage occurs when individuals begin to experience medical issues, such as withdrawal symptoms, or other issues such as cravings, and learn how to isolate themselves from people and conditions that foster their drug use.
- The Early Recovery Stage occurs when people address the need to establish a drug-free lifestyle and engage in relationships that support their long-term recovery.
- Middle Recovery comprises developing a balanced style of living and continuing to address the issues that occurred with the substance use disorder.
- Late Recovery involves identifying and changing the personal beliefs and worldviews that shaped the person’s thinking while they were using the drug of choice.
- Maintenance involves the long-term, often lifelong, management of living without using drugs.

Most people find that recovery is long, often complex, marked by relapse and learning from relapse, and requires perseverance to be successful. Different people have different issues that make the recovery process even more complex, such as co-occurring psychiatric diagnoses, family histories that helped promote substance use, personal stressors that also require attention, relationship issues, and so forth. Treatment programs can be tailored to the specific needs of the individual. It is important to identify and address the particular needs and stressors one faces in treatment, and then work on understanding, living with, or changing them where possible.

Empirically Validated Treatment Models

The term empirically validated means that there is sufficient research-based evidence to indicate that a model is effective in treating substance use disorders. The following is a general discussion of the major models of addiction treatment, most of which are empirically validated, along with a few models that may lack strong research validation but remain at the forefront of recovery for many people.

Pharmacotherapy

Pharmacotherapy refers to the treatment of substance use disorders using specific medications such as Suboxone, methadone, or Antabuse. The specific medication used depends on the type of drug the individual is attempting to stop using. For the most part, pharmacotherapy is used as an adjunct form of treatment; it is not a standalone treatment. The forms of pharmacotherapy vary depending on the drug the person is addicted to and the need to control withdrawal symptoms. Drugs like Suboxone and methadone are used to help individuals through the potentially severe withdrawal symptoms and cravings that often come with stopping use of opiate drugs.
Drugs like Antabuse and naloxone are designed to trigger aversive reactions when an individual takes a specific type of drug: Antabuse makes people violently ill if they drink alcohol, while naloxone precipitates severe opioid withdrawal effects if people take a narcotic drug. Similar types of therapy are used for nicotine addiction and several other drugs of abuse. Most of these medications must be taken under the supervision of a physician and cannot be purchased legally without a prescription. Some of them, like Suboxone, also carry the potential for milder physical dependence and need to be discontinued gradually under the physician’s supervision.

In general, these drugs succeed in their purpose of helping an individual engage in a recovery program while not using the drug of choice; however, compliance can be an issue. For instance, users of Antabuse can simply stop taking the medication and, 48 hours later, drink alcohol without serious ill effects. Still, research indicates that these drugs are generally effective in helping people recover. For instance, a study in The Journal of Community Hospital Internal Medicine Perspectives found that heroin addicts treated with Suboxone had relatively good outcomes and better quality of life.

While pharmacotherapy is often an effective tool for some individuals, at this time there is no reason to think that a program involving pharmacotherapy as a standalone treatment will be successful; all pharmacotherapy treatments should be supplemented with other types of recovery programs. Pharmacotherapy and other forms of empirically validated treatment operate by different but complementary mechanisms.

Pharmacotherapy is best suited for:

- Individuals who might experience severe withdrawal symptoms from the drug of choice, and individuals with severe addiction issues.
- Individuals who have issues with the legal system, who often engage in pharmacotherapy in order to communicate their commitment to recovery to a judge or officer of the court.
- Since pharmacotherapy requires the supervision of a physician, individuals who prefer to “go it alone” may not find this alternative a suitable form of treatment.

Therapeutic Communities, Residential Treatment, and Inpatient Medical Detox Programs

Residential treatment provides 24-hour care, most often in nonhospital settings, although some hospitals may offer this type of treatment. The focus of these programs is typically to re-socialize the individual and offer comprehensive treatment at multiple levels. The treatment is often:

- Highly structured
- Delivered by a multidisciplinary team (e.g., physicians, counselors, nurses)
- Confrontational at times
- Designed to assist clients as they undergo medical detox
- Equipped to treat co-occurring psychiatric issues
- Designed to address dysfunctional belief systems regarding substance use
- Aimed at developing a sense of personal responsibility and accountability
- Intended to help people adopt a new, more functional way to cope with their lives

Inpatient treatment is typically time-limited and can last from 30 days to as long as 12 months, depending on the needs of the individual, the situation, the individual’s ability to pay, and so forth. These programs can be modified to treat individuals with special needs. They are particularly useful for individuals caught up in the criminal justice system as a result of their substance use, since they provide documentation of the individual’s treatment progress and structure while at the same time separating the person from society. While these types of inpatient programs are able to provide empirical evidence of positive results, individuals in them will need to continue an active recovery program once they are released.
These programs are not standalone programs that "cure" patients; there is no cure for addiction. Thus, it is important that individuals released from these programs have a structured program of therapy and supervision in order to sustain their success. These types of programs are best suited for:
- Individuals who require medical detox from a drug
- Individuals with co-occurring psychiatric disorders who need specialized treatment and structured supervision
- Individuals whose living conditions are not conducive to recovery (e.g., homelessness, severely dysfunctional families, situations of abuse, etc.)
- Individuals with potentially severe withdrawal issues
- Individuals with legal issues
- Individuals with multiple instances of relapse who need to develop a structured approach to recovery

Some individuals have specific needs that require 24-hour supervision. That being said, a good number of individuals are able to experience the same benefits from treatment by utilizing outpatient recovery programs.

Behavioral Therapies

The interventions under this heading are not meant to refer solely to interventions from the psychological paradigm of behaviorism. Behavioral therapies include the majority of therapies used to treat substance use disorders that attempt to help people understand their incentives and motivations, develop coping skills, modify attitudes, identify triggers, and so forth. There are hundreds of recognized forms of therapy; a discussion of some of the major categories follows.

Cognitive Behavioral Therapy

Many of the treatment interventions use the principles of Cognitive Behavioral Therapy (CBT), which combines two of the major psychological paradigms, cognitive psychology and behavioral psychology, in order to approach treatment from a holistic perspective.
CBT actually refers to a number of different types of therapies rather than one singular approach, but all of these approaches attempt to challenge a person's maladaptive behavior and learning patterns by examining their thoughts, beliefs, and attitudes, and then help the person restructure them in real-life situations. One of the central principles of CBT is to help people anticipate issues and enhance their ability to cope through the development of effective coping strategies. Central techniques used by most types of CBT in the treatment of addiction include:
- Identifying and assisting with the change of dysfunctional attitudes, beliefs, and expectations
- Self-monitoring to identify triggers/cravings and situations that put one at risk for relapse or use
- Exploring the positive and negative consequences of continued use
- Developing strategies for coping with stress, cravings, and other triggers
- Concentrating on developing positive relationships with others

The research evidence indicates that the skills developed in CBT continue to be applied by people even after they complete the course of treatment. CBT approaches are suited for people who:
- Are motivated to explore their feelings and attitudes
- Are willing to engage in "homework" outside the therapy sessions and apply the principles learned
- Are willing to look at themselves in an open-minded fashion
- Can reflect on their experiences

CBT can work with individuals who are initially very guarded and defensive; however, the therapist will need to address those issues first. Thus, CBT can be tailored for the individual (and most often is), and these approaches allow for quite a bit of variability and individualized restructuring in therapy. CBT is typically time-limited. Individuals who engage in CBT will often get involved in support groups (see below) in order to continue a lifelong recovery program.
Motivational Interviewing

Motivational Interviewing (MI) is based on motivational psychology principles. As the name implies, the therapy attempts to increase the motivation to change in people who are resistant to recognizing that they have a substance use disorder or need to change their behavior. The research on this technique is mixed, indicating that in some cases it may assist in treating substance abuse, while in other instances it is not effective. The research suggests that MI might be effective for individuals initially resistant to change when it is combined with CBT.

Motivational Enhancement Therapy

Motivational Enhancement Therapy (MET) assists individuals in addressing their ambivalence about stopping substance abuse, as opposed to guiding the individual to recovery. The treatment is typically brief and designed to use the approach provided by Motivational Interviewing (see above) to strengthen motivation and help individuals begin to build a plan for change. People in MET are often encouraged to bring significant others to the treatment sessions. MET appears to be particularly effective for people who abuse alcohol, while evidence for its use with other drugs, such as heroin and cocaine, appears to be mixed. This type of intervention may often be the choice of the therapist rather than of the person seeking treatment, because individuals who are resistant to changing their habits typically do not seek treatment.

Contingency Management Interventions

Contingency Management involves giving people with substance use disorders tangible rewards to reinforce behaviors that promote recovery, such as abstinence from their drug of choice. The rewards can consist of vouchers that can be exchanged for money, food, or other items, or can be actual cash rewards.
Typically, this is a strictly behaviorist approach (meaning that the focus is on the person's actual behavior/actions as opposed to the person's thinking) that appears to be useful in helping individuals with substance use disorders significantly decrease or even stop using their substance of choice. Once abstinence is established, it appears that the use of CBT techniques can assist with maintenance. For instance, a study in the Journal of Consulting and Clinical Psychology found that this type of intervention was useful in reducing cannabis use and that the addition of CBT did not help initially, but was useful in the maintenance of abstinence and continued recovery following the program.

Contingency Management programs need to require objective documentation, such as a urine test, that the individual is maintaining abstinence. Thus, individuals who attempt to trick the therapist tend not to do well in these programs. These types of programs are typically very short-term and designed to help individuals curb their use or establish abstinence. They are particularly useful for individuals who are in need of a particular commodity, such as money, food, or housing, but can be tailored to use other rewards, such as public recognition of one's accomplishments.

Individuals seeking treatment can often choose between individual therapy, group therapy, or a combination of both. Almost any type of therapy that is offered in an individual format is also offered in a group format. There are various advantages to group therapy:
- Group therapy offers a chance for individuals in various stages of recovery to learn from one another.
- Group therapy offers a chance to share and to help others.
- Group therapy offers a variety of different opinions that may not receive exposure in individual treatment.
However, some individuals may not initially be suited to certain types of groups: individuals with severe psychiatric issues or other special needs (who might function better in groups of individuals with similar issues), extremely shy and withdrawn individuals, and those who are deeply embarrassed about their substance abuse and wish to remain private.

There are numerous types of support groups offered by private institutions, community organizations, and hospitals, ranging from 12-Step groups, such as Alcoholics Anonymous (AA), to specialized support groups offered by hospitals and the community in general. While there is ample empirical evidence that certain types of formal group therapy and family therapy assist in treating various substance use disorders, the research on many of the popular support groups such as AA is limited. Nonetheless, groups like AA have a very large membership and are readily available. These groups offer some advantages:
- For the most part, community support groups are inexpensive or free, whereas many forms of therapy can be expensive and may not be covered by a person's insurance.
- These groups are often held every day of the week and at multiple times during the day, making them very accessible.
- These groups offer benefits such as learning from others, feeling that one is not alone in recovery, the opportunity to develop relationships with individuals who are abstinent, and so forth.

However, many of these groups embrace philosophies that certain individuals may not find attractive, such as philosophies with religious connotations, certain "steps" that are touted as being crucial to recovery (such as confessing one's wrongdoings to other individuals), and so forth.
Since most of these groups are free and donations to support them are voluntary, individuals who are interested in recovering from addiction can attend many different groups in order to find one that fits their specific needs. Moreover, support groups offer individuals the lifelong opportunity to continue to participate in programs that help them focus on ongoing issues in their recovery. In addition, individuals can maintain contacts with others who face similar problems, whereas many of the other interventions for addiction are time-limited.

Family Therapy

Family therapy, a special form of group therapy that includes family members, has a number of specific uses in the treatment of addiction. There are many different forms of family therapy, ranging from couples therapy, in which married couples or romantically involved partners engage in therapy, to therapies that include members of the immediate and/or extended family. These therapies can apply principles from any of the major psychological paradigms and generally have solid empirical support regarding their effectiveness. A recent review article in the Journal of Marital and Family Therapy found that the overall research findings on the use of different types of family therapy were encouraging and that family therapy was especially useful in assisting the treatment of adolescents with issues relating to addiction/substance abuse. Family therapy can be very useful in situations where individuals have severe issues or dysfunction in the family, with children with substance use issues, or when a spouse or significant other wants to be involved in treatment.

These interventions certainly do not represent the entire scope of available interventions, but they represent the major categories of interventions that are useful in the treatment of substance use disorders.
The use of support groups is often helpful in the long run in maintaining lifelong sobriety and recovery, whereas formal interventions, such as CBT, represent shorter-term interventions that assist individuals in developing new habits and positive lifestyles. Recovery is a lifelong process, and most individuals will want to have ties to other individuals in recovery long after their initial treatment is completed.
The Science of Sleep in Schizophrenia

Although insomnia and excessive tiredness are familiar to most people who live with schizophrenia, they are not unique to it: at least 70% of people suffer from some sleep abnormality at some point in their lives, for various reasons. Sleep problems can be the result of a physical, mental or emotional issue that disturbs the normal rhythm of our bodies. This creates a vicious circle, because lack of sleep itself has definite physical, mental and emotional implications, so the problem becomes self-perpetuating. In this information sheet we look at the science of sleep in schizophrenia. There is more information about sleep, including some practical hints to help improve your sleep, in our general information sheet on sleep.

Unlike people with schizophrenia, healthy individuals usually have sleep disturbances due to a transient issue rather than an organic one. In this document we will look at the basic biology of sleep and wakefulness generally, then specifically at these processes in schizophrenia, their causes and abnormalities, including treatments and techniques to combat either the insomnia or the excessive daytime tiredness experienced by those living with schizophrenia. You may find it helpful to have a grasp of the biological mechanisms of how and why we sleep, how they work in a healthy brain, and how they become dysfunctional in schizophrenia. Often the reduction of what seems like an abstract concept to its physical nuts and bolts can be of valuable therapeutic benefit in accepting and combating it. We will then discuss treatments and solutions to sleep abnormalities in schizophrenia.

The Physiology of Sleep and Wakefulness

Sleep and wakefulness are physiologically interrelated; however, there are definite differences in the active brain processes and neurochemical systems involved. Alertness and brain arousal are enabled by several pathways in the brain extending from the brain stem.
The cells along this arousal pathway include neurons dedicated to the production, use and release of several important chemical messengers, or neurotransmitters, that enable rapid communication within the brain between neurons or with the rest of the body and are essential to consciousness, wakefulness and associated behaviour. Namely, they are acetylcholine, noradrenalin, serotonin, dopamine and histamine. These cells fire in a particular pattern to promote arousal; however, every 24 hours this system is inhibited during sleep. This is achieved by the actions of neurons that produce and release the inhibitory neurotransmitters gamma-aminobutyric acid (GABA) and galanin. The interaction between these two pathways operates like an electrical switch and allows the body to maintain either a stable sleep state or a stable awake state.1

Usually this switch design allows stability between sleep and wakefulness while promoting rapid transition between the two physical states. Sleep disorders indicate a malfunction of this switch, causing sleep-wake instability in which sleep intrudes on the wake state and vice versa. The switch from wake to sleep requires the inhibitory influence of GABA and galanin. These have been shown to suppress the activity caused by the neurotransmitters mentioned earlier, known collectively as monoamines, that produce the arousal state. This exchange between the monoamine arousal group of neurons and the sleep-inducing group of cells means that when the monoamine neurons are extremely active, they inhibit the sleep pathways, and when the inhibitory sleep pathways fire extensively during sleep, they block the discharge of the monoamine cell group. This is known as a 'flip-flop' or bistable circuit. The two halves of the circuit strongly inhibit each other, producing two stable activity patterns, either ON or OFF.
Any sort of intermediary state of half wake or half sleep is avoided.2 In spite of this, if either side of the circuit is abnormal, injured or weakened, instability occurs during both sleep and wake states. To promote this stability, another neurochemical called orexin exerts an influence on sleep regulation. Orexin acts mainly on the arousal pathway of neurons and only modestly affects the sleep-inducing pathways, thus promoting wakefulness.3

It is a commonly held belief that sleep is restorative; however, what is being restored is less certain. It is believed to be a process that allows the body to return to a balance. The maintenance of the body's internal balance is called homeostasis and covers the regulation of, for example, blood glucose level, fluid and electrolyte balance, hormone secretion and levels, and arterial blood gas values such as oxygen/carbon dioxide levels and blood pH. This explains why prolonged sleep deprivation usually requires a compensatory sleep or 'lie in', although the underlying mechanisms are just as unclear.4

Studies have shown that a chemical called adenosine, acting in an area of the brain called the forebrain, mediates control of the homeostatic process. Rising levels of adenosine accompany an increasing sleep debt and the need to sleep. During a prolonged period of wakefulness, the body's stored form of glucose, called glycogen, is broken down into adenosine, which consequently builds up in the forebrain; glycogen stores are then replenished through recovery sleep.5 Studies have shown that when adenosine or an adenosine receptor agonist (a chemical that mimics adenosine, binds to the receptor and activates it) was injected into the forebrain of animals, it inhibited the arousal-state cell network and stimulated the activity of the sleep pathways.
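The mutual-inhibition "flip-flop" arrangement described above can be conveyed with a toy numerical model: two populations, each of whose firing suppresses the other. This is only an illustrative sketch, not a model from the literature cited in this sheet; the rate equations, the sigmoid response function and every parameter value are invented for the example.

```python
import math

def firing_rate(drive):
    """Sigmoid response of a neural population to its net input."""
    return 1.0 / (1.0 + math.exp(-10.0 * (drive - 0.5)))

def settle(wake0, sleep0, wake_drive=1.0, sleep_drive=1.0,
           inhibition=1.0, dt=0.01, steps=5000):
    """Euler-integrate two mutually inhibitory populations until they settle.

    `wake` stands in for the monoamine arousal group, `sleep` for the
    GABA/galanin sleep-promoting group; each one's activity inhibits
    the other's input.
    """
    wake, sleep = wake0, sleep0
    for _ in range(steps):
        d_wake = -wake + firing_rate(wake_drive - inhibition * sleep)
        d_sleep = -sleep + firing_rate(sleep_drive - inhibition * wake)
        wake += dt * d_wake
        sleep += dt * d_sleep
    return wake, sleep

# Identical parameters, different starting points -> two stable end states:
print(settle(0.9, 0.1))  # wake side ends ON, sleep side OFF
print(settle(0.1, 0.9))  # sleep side ends ON, wake side OFF
# An extra orexin-like drive to the wake side flips the switch
# even when the system starts in the sleep state:
print(settle(0.1, 0.9, wake_drive=2.0))
```

Because each side suppresses the other, the same parameters produce two different stable outcomes depending only on where the system starts, which is the "ON or OFF, nothing in between" behaviour the text describes; the boosted `wake_drive` in the last call plays the role of an orexin-like stabilising input on the arousal side.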
It was also shown that the sleep pathways were further stimulated by adenosine's ability to increase the effects of the inhibitory neurotransmitter mentioned earlier, GABA. Thus, by inhibiting the arousal circuitry and stimulating the sleep pathways, adenosine acts as a regulator of the need to sleep. The sleep-inducing properties of adenosine are further increased by its effect on the Adenosine 1 (A1) receptor: it triggers what is known as a signal cascade through associated neurons, causing an increase in the production of these sensitive A1 receptors.6 It is worth noting at this point that the widely available stimulant caffeine inhibits the action of adenosine by acting as an adenosine receptor antagonist. This means it binds to the receptor but does not trigger any activity in it, therefore blocking it and preventing adenosine from binding and activating it.

A second part of sleep-wake regulation is the circadian influence. Studies have shown its role in the timing and structure of sleep. This biological clock is found in a part of the brain called the suprachiasmatic nucleus (SCN) of the hypothalamus and is termed the body's master clock. Circadian timing, in which neurons fire in a 24-hour cycle, is organised throughout the body. The SCN coordinates this rhythm based on light input from the outside world during the day and on melatonin during the dark cycle.7

Sleep and Schizophrenia

Recent sleep research has identified two schizophrenia-specific abnormalities that impact cognitive function: abnormality of circadian rhythm, and reduced 'sleep spindles' on an electroencephalogram (EEG), which measures the electrical activity of the brain. The sleep disturbances associated with schizophrenia are all too familiar to both people with schizophrenia and their doctors and often require either the short-term use of hypnotic (sleep-inducing) drugs or the choice of a particular antipsychotic because of its sedative effect.8
Sleep disturbances have been linked to the development of psychosis leading to schizophrenia, and studies have examined their contribution to cognitive impairment and analysed the success of drug treatments and psycho-behavioural therapies. Two lines of research have highlighted specific abnormalities in schizophrenia: disturbances of circadian rhythm, and changes in sleep architecture, especially sleep spindles.9

Although there are many issues relating to sleep in schizophrenia, the most difficult to treat is the tendency of people with schizophrenia to be awake when others are asleep. In medicine, this is called circadian rhythm disorder, and it occurs when the circadian sleep-wake drive from the SCN of the hypothalamus is out of synchrony with the environment. The main influence on this is light, and people with psychosis may not get very much natural daylight if they keep the curtains drawn during the day because of a perceived threat from the outside. In addition to this fear, they may also find the stimulation and hubbub of the day quite distressing and therefore prefer to be awake at quieter times. This pattern has been attributed to the effects of medication and to the negative symptoms of social withdrawal and apathy that are part of the condition; moreover, there may be a link between sleep-wake disorder and other symptoms, such as cognitive impairment, which could yield potential therapeutic interventions.10

A study of the sleep-wake rhythm of people with schizophrenia over a period of time, compared with healthy individuals, revealed that there are indeed abnormalities, with over 50% of the schizophrenia group out of synchrony with normal night times, as was the rise and fall of their melatonin levels, a biomarker for the circadian rhythm. This circadian misalignment was not attributable to clinical state, antipsychotics or everyday activities.
A study of the EEG of people with schizophrenia during sleep showed a reduction in the duration and oscillation of sleep spindles (brain wave patterns that occur during sleep) in people with schizophrenia on medication compared to those not on medication. This abnormality has been linked to cognition and learning, which did not improve after a good night's sleep; however, it is clear that more research is needed on this. The occurrence of sleep problems can also be a sign of the onset of psychosis or a relapse of symptoms.11

How Commonly-used Sleep Medications Work

Perhaps the best known and most effective are the benzodiazepines, easily identifiable by their shared suffix -azepam. The first-line drug of choice in terms of its sleep-inducing properties is temazepam. The main disadvantage of this drug, as with all drugs in this group, is that it can only be used short term, as it is physically addictive. Another disadvantage is that there is a limited therapeutic dose range; it is also a controlled drug with a street value, so practitioners may be reluctant to initiate it, and if they do, only a 7-14 day course is given. On the whole, however, temazepam is a good sleep-inducing drug that will offer relief if taken correctly. You may find that your sensitivity to different benzodiazepines varies: for example, a person with bipolar mania may find that the drug lorazepam has little or no noticeable effect, but a person with psychosis may be extremely sensitive to relatively small doses. Although lorazepam is shorter acting than other benzodiazepines, it can sedate the most agitated psychotic state.

Similar to the benzodiazepines are the z-drugs, zopiclone and zolpidem. Although originally believed to be a non-addictive alternative to benzodiazepines, they have in fact proven to be just as addictive. Their mode of action is similar to that of benzodiazepines; however, the dose range is limited and the maximum therapeutic dose is rarely exceeded due to side effects.
A further disadvantage of zopiclone is that higher doses do not simply increase the sedative effect, as they do with benzodiazepines, and can actually cause what is known as a paradoxical effect. This means higher doses have the opposite effect; in the case of both z-drugs this includes hallucinations, agitation and acute confusion. Zopiclone can also cause a metallic taste in the mouth after administration which persists the next day.

For the relief of chronic insomnia, other drugs are used whose primary indication is not sedation. The most common drugs of choice are the sedative antidepressants, including the tricyclic antidepressant amitriptyline. It should be carefully noted that amitriptyline is not much used as an antidepressant today, so the dose given for sedation is not an antidepressant dose. For sedation, the usual dose is 10-20mg at night and, although very effective, its disadvantage is a long duration of action that exceeds the normal length of time a person sleeps, causing residual sedation on and after waking. The best way to avoid this is to take it 2-3 hours before bed.

Other antidepressants commonly used for sedation are mirtazapine and trazodone. Both drugs exert their sedative effects at much lower than usual doses, and their sedation is due to their antihistamine side effect. It should be noted that lower doses of both drugs have more sedative action than higher doses. Both drugs can cause effects the next day, with trazodone exerting a more noticeable unpleasant 'hangover' and mirtazapine causing more residual sedation in the morning.

The other alternative for chronic relief of insomnia in schizophrenia is to select an antipsychotic based on its sedative properties. Probably the most potent sedative, at even very small doses, is quetiapine.
Quetiapine achieves its extremely potent sedative effect by acting as a powerful antihistamine, with a dose as low as 25mg achieving an adequate level of sedation; however, it must be remembered that the choice and dose of an antipsychotic will depend on its antipsychotic abilities, with sedation a convenient secondary benefit. Other sedative antipsychotics include olanzapine and clozapine. Emphasis must be placed on the point that an antipsychotic's ability to sedate is based on its effect on histamine receptors: only antipsychotics that block histamine receptors will cause sedation. The same is true of the sedative antidepressants.

Similarly, the antihistamine drug promethazine is available on prescription and over the counter. There are a number of other over-the-counter remedies, including the sedative antihistamine diphenhydramine (Nytol) and herbal remedies. Some healthy users find diphenhydramine an effective treatment for the relief of insomnia, but its use in schizophrenia may not be as effective; it is worthwhile trying a number of remedies until an effective treatment is found, particularly for acute episodes where sleeplessness is especially problematic. The main herbal remedies usually contain a mix of valerian root, wild lettuce, hops and passion flower. Again, their effectiveness in treating either chronic or acute sleeplessness in schizophrenia is yet to be studied; they may be effective for some and not for others.

Remember, where all medications are concerned, the best thing to do is to discuss the issue with your psychiatrist or Community Psychiatric Nurse, who will be able to assess your condition and provide you with the most appropriate treatment that promotes your recovery and wellbeing at any particular time.
Author: Rob Foster, February 2017

References:
(1-7) Neurophysiology of Sleep and Wakefulness (2008) - Jonathan R.L. Schwartz and Thomas Roth
(8-11) Sleep in Schizophrenia: Time for Closer Attention (2012) - Sue Wilson and Spilios Argyropoulos
(12-16) Schizophrenia and Sleep - Sleep Help Foundation
1 edition of Millipedes, centipedes and woodlice of the Sheffield area found in the catalog.

Millipedes, centipedes and woodlice of the Sheffield area
Statement: [edited by] J. P. Richards.
Series: Sorby Record special series -- no. 10
Contributions: Richards, J. P., Sorby Natural History Society.

A reader from Indiana wrote to us a while ago about some black "worms" she is finding in her house. We think the black worms are in fact black millipedes, so we'll use that word from now on. The black (or "nearly black", as the reader has it) millipedes are in her house, which is why they are a source of concern and frustration. The reader has already tried to get rid of them.

The three photographs on the right show one of the Snake Millipedes (top), with an example of a Flat-backed Millipede and then a Pill Millipede below. At first glance, the Pill Millipede may easily be mistaken for a Woodlouse, but there are clear differences in the antennal structure of Centipedes, Millipedes and Woodlice.

If this book had shared its pages equally between millipedes and centipedes I would have given it 5 stars. But if you really are a beginner and don't know much about millipedes and centipedes, I recommend buying this book; it's definitely worth the money.

Centipedes are generally flat and brown with long legs. Millipedes are cylindrical and have many segments with short legs. Centipedes eat other insects, while millipedes generally eat plant matter, particularly decaying plant matter and dead leaves. Centipedes tend to move quite fast, while millipedes are slow-moving. The species in this area do not harm humans; there is no need to be afraid.

Millipedes. Millipedes are often mistaken for centipedes, but millipedes have two sets of legs on each of their segments.
With so many legs, it looks as though a wave is going through its legs when it walks. They too have their own class, called Diplopoda.

Millipedes, centipedes and woodlice are around at all times of the year. This is one of the appealing things about their study; there is no real 'close season'. The fauna of the Sheffield area is further enriched by the occurrence of Carboniferous Limestone to the south-west in Derbyshire and Magnesian (Permian) limestone to the east.

Millipedes, Centipedes and Woodlice of the Sheffield Area by Richards, P. at Pemberley Books.

Millipedes are a group of arthropods characterised by having two pairs of jointed legs on most body segments; they are known scientifically as the class Diplopoda, the name referring to this doubling of the legs. Each double-legged segment is the result of two single segments fused together. Most millipedes have very elongated cylindrical or flattened bodies with more than 20 segments, while pill millipedes are shorter and can roll into a ball.

Centipedes. Within the myriapods, the centipedes belong to their own class, called chilopods. There are 8, species. The class name originates from the Greek cheilos, meaning "lip," and poda, meaning "foot." The word "centipede" comes from the Latin prefix centi-, meaning "hundred," and pedis, meaning "foot." Despite the name, centipedes have a varying number of legs.
The Millipedes, Centipedes and Woodlice of the Sheffield Area could not have been produced without the help and inspiration of Gordon Blower and other members of the (then) British Myriapod Group (BMG). This combined contribution derives from 3 main areas of influence: 1. The development of identification skills through patient mentoring 2.

It's been raining. In Massachusetts, Rhode Island and throughout our service area, we've had an exceptionally rainy summer. This June, rainfall amounts in Providence were in the top 5 rainiest Junes on record! Besides keeping you indoors and helping your garden grow, an especially rainy summer can bring a different type of pest into your RI, MA, or CT home: millipedes and centipedes.

Distribution Atlas of Woodlice in Ireland (Doogue & Harding, ). 84pp ~ download pdf here

Local or County Atlases
- Woodlice in Suffolk (Lee, P. ) ~ download pdf here
- Leicestershire Woodlice (Daws, J., ) ~ download pdf here
- Millipedes, Centipedes and Woodlice of the Sheffield Area (Richards, P., ) ~ download pdf here

Richards, J. () Millipedes, Centipedes and Woodlice of the Sheffield Area. Sorby Record Special Series No. 10. SNHS/Sheffield City Museum.
Richards, J. & Thomas, R. () Woodlice and Centipedes New to the Region. Sorby Record.
Richardson, D. () Yorkshire Millipedes. Bulletin of the British Myriapod Group 7:

The Latin names of British Millipedes - G.C. Slawson (download paper)
Colin Peter Fairhurst (): bibliography of works on Myriapoda - Compiled by P.T. Harding (download paper)
Book review: Millipedes, centipedes and woodlice of the Sheffield area by P. Richards - A.D. Barber (download paper)

Centipedes and millipedes are many-legged recycling machines, munching their way along the woodland floor. Tell them apart by their legs: centipedes have one pair of legs per body segment, while millipedes have two pairs for each segment.
Cave millipedes: Millipedes like cool, moist places. Well, few places fit that description better than a cave. These types of millipedes live in caves throughout the United States. Cave-dwelling millipedes vary drastically from cave to cave. Also, most have adapted. Centipedes, millipedes, sow bugs, and pill bugs or roly-polys are unusual arthropods. Sow bugs and pill bugs are actually crustaceans (related to shrimp, crabs, and lobsters). None of these pests transmit diseases to plants, animals, or humans. They don't damage furnishings, homes, or food -- but they can frighten people. Millipedes Some folks confuse millipedes with centipedes. These two. Millipedes, Centipedes and Woodlice of the Sheffield Area, Sorby Record Special Series No. 10 SNHS and Sheffield City Museum, Sheffield ISSN Manual of Natural History Curatorship: Chapter 9: “Health & Safety in Natural History Museums”Title: More Data4Nature, Ecological. Centipedes and millipedes are close relatives of insects, but they are not insects. Centipedes belong to the class Chlopoda, not Insecta; millipedes belong to the class Diplopoda, not Insecta. Centipedes look like segmented 1-inch worms with 30 or more legs. They are brown, flattened, have a distinct head, and one pair of jointed legs per segment. Millipedes. Body: Up to 1 1/2 inches long (except the Beauvois species found in Texas that can be up to 4 inches in length). Legs: Millipedes have two pairs of legs on each body segment. Their legs are shorter in relation to the body, so they look more like worms than do the centipedes. Color: Brown to black, rounded body. Food: Organic material and some young plants. This was just a brief look at the major differences between centipedes and millipedes. However, there are several facts about centipedes and millipedes that show the similarity between these organisms, both of which belong to the family of arthropods. 
So, the next time you spot a wriggly creature that you think might be a millipede or a centipede, just consider the points we discussed. The defensive mechanism of the pill woodlouse is very recognisable - it curls itself into a tight ball, only showing its plated armour to its attacker. It is an important. Centipedes (from the New Latin prefix centi- "hundred", and the Latin word pes, pedis, "foot") are predatory arthropods belonging to the class Chilopoda (Ancient Greek χεῖλος, kheilos, lip, and New Latin suffix -poda, "foot", describing the forcipules) of the subphylum Myriapoda, an arthropod group which also includes millipedes and other multi-legged creatures. Buy An Introduction to Centipedes, Millipedes and Woodlice by Richards, Paul (ISBN: ) from Amazon's Book Store. Everyday low prices and free delivery on eligible : Paul Richards. Centipedes and millipedes are many-legged relatives of insects (figure 1). Their outer body covering is so thin that it does not provide much protection from desiccation. This normally restricts these species to dark, moist places under landscape mulch and in compost piles. Sometimes centipedes and millipedes accidentally wander into homes. - Explore Creepy Crawly Creatures's board "Millipedes" on Pinterest. See more ideas about Millipede, Arthropods, Creepy crawlies pins.Millipedes and Centipedes have the following differences according to : 1. Differences in classification. The classification of millipedes and centipedes are same because they both are the part of phylum arthropods. Although all the insects are part .Classes Diplopoda and Chilopoda – Millipedes and Centipedes. The h ouse centipede pictured here is considered harmless to humans or their pets. Size: Body = 25mm, overall = 7cm. Centipedes are some of the oldest terrestrial animals, and some of the very first creatures to crawl from the sea onto the land were probably very similar in appearance to modern centipedes.
Periods stay outside of parentheses when the parenthetical material consists of an incomplete sentence or list. If parentheses enclose a whole sentence, place the period inside the closing parenthesis.

Do periods go inside or outside of parentheses?

Periods and parentheses are two of the most basic punctuation marks to master in the English language. We use periods to mark the end of a sentence and parentheses to insert additional content. But since we inevitably use periods and parentheses within the same sentence, it's essential to learn how to use them correctly.

Two primary rules for using parentheses and periods in the same sentence

#1. Place periods inside of the parentheses when the parenthetical material consists of a complete sentence. In this case, parenthetical sentences do not occur within another whole sentence.

- "Feed the cats twice a day and no later than midnight. (They run away if you're inconsistent on time.)"
- "Please have the dogs go inside the house at night. (Just don't let them sleep on the bed.)"

#2. Use periods outside of parentheses when the parenthetical material consists of a dependent sentence clause or list.

- "I expect students to turn in their assignments completed, edited, and on time (I make exceptions for emergencies)."
- "All assignments should include bibliographies with MLA formatting (not APA)."

What are periods?

Grammarians refer to periods as "terminal" or "strong punctuation marks" because they mark the end of a sentence or a "full stop." Unless you end a sentence with a question mark or exclamation point, all sentences must end with a period.

- "This is an example sentence." (For independent clauses and quotes, always enclose terminal punctuation with a closing quotation mark or parenthesis.)

The only exception for terminal punctuation occurs when a sentence ends with a formal abbreviation or special character.
- "For returns, please ship your headphones to Beats Electronics, LLC."
- "We have a flight to catch at 4:30 a.m."
- "Tonight, we are watching Who's Afraid of the Dark?"
- "Stay tuned for a new episode of Don't Look!"

What about the ellipsis?

The ellipsis consists of three periods (also known as dot-dot-dot). Often found within newspapers to save printing space, an ellipsis formally conveys that part of the sentence or quote is missing from the original statement. If an ellipsis occurs at the beginning of a quote, the whole sentence started before the quoted phrase began. Likewise, a terminal ellipsis implies that the whole statement continues for a while longer. For example,

- "The two had been approached by television and movie executives in the past year…"
- "…Ms. Johnson wanted Mr. Stanton to tell her story."

What are parentheses?

Parentheses are round brackets that we use to provide in-text citations, lists, or side-notes to a sentence (like this, for example). As you might have noticed, parentheses consist of two brackets:

- Opening parenthesis: (
- Closing parenthesis: )

The standard rule for parentheses is that they all must open and close. Writing a sentence without an opening and a closing parenthesis is like writing a sentence without a period.

Can we use commas with parentheses?

Parentheses are tricky because they have specific grammar rules for other punctuation marks. For example, we never use a comma before an opening parenthesis, but we can use a comma after the closing parenthesis when necessary. For example,

Correct: "I'm a complete sentence (a fun one at that), but I'm also an example."
Incorrect: "I'm a complete sentence, (a fun one at that), but I'm also an example."

Like many sentences with parenthetical clauses, we can use parentheses similarly to commas or em-dashes.
In fact, we could have written, "I'm a complete sentence, a fun one at that, but…" or "I'm a complete sentence–– a fun one at that–– but…" However, using parentheses or em-dashes too often is distracting for readers. If adding accessory content is important for tone, try reconstructing the sentence into two statements or incorporating footnotes or endnotes.

If we include a parenthetical list inside of a sentence, we always use commas to separate list-objects. For example,

- "We visited several state schools (such as Oregon, Washington, Idaho, and Arizona), but we decided to go with online education."
- "Your natal birth chart uses your birth city, time, and date to analyze the composition of astral bodies (e.g., Venus, Mars, Saturn, etc.)."

Now, you might notice how we use periods inside the parentheses while abbreviating "e.g.," and "etc." As long as the abbreviations are necessary and correct, there's no problem using periods within any parenthetical list or statement. In our case, "e.g." means "exempli gratia" ('for example') and "etc." means "et cetera" ('and other things').

Other types of punctuation marks within parentheses

Outside of periods and commas, we can use question marks or exclamation points (or "exclamation marks" in British English) wherever necessary inside parentheses. For example,

- "My parents took us to Italy (which I loved!), where we visited family."
- "Marilyn Monroe never won an Oscar (I don't think?), but she is one of the most recognizable actresses in the world."

In either case, the question mark and exclamation point are exclusive to the parenthetical clause, and the rest of the sentence should end with a period.

When to use parentheses?

The three most common ways to use parentheses are to insert accessory information into your writing, provide an abbreviation for a long, formal title, or provide a citation for quoted material.
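The earlier rule that every parenthesis must open and close can be checked mechanically. As a rough illustration (a hypothetical sketch, not part of the original article), a simple counter that scans the text is enough:

```python
def parens_balanced(text: str) -> bool:
    """Return True if every '(' in text has a matching ')'.

    A counter suffices: it must never go negative (a ')' before
    its '(') and must end at zero (no unclosed '(').
    """
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a ')' appeared before its '('
                return False
    return depth == 0

print(parens_balanced("I'm a complete sentence (a fun one at that)."))  # True
print(parens_balanced("An unclosed aside (like this one."))             # False
```

The same counter idea extends to square brackets or braces if you track each pair separately.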
Use parentheses to add accessory information

We call parenthetical sentences "accessory" because we should be able to remove the bracketed clause without obscuring the sentence's meaning. For example,

With parenthetical material:
- "I now have 50 plants (and yes, they are my "babies"), as opposed to the 72 that stood before the windstorm."

Without parenthetical material:
- "I now have 50 plants as opposed to the 72 that stood before the windstorm."

As shown above, we can remove parenthetical information without eliminating the meaning of a sentence. However, we can also use parentheses to clarify terminology or provide formal, scientific names. For example,

- "The horticulturist is selling the philodendron ('Pink Princess') for $350."
- "People say the 'Pink Princess' (Philodendron erubescens) is rare, but it's not."

There are also times when parenthetical sentences occur outside of a complete sentence. For example,

- "I began homeschooling the kids on September 12. (Teaching is harder than it looks.)"

For this particular example, it's correct to treat the parenthetical material as an independent clause and use a period inside the brackets. However, using parentheses in this manner is somewhat uncommon for formal writing. To write more fluidly, we recommend structuring the sentence as such:

- "I began homeschooling the kids on September 12 (which is harder than it looks)."
- Note: Since the second example now consists of a dependent clause (an incomplete sentence), the period goes after the closing parenthesis to mark the end of the entire sentence.

Use parentheses for in-text citations

The second most common use of parentheses involves in-text citations, a formal writing practice that adheres to academic style guides like the American Psychological Association (APA) Publication Manual or the Modern Language Association (MLA) Handbook.
APA and MLA style guides require in-text citations for formal writing to avoid plagiarism, direct audiences to sources within a bibliography, and add credibility to your writing. Regardless of the type of information you cite, all in-text citations occur at the end of a sentence, with the period placed after the closing parenthesis. For example,

APA: "A complete sentence with sourced information (Surname, 2020, p. 201)."
MLA: "A complete sentence with sourced information (Surname 201)."

While citing direct quotes, APA and MLA formats allow writers to parenthesize page numbers alone from the second mention onward (just make sure your citations appear regularly for the same source). For example,

APA: "A complete sentence with the second mention of a source (p. 201)."
MLA: "A complete sentence with the second mention of a source (201)."

Most APA citations parenthesize the author's last name and year of publication (separated with a comma). But if you include a direct quote, be sure to add the abbreviation "p." for "page" with the page number. If you reference a range of pages, use "pp." and separate the page numbers with an en-dash.

Basic in-text citations for APA:
- The first mention of a source: (Surname, 2020).
- Direct quote from one page: (Surname, 2020, p. 201).
- Direct quote from a page range: (Surname, 2020, pp. 201–202).
- Subsequent mentions of a source: (p. 201) or (pp. 201–202).

MLA format omits commas and page abbreviations for in-text citations while parenthesizing the author's last name and page number(s). Additionally, if you cite more than one source at a time, separate each citation with a semicolon (see below).

Basic in-text citations for MLA:
- First mention or direct quote from a source: (Surname 201).
- Subsequent mentions or direct quotes from the same source: (201).
- Citing multiple sources: (Surname 201; Surname 401).

How we cite information also depends on the source media (e.g., television shows, essays, books, dictionaries, websites, etc.).
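The basic in-text patterns above are regular enough to generate mechanically. As a hedged sketch (the function names and example data are hypothetical, not from the article), two small Python helpers following the APA and MLA patterns described here:

```python
def apa_in_text(surname, year, page=None, page_end=None):
    """APA style: (Surname, 2020), (Surname, 2020, p. 201),
    or (Surname, 2020, pp. 201-202) for a page range."""
    cite = f"({surname}, {year}"
    if page is not None and page_end is not None:
        cite += f", pp. {page}\u2013{page_end}"  # en-dash between pages
    elif page is not None:
        cite += f", p. {page}"
    return cite + ")"

def mla_in_text(surname, page):
    """MLA style omits the comma and the page abbreviation: (Surname 201)."""
    return f"({surname} {page})"

print(apa_in_text("Garcia", 2020, 201))       # (Garcia, 2020, p. 201)
print(apa_in_text("Garcia", 2020, 201, 202))  # (Garcia, 2020, pp. 201–202)
print(mla_in_text("Garcia", 201))             # (Garcia 201)
```

A real citation manager handles many more source types, but the shape of the two formats is exactly the contrast the lists above describe.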
MLA requires nuanced source citations, so make sure to check out Purdue OWL's citation guide if you don't have an updated MLA Handbook.

Use parentheses for acronyms and abbreviations

Lastly, we can use parentheses to disclose official acronyms or abbreviations for titles or proper names. To avoid writing the same title throughout your work, introduce the full title with an official abbreviation in parentheses, and then use the acronym from there on out. For example,

First mention: "Senators disclose the 2022 budget for the National Aeronautics and Space Administration (NASA)."
Subsequent mentions: "NASA prepares to negotiate the budget to meet their financial needs for 2022."

For standard abbreviations, such as "Pacific Time," enclose "PT" with parentheses when necessary. For example,

- "The award show begins at 8:30 p.m. (PT)."

Special cases for parentheses

Parentheses are also necessary for writing chronological lists or phone numbers. For instance, most people parenthesize their area code and use hyphens to separate the local digits of their phone number. For example, "(971) 971-9701."

Chronological lists also use parentheses to enclose numbers, Roman numerals, or letters, and they always begin with a colon, use commas, and end with a period. For example,

- "To bake cookies: (a) preheat the oven to 350 degrees, (b) grease the cookie tray with shortening or butter, (c) position the cookies on the tray three inches apart, (d) bake the cookies for approximately 12 minutes or until brown."

Feeling ready to master periods and parentheses for your writing? Challenge yourself with the following multiple-choice questions.

- True or false: Periods go outside of parenthetical sentences when they consist of a whole sentence.
- Which of the following are not forms of terminal punctuation? a. Question mark c. Exclamation point
- Periods do not terminate sentences that end with a _____________. b. Formal abbreviation c. Question mark d.
Exclamation point
- Which of the following punctuation marks can work similarly to parentheses? d. A and C
- Which of the following do we not use parentheses for? a. Area codes b. Side notes d. None of the above

Sources:
- "Dashes & Parentheses." Center for Academic Success, University of Illinois, Springfield, 2020.
- "Dashes, Parentheses, Brackets, Ellipses." Writing Associates Program, Swarthmore College, 2020.
- Garcia, S.E. "Tanqueray, Humans of New York Star, Brings in Over $1.5 Million in Donations." The New York Times, 28 Sept. 2020.
- "How to use parentheses and brackets ( ) [ ]." Lexico, 2020.
- "In-text citations: the basics." Purdue Online Writing Lab, Purdue University, 2020.
- "MLA in-text citation: the basics." Purdue Online Writing Lab, Purdue University, 2020.
Let's Disambiguate Some Terms by Cam N. Coulter Posted on April 23, 2021 accessibility You know what I love? Context. I can honestly get quite excited about it sometimes. I appreciate when folks pause to provide some context for a discussion, define and disambiguate their terminology, and make sure everyone is on the same page. In that spirit, before I progress with my 100 Days of A11y blog series, I want to take a moment to do just that. There are several different theoretical models that people use to understand what disability is, and how you define disability is influenced by which model(s) you subscribe to. In my next post, I’ll explore these different models in more detail. Personally, I tend to view disability through the social model and define disability as a mismatch between a person’s capabilities and what their environment requires. For example, a deaf person is disabled when a video they are watching fails to provide captions, but they are not necessarily disabled while going about their everyday life as a deaf person. Accessibility is an attribute, a reflection of how accessible something is, of whether different people (with varying capabilities) are able to use it (and to some degree, how easily they are able to use it). I think, strictly speaking, accessibility refers to whether someone is able to use something at all (sort of like a binary on/off switch), while usability refers to how easily and intuitively someone can use something (sort of like a zero to one hundred slider). In practice, I think usability considerations are often intertwined with accessibility considerations, and they may not always be easy to separate. I almost described accessibility as a feature a moment ago, rather than as an attribute, but I don’t think that’s the best word choice, for a couple reasons. First, accessibility should not be viewed as an optional, nice-to-have feature. That’s ableist. 
Second, a product’s accessibility really is a sliding scale, from accommodating only a small number of people on one side to accommodating every possible person on the other. In this way, accessibility isn’t a feature that something has or doesn’t have. Rather, it’s a fundamental attribute of anything we create, with many different possible values ranging from “barely accessible to anyone” to “accessible to people with certain impairments but not others” and finally to “accessible to everybody!” I think the word “attribute” reflects this reality better than the word “feature.” Accessibility is often abbreviated as a11y because there are eleven characters between the “a” and the “y” in the word “accessibility”. Accessibility applies to both physical and digital spaces. I’ve noticed that the abbreviation a11y more often tends to be associated with digital and specifically web accessibility. I’ve also noticed that people tend to use the phrase “accessible built environments” to refer to accessibility in physical, real-world spaces. Personally, I think that’s a little confusing, because digital, virtual spaces are also “built,” albeit in a different way. Accessibility is related to inclusive design. Inclusive design is a method, a way of approaching design that is proactively mindful and respectful of human differences across many axes. If someone practices inclusive design, they will be mindful of people’s wide array of varying abilities and carefully design accessible products. An inclusive designer will also pay attention to considerations beyond accessibility. For example, as a nonbinary person, I am frequently infuriated by web forms that assume gender is binary (male or female). While this isn’t necessarily an accessibility consideration, it is something that an inclusive designer would care about. Accessibility is also a professional field. 
The International Association for Accessibility Professionals (IAAP) is a professional organization that represents, supports, and champions this field. Within the field of accessibility, there are many different roles you might have. Accessibility professionals may audit and remediate websites, documents, and apps, to ensure that they are accessible to people with disabilities. Accessibility professionals may also work as designers, developers, or product managers, proactively ensuring that products are built accessibly from the get-go. Accessibility professionals may also work as trainers or consultants, supporting others in learning about and creating accessibility. Accessibility professionals are often guided by an ethic of universal design and seek to create things that are natively accessible to as many people as possible. Finally, accessibility professionals frequently stay up-to-date on relevant laws, regulations, and standards, and some accessibility professionals trained in law may specialize in accessibility laws and disability civil rights. For some people, accessibility is their whole career path. For others, it’s just a part of what they do. It’s helpful to have some people who specialize in accessibility, but it’s also important to have many others who are aware of accessibility fundamentals and who can help create accessible experiences, even if it’s not their core responsibility. There are a few other distinct professional fields that relate to disability and accessibility. First, there is the world of specialized assistive technologies for persons with disabilities. This is related to but distinct from accessibility. Accessibility professionals create things that can be accessed by assistive technology. For example, a web accessibility specialist will focus on creating websites that can be accessed through screen readers or switch controls. 
However, assistive technology professionals focus on actually creating those screen readers or switch controls to begin with. Assistive technology professionals may also help people with disabilities learn about and learn to use assistive technology, depending on their exact role. Folks who are immersed in the assistive technology world are more likely to be members of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA), rather than IAAP. Another distinct but related field is occupational therapy. Occupational therapists (OTs) help people improve their fine and gross motor skills so that they can more independently perform tasks they want and need to do. OTs work with people with disabilities and other people as well, such as those recovering from certain injuries or who have been diagnosed with certain medical conditions. OTs may also help make modifications to a person’s environment or help a person learn to use assistive technologies. OTs are likely to be members of the American Occupational Therapy Association (AOTA). I’ve also noticed that some OTs are members of RESNA. Here is a link to an interesting article I found about occupational therapists and assistive technology engineers working together. There is also a universe of people whose job it is to support people with disabilities in academic institutions. Teachers, paraprofessionals, and others who work in disability services may support students in K–12 schools. Those who work in higher education might be members of the Association on Higher Education and Disability (AHEAD). AHEAD members are a diverse group of people, some of whom may work in disability resources, others in IT, and others might work as ADA/508 coordinators or have other roles on campus. Those with a background in rehabilitation counseling might work in disability resources, or the Office of Accessible Education, as it’s now known at my alma mater. 
Others on campus might work to facilitate accommodations for students with disabilities, make assistive technologies available to those students, create accessible curricula materials, train faculty and others on campus on accessibility, or otherwise support the university’s accessibility policies and infrastructure. Who exactly has these roles and how they are carried out can vary by institution. One last group that I want to shout out are direct support professionals (DSPs). DSPs support people with disabilities (I think most frequently those with intellectual and developmental disabilities) in a wide array of contexts, including work, hobbies, community life, and activities of daily living. The relevant professional organization here is the National Alliance for Direct Support Professionals (NADSP). I worked as a DSP for a couple years in a few different roles (residential, day services, and supported living), and I’ve seen the great work that many of these professionals do on a daily basis. This field is painfully undervalued by society as a whole, but that’s a topic for another post. When it comes to accessibility and assistive technology, DSPs aren’t expected to be experts, but they typically have a lot of experience with accessibility barriers and adaptive strategies. Depending on whom they support, DSPs may also be more knowledgeable about assistive technologies and might support persons with disabilities in using their assistive tech. What’s the point here? There are many professional fields, career paths, and job roles when it comes to disability, accessibility, and assistive technology. While there are similarities and overlap between all of these roles, I think these are nonetheless distinct fields, each with their own expertise, experience, and professional competencies. Depending on where you enter from, you may or may not be aware of all these fields and roles. I am working to become an accessibility professional, but I started out as a DSP. 
It’s taken me time to be able to write this blog post, to be aware of and be able to tease out the differences between all these roles. For a person who enters from another point (design or engineering, for example), it may take them a while to become aware of the roles that OTs and DSPs play. That’s part of why I mentioned so many professional organizations in this blog. Discovering those organizations helped me understand what all these distinct fields are, and it also helped me learn more about each one. If one of those organizations or fields is new to you, I’d encourage you to check out their website and learn a little more about it. I think that can help make us all more effective allies and advocates for disability justice. Image created by Cam Coulter. Icon by mikion on the Noun Project.
Evolution's Radiometric Dating Methods: Are they accurate?

Although many things about a rock can be measured, its age cannot be directly measured. Radiometric dating techniques rely upon assumptions. To help you understand the reality of radiometric dating, think of it like this: if you were asked to find out when a candle burning on a table was lit, and you weren't there, all you can find out is either the height of the candle or how fast it is currently burning, by measuring the candle for a while. So all we have is the height of the candle and the rate at which it is currently burning. You still cannot figure out when it was lit, unless you make some assumptions:

- How tall was the candle to begin with?
- Has it always burned at the same rate?

Neither of those can be known. If you find a fossil in the dirt, the amount of carbon can be measured, and the rate of decay can be determined. We don't argue with either of those. But we must make some assumptions:

- How much was in it when it lived?
- Has it always decayed at the same rate?
- Has it been contaminated sitting in the ground for all these millions of years?

Decay rates have been increased in the laboratory by factors of billions of times! There are several lines of evidence that decay rates have been faster in the past. For example, with carbon dating, we know that:

- A decrease in the earth's magnetic field increases C-14.
- Increased CO2 (carbon dioxide) levels decrease C-14. (So we can assume that the ratio decreased during the industrial revolution, due to factory generation of CO2, and during other such natural events.)

Therefore, using an assumed constant ratio for dating inevitably results in inaccurate radiocarbon readings. And there is no way to prove that the decay rate was not different at some point in the past. These two problems alone, in reality, clearly call into question the validity of virtually any dates assigned to fossils.
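The candle argument can be made concrete with the standard exponential-decay formula, under which the inferred age is t = half_life × log2(N0/N). The sketch below is illustrative only (the function names and sample numbers are hypothetical, not from the source); it shows that the same measurement yields different ages under different assumed starting amounts, and how quickly C-14 vanishes over many half-lives.

```python
import math

C14_HALF_LIFE = 5730  # years, as cited in the text

def inferred_age(measured, assumed_initial, half_life=C14_HALF_LIFE):
    # Standard decay law rearranged for time: t = half_life * log2(N0 / N).
    # The answer depends directly on the assumed initial amount N0.
    return half_life * math.log2(assumed_initial / measured)

def fraction_remaining(years, half_life=C14_HALF_LIFE):
    # After each half-life, half of the remaining C-14 decays.
    return 0.5 ** (years / half_life)

# Same measurement, two different assumptions about the starting amount:
print(inferred_age(measured=25.0, assumed_initial=100.0))  # 11460.0 (two half-lives)
print(inferred_age(measured=25.0, assumed_initial=50.0))   # 5730.0 (one half-life)

# Fraction of C-14 left after long spans of time:
print(fraction_remaining(57_300))     # ten half-lives: about 0.001
print(fraction_remaining(1_000_000))  # roughly 175 half-lives: effectively zero
```

Whatever one makes of the article's broader claims, the arithmetic itself is uncontroversial: halving the assumed starting amount halves the number of inferred half-lives.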
Now that we know how radiometric dating actually works, let's see the evidence of its inaccuracy and unreliability with two major radiometric dating methods.

Carbon-14 actually decays so quickly—with a half-life of only 5,730 years—that none is expected to remain in fossils after only a few hundred thousand years. Carbon-14 shouldn't even be used to date anything over a few thousand years, since its half-life is only 5,730 years! And Carbon-14 has been detected in "ancient" fossils—supposedly up to hundreds of millions of years old—ever since the earliest days of radiocarbon dating. It's found in everything! If radiocarbon lasts only a few hundred thousand years, why is it found in all the earth's diamonds dated at billions of years old?

This range is lowered even more by the calculations of other reputable secular scientists, including Robert L. Whitelaw (1968), who believed that C-14 could date no farther back than 5,000 years. Dr. Walt Brown (2008) has presented recent data showing that C-14 dating becomes unreliable after about 3,500 years. The bottom line is that C-14 dating is clearly useless for dating so-called prehistoric life forms.

Despite these, there are many other cases that illustrate just how inaccurate and unreliable radiocarbon dating can be. The following striking examples are just the tip of the iceberg:

- Mammoth Dating Inconsistencies. A fossilized baby mammoth nicknamed Dima was dated by Dr. Brown (2008). The radiocarbon dating indicated that one section of Dima's body was 40,000 years old, while another part was 26,000 years old. To make matters worse, C-14 dating determined that the wood that enveloped Dima was only 9,000 to 10,000 years old, when all materials that appear together with a fossil are, by definition, the same age as the fossil!

- Young Dinosaur Fossils Rejected.
After C-14 dating a dinosaur fossil, the Oak Ridge National Laboratory, located near Knoxville, Tennessee, indicated that the dating results showed the fossil to be just a few thousand years old, not millions. Not wanting to abandon their preconceived notion that dinosaurs have not existed for the past 65 million years, however, the evolutionary researchers dismissed the results as invalid. This is not an isolated case: scientists often reject dating results that do not fit their theories.

- Blind Dating. In a "blind" sample C-14 test in 1990, researchers provided fossilized dinosaur bones to the University of Arizona's Laboratory of Isotope Geochemistry without indicating what kind of animal they were dating. The results, announced on August 10 of that year by the University's Professor of Geosciences, showed that what were actually allosaurus bones were only about 16,000 years old instead of their official 140-million-year age!

- Inconsistent Dates By Far. In Geological Survey Professional Paper 862, they carbon dated sample #SI454 and said it was 17,210 years old, plus or minus 500. They then tested a different sample, #SI455, and said it was 24,140 years old. Then they found out that the second sample was actually the same sample, #SI454, tested again. So is it 17,000 or 24,000? The same mistake happened again: sample #299 was claimed to be less than 20,000 years old, and sample #L136 greater than 28,000. They then found out it was the same sample as #299. How can a sample be less than 20,000 and greater than 28,000 years old at the same time?

- Known Dates Inaccurate. Living penguins have been dated as 8,000 years old. Material from layers where dinosaur bones were found has been carbon dated at 34,000 years old. A freshly killed seal was 1,300 years old when they carbon dated it. Living snails have been carbon dated at 27,000 years old. They tested a living mollusk, a clam, and it was 2,300 years old. It was still alive.
(Earth’s Most Challenging Mysteries, by R. Daly, 1972, p. 280; Science vol. 141, 1963, pp. 634-637, M. Keith and G. Anderson)
- The Earth Age Inconsistency. Back in 1770 they taught the earth was 70,000 years old. In 1905 they said it was 2 billion years old. By 1969, they went to the moon, brought back moon rocks and said: “Oh, they're 3.5 billion years old.” That was the official age; 3.5 billion. Today they say it's 4.6 billion years old. The list goes on...

Another thing you may not have known is that carbon dating was not invented until 1949! So when they started teaching the earth is billions of years old, back in 1830, they didn't teach it because of carbon dating. They'd never thought of carbon dating. It had never been heard of. Why were they teaching the earth is billions of years old 160 years ago? Because they needed billions of years to make their theory look good.

- Volcanic eruption of Mt Etna, Sicily in 122 BC: the basalt dated 170,000–330,000 years old
- Volcanic eruption of Mt Etna, Sicily in AD 1972: the basalt dated 210,000–490,000 years old
- Volcanic eruption of Mt St. Helens, Washington in AD 1980: the rocks dated 300,000–400,000 years old
- Volcanic eruption at Hualalai basalt, Hawaii in AD 1800–1801: the rocks dated 1.44–1.76 million years old
- Volcanic eruption of Mt Ngauruhoe, New Zealand in AD 1954: the rocks dated 3.3–3.7 million years old
- Volcanic eruption at Kilauea Iki, Hawaii in AD 1959: the basalt dated 1.7–15.3 million years old

(Andrew Snelling, “Excess Argon: The ‘Achilles’ Heel’ of Potassium-Argon and Argon-Argon Dating of Volcanic Rocks,” Impact #307, Institute for Creation Research)

- Wild dates are always obtained with carbon dating or potassium-argon dating.
- Dates that don't fit the theory are rejected.
- Only the ‘correct dates’ get published [that match the geologic column].
- The original content cannot possibly be known.
- You can't know that there's been no contamination.
- You can't know that the decay rate has always remained the same.

To be considered credible, radiometric dating would have to be scientifically sound and consistently accurate. As we have just seen, however, it is riddled with scientific flaws and endless examples of inaccurate measurements. Therefore, it is no more valid than the geologic column for determining when dinosaurs lived. In 1970, at the Nobel Symposium, an evolutionist said: “If a C14 date supports our theories, we put it in the main text. If it does not entirely contradict them, we put it in a footnote. And if it is completely ‘out of date’, we just drop it.”

- 100 Year Cover-Up Revealed: We Lived With Dinosaurs! (book), by James Edward Gilmer
- Andrew Snelling, “Excess Argon: The ‘Achilles’ Heel’ of Potassium-Argon and Argon-Argon Dating of Volcanic Rocks,” Impact #307, Institute for Creation Research
- M. Keith and G. Anderson, Science vol. 141, 1963, pp. 634-637
- R. Daly, Earth’s Most Challenging Mysteries, 1972, p. 280
- T. Save-Soderbergh and I.U. Olsson (Institute of Egyptology and Institute of Physics respectively, Univ. of Uppsala, Sweden), “C-14 dating and Egyptian chronology,” in Radiocarbon Variations and Absolute Chronology: Proceedings of the Twelfth Nobel Symposium, New York, 1970, p. 35
Extreme weather is bringing anguish and grief to an already precarious way of life.

October 5, 2021

Nikiko Masumoto grew up revering the peach trees and grape vines on her family’s farm in California’s Central Valley. The orchard and vineyard have been passed down through her Japanese American family for generations and their fruits were the juicy economic engines that fed her community and assured the farm’s survival. But this year, there’s anguish in the peaceful groves as record-breaking heat waves, air-polluting wildfires, and droughts repeatedly pummel California. Warmer winters and more severe droughts spell poorer fruit sets and smaller fruit. And Masumoto, who returned 10 years ago to farm with her father, author and well-known farmer (and Civil Eats advisory board member) Mas Masumoto, will be responsible for transforming the farming operation so it remains viable into the future. It’s a calculus that likely includes using much less water and replacing some or all of the farm’s beloved peaches and grapes with other crops. “We will need to adapt, even if it means the painful reality that I might not get to leave this living cathedral of memory—the orchards—to a next generation,” said Masumoto. “If it comes to it, I fear the weight of that grief.” As climate change-fueled extreme weather events such as storms and droughts become more frequent and intense, farmers and others in the agriculture community across the country are increasingly feeling the brunt and contemplating a dark future. Beyond the inherent stress of farming, they face anxiety, depression, and grief linked to a fast-changing natural environment on which they’ve staked their livelihoods—at a time when few mental health-related resources are available to them.
“The weather has become a more dominant factor in farmers’ stress than it was in times past,” said Mike Rosmann, an Iowa farmer and agricultural psychologist. “We’re seeing more concern. Even the farmers who are climate deniers say spring is coming earlier than it used to or are seeing longer periods without rainfall.” This year is proving to be one more in a series of disastrous years for farmers. Intense heat waves have ravaged the western U.S.—from Washington state to California and Arizona—and most of the region is experiencing extreme or exceptional drought conditions, leading to severe irrigation water restrictions, farmers fallowing fields, and ranchers culling cattle they can no longer feed. Mega-fires across the West have destroyed crops and infrastructure. Drought is also spreading in the Northern Plains and the Midwest, putting key commodity crops such as corn, wheat, and soybeans at risk. And in the Northeast, producers have seen repeated heavy rains this summer, and post-Hurricane Ida flooding imperil crops and food distribution networks. These ongoing, often long-term disasters are impacting farmers’ well-being, experts say. The farmer crisis hotline run by Farm Aid (1-800-FARM-AID or through an online form) has seen a significant increase in calls related to “natural disasters that are exacerbated if not caused by climate change,” said Jennifer Fahy, the group’s communications director. For Lori Mercer, a Farm Aid hotline operator, several recent calls come to mind. An older California rancher called to say he had woken up one morning to take care of his livestock—but when he opened up his well, nothing came up but sand. He couldn’t afford the $15,000 to $30,000 it would take to drill a new well, Mercer said. Another call came from the western region of the U.S.: a producer’s entire farm, including his farmhouse and all of his crops, had burned down in a raging wildfire. His plea to the hotline, Mercer said, was elemental: He needed help finding emergency housing. And a more recent call from a farmer in one of the southeastern states devastated by Hurricane Ida revealed another desperate situation: livestock missing and/or killed, crops ruined, all of the fences, the power, and the computer down, and all crops in the freezer and fridge storage spoiled. “It’s terribly hard for farmers to talk,” said Mercer, who stressed that the calls are fully confidential. “And the calls we get are just the tip of an iceberg. Most don’t reach out because of their streak of independence and pull-yourself-up-by-the-bootstraps mentality.” Farmers calling the hotline get to vent about their experience to supportive listeners and often get help crafting a plan of action, Mercer said. They receive referrals to local organizations in their county or state that can help them address the crisis on the ground and support them in its aftermath. Farm Aid also links farmers with a slew of resources and sends out $500 emergency checks to help the farmers with bills such as household expenses and food. (It can take up to six months to two years to get help through a relief program, said Mercer.) But in recent years, in response to mounting calls for help related to the climate, Farm Aid has shifted to organizing workshops that can proactively help farmers address the climate crisis. The workshops focus on how farmers and ranchers can become more resilient to future disasters by implementing sustainable methods of farming such as rotational grazing, soil regeneration, and habitat restoration. Others train farmers on how to document their losses and apply for federal financial relief.
The increase in climate-related disasters and calls for help is also forcing the organization to reframe the very idea of disaster relief, said Fahy, the communications director. In the past, isolated natural disasters motivated giving. But in recent years, getting the public interested in giving money to a group of farmers facing a localized crisis is more challenging, she said, given that such weather events have become “the new normal.” “How do we raise public awareness and ask for support when the disasters are a constant, ongoing extreme situation?” Fahy said. Climate change is especially stressful for beginning farmers who must find a way to continue farming and remain profitable for decades to come. For Nikiko Masumoto, whose family grows organic peaches, nectarines, and grapes on 80 acres 15 miles southeast of Fresno, the pressure and potential losses are significant. Already, the Masumoto family has pulled some vines and trees to fallow land because of dwindling groundwater reserves and a lack of rain and snow that in the past fed surface water sources. They have reduced their irrigation by 20 to 30 percent, leading to smaller peaches, which are more difficult to sell. The family is looking at planting more drought-resistant perennial crops such as fig or olive trees—or even annual vegetable or grain crops, Masumoto said. This would be a radical change, but it might be necessary. And as she’s struggling with the weight of the decision, she remembers the resilience of her jiichan (grandfather) who was imprisoned during World War II in a Japanese-American concentration camp and later returned to Central California to buy the farm’s first 40 acres and plant its first crop of peaches. “Climate change can get depressing,” said Masumoto, “but I think of my ancestors and their incredible will to survive. I have no right to give up now.” Forty miles west, in Madera, another young farmer contemplates the uncertain future of her farming family. 
Allie Quady said her family’s winery, which grows some of its own grapes, had to drill a new well this year because the casing of the old one was broken and it was pulling up sand. And because the water table had dropped significantly—10 feet per year for the past seven years, compared to only 10 feet over a 20-year span prior to that—the new well had to go in much deeper, said Quady, the winery’s health, safety, and organization manager. It has taken three months for Quady Winery to get its new well because hundreds of other wells in Madera County have also needed replacement. The county’s aquifer is vastly over-drafted by farmers, some of whom rely entirely on groundwater that is not being replenished due to long-term drought. The Quady family’s yields were much lower as a result, but the grapes were saved, Quady said, thanks to the back-breaking work of the winemaker who went out every day, three times a day, even at temperatures that surpassed 100 degrees to check the drip lines and replace a filter that kept some water flowing. “It was very stressful . . . to not be able to water the grapes consistently and efficiently,” Quady said. “[They] do die pretty quick if you don’t get the water to them.” The family is contemplating moving some of its operations to other parts of California, Quady said, although its muscat grapes require heat as well as abundant water, which is scarce everywhere in the state. If the water runs out, Quady also worries about the livelihood of the area farmers who sell grapes to her family. “We’re tied to the local community of growers,” Quady said. “We all rise and fall together.” Small- and mid-scale farmers and ranchers have long experienced high levels of stress and anxiety. They can’t control prices or trade policies, and many have faced increasing debt levels and diminishing incomes. Farmers are also known for their grit, self-reliance, and perseverance, despite holding down one of the most dangerous occupations. 
They’re used to working alone, in far-flung isolated areas. They also are among the occupational groups with the highest rate of suicide. But, experts say, climate change is challenging the very nature of farming—and causing farmers even greater emotional distress—because the job engages directly with the shifting forces of nature. And yet, the stigma of seeking help in rural communities remains real, said Fahy. “Everyone knows everyone and knows that’s your truck parked in the therapist’s parking lot,” she said. And many farmers continue to lack access to care. In some Iowa counties, for example, there’s one professional mental health care provider for roughly every 12,000 residents. Another barrier is the lack of therapists, behavioral health care professionals, and extension specialists who actually understand the nature of farming. And even when enough trained providers are available, farmers often lack the health insurance to cover care expenses, Fahy said. “The dearth of care is incredible,” she said. “In farming communities, people just carry on and put their health as the last priority.” But, Fahy added, there’s growing willingness in recent years to acknowledge the stress farmers face and services are expanding rapidly in states including Illinois, Iowa, Colorado, and New Hampshire. The shift toward more services and increased openness in the farming population are partly due to a transition to the term “behavioral health,” which carries less stigma than mental health, said Rosmann, the agricultural psychologist. “Farm stress” is also commonly used. “We once thought it was sacred, but depression is now viewed more like diabetes, it’s something we have to accept and manage,” Rosmann said. 
To improve access to behavioral health among farmers, Rosmann said the federal government should establish a permanent program—and permanent funding—similar to the AgrAbility program that supports disabled farmers or programs that support veteran farmers. Currently, the Farm and Ranch Stress Assistance Network (FRSAN) grant program, established in 2008 and run by the U.S. Department of Agriculture’s National Institute of Food and Agriculture, is up for reauthorization in the farm bill every five years. The grants fund hotlines, training and workshops, support groups, and outreach services. Last year, NIFA awarded $28.7 million to four regional entities and funded additional Farm Aid hotline operators and expanded hotline hours, among other services. More research and academic training is also needed, including support for agricultural behavioral programs that are just getting established, said Rosmann, who is working on the first textbook in the field. Behavioral skills—including coping with stress, establishing a support network, curbing substance abuse, or effectively managing family relationships and employees—also need to be taught in agricultural and vocational programs, he added. “I think the time is coming where the understanding of how we manage our behavior is a central factor in our success as farmers,” Rosmann said. Farmers who are bearing the burden of climate change should also consider modifying their farming practices if the current ones no longer work, he added. Research has shown that farmers’ job satisfaction—and hence their emotional well-being—is often higher when they employ more sustainable, non-extractive practices, Rosmann said.
In one study, done in Iowa in the 1990s, researchers from Iowa State University found that sustainable farmers reported “improved physical health, reduced job stress, more challenging and satisfying work activities, and more satisfying family and community relations”—all potential boons to their mental health. “When you feel you are farming in a way that benefits consumers and sustains the resources needed to farm, you feel satisfaction. And satisfaction is more important than money,” Rosmann said. “It’s hard to change, but if farmers don’t, they’re going to lose out.” Matt Angell, a well fixer in Madera, knows first-hand that a farming community is more than just its farmers—and that climate change is also causing distress to everyone who supports agriculture. In recent years, an unprecedented number of well drillers, pump service people, and water district officials—who are under constant, intense pressure to keep agricultural wells running—have suffered heart attacks and strokes, said Angell, the owner of Madera Pumps. During the 2012–2016 mega-drought, Angell was diagnosed with diabetes because, he said, he ate dozens of donuts every day to keep up his energy and smother the incredible stress. Homeowners in agricultural areas are also facing extreme stress levels, Angell said. In counties like Madera, where more than 720,000 acres—representing more than half of the county’s land—are harvested and many people live near the fields, home wells are going dry as the farmers dig increasingly deeper ones in a race to suck water out of the dwindling aquifer. Those homeowners, just like the farmers, also call well fixers for help. Often, the farmers and homeowners are the well fixers’ family and friends. “We’re a community. People are connected with one another.
And when wells start to fail, people reach out in desperation. Desperation then turns to fear and anger,” Angell said, emotions that everyone in a farming town must face just about daily during the drought. This year, Angell said, he is seeing an unprecedented number of wells drilled during the previous drought broken, their steel casings crushed by subsidence. And because the water table has dropped down further than Angell has ever seen, new wells must now be drilled even deeper to hit water. This is likely the third and final round of drilling before well fixers hit granite and/or water that’s too salty to irrigate crops, said Angell. And it could spell a decline of the community he calls home. “We’re going after deeper water, and as we go deeper, the aquifers aren’t as strong,” Angell said. “Tier 3 drilling is coming. It’s kind of like Stage 4 cancer; it’s terminal.” Angell is concerned that the Sustainable Groundwater Management Act, or SGMA, which was signed into law in 2014 and requires addressing the groundwater overdraft by the early 2040’s, won’t make a difference in time to save the aquifer or its farming community. He said most farmers he works with—despite the deep anxiety they feel about the drought—are unwilling to change their practices. And it’s probable, he said, that unless they soon pull some trees and fallow land, the aquifer will continue to disappear. “We’re not trying to solve the problem, we’re just kicking the can down the road,” he said. “Everybody’s in denial.” Quady agrees. She says for now, the race is on as to which farmer can dig the deepest well—which causes anxiety to the small and mid-size farmers who will likely lose out in that race. “I feel a lot of frustration because if everybody would recognize the problem, we could find solutions,” she said. 
If you or someone you know needs immediate mental health support, there are a number of national hotlines available. The Rural Health Information Hub also maintains a detailed page dedicated to farmer mental health and suicide prevention.
NMR (Nuclear Magnetic Resonance) spectroscopy is one of the most popular tools to elucidate chemical structures of an unknown molecule in solution. In practical application, NMR can also be used to validate new structures because it provides information about scalar coupling, which is an indirect interaction of the nuclei of atoms in a magnetic field. These scalar coupling, or J coupling, constants obtained from an NMR spectrum contain information about relative bond distances and angles in a molecule. This is useful for determining the connectivity of atoms in a molecule.

For the seasoned chemist, confirming a basic chemical structure from the peaks on an NMR spectrum alone is definitely possible, such as with the 1H NMR spectrum for 1,1,2-trichloroethane below. The peaks below correspond to the hydrogens in the molecule and can easily be assigned. In this case, J coupling constants weren’t necessary to validate the structure. But what happens when you are dealing with more complex molecules or are working to synthesize new ones where you can’t easily determine what splitting patterns mean? How can you validate that what you’ve isolated is what you think it is?

This is where J coupling constants come in. If you know what J coupling constant you’re looking for prior to getting the NMR spectrum, you can use that number as a confirmation that you’ve isolated your target molecule, or at the very least, confirm that you’re on the right track. Incorrect structures can be reported if there is no way to validate the coupling constants with the results found. Therefore, software tools that can predict these constants accurately will be useful for validating structures in practice. In a collaboration with Chance Dare, this project aims to predict scalar coupling constants using machine learning models, given known properties of molecules, so that they can be applied in chemistry research.
We used this dataset that is part of the CHAMPS (Chemistry and Mathematics in Phase Space) Kaggle competition. The train dataset contained 4,658,147 scalar coupling observations of 85,003 unique molecules, and the test dataset contained 2,505,542 scalar coupling observations of 45,772 unique molecules. These molecules contained only the atoms carbon (C), hydrogen (H), nitrogen (N), fluorine (F), and oxygen (O). There were 8 different types of scalar coupling: 1JHC, 1JHN, 2JHH, 2JHC, 2JHN, 3JHH, 3JHC, 3JHN. Fluorine coupling was not represented in this dataset.

Looking at the data, we can see that the train and test sets had relatively even distributions of scalar coupling type and of the number of atoms present in each dataset. This tells us that the train data is a good enough representation of the test data to create a model that predicts the scalar coupling constants.

What are the different types of scalar coupling constants?

J coupling is an indirect interaction between the nuclear spins of 2 atoms in a magnetic field. The number that comes before the J in the J coupling types (1J, 2J, 3J) denotes the number of bonds between the atoms that are coupling. So 1J, 2J, and 3J coupling will have 1, 2, and 3 bonds between the atoms, respectively. If we look at the distribution of the distance between atoms in the different types of coupling below, we can see that 1J has the lowest distance between atoms and 3J has the highest, with 2J somewhere in the middle. The increase in bonds between atoms is observed as an increase in the distance between them. The distance feature contains information about the arrangement of atoms in space, which could help a model predict the J coupling constant more accurately. The distribution of the scalar coupling constant values isolated by type also reveals that there are clear differences in the ranges that these values appear in.
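The distance between a pair of atoms comes straight from the Cartesian coordinates supplied with the dataset. A minimal sketch of that computation (the example coordinates and the 1.09 Å C-H bond length are illustrative assumptions, not values taken from the data):

```python
import numpy as np

def atom_distance(p0, p1):
    """Euclidean distance between two atoms given as (x, y, z) coordinates."""
    return float(np.linalg.norm(np.asarray(p0) - np.asarray(p1)))

# Two hypothetical atom positions, in angstroms.
h = (0.0, 0.0, 0.0)    # hydrogen
c = (0.0, 0.0, 1.09)   # carbon placed at a typical C-H bond length

print(atom_distance(h, c))  # 1.09
```

For 1J pairs this distance is essentially a bond length; for 2J and 3J pairs it encodes geometry beyond the bond graph, which is part of why it separates the coupling types so cleanly.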
These distinct ranges give us the insight that different molecular properties affect each type of J coupling differently, and unique models should be used for all 8 coupling types found in the dataset.

What factors affect the scalar coupling constant?

Understanding which properties of molecules affect the scalar coupling constant is the key to training a model that can accurately predict these values for future experiments. Some properties that affect the scalar coupling constant are:
- dihedral angles — the angle between two intersecting planes
- substituent electronegativities — the tendency of an atom to attract a shared pair of electrons
- hybridization of atoms — contains information about the number of atoms bonded and the coordination of the molecule
- ring strain — instability in a molecule due to abnormal bonding angles found in a ring

Since our dataset didn’t explicitly include this information, we needed to engineer features that would bring information about the factors impacting coupling values into the data set. Our dataset was extremely limited in features, so our model was going to rely heavily on engineering a lot of new features. The important consideration here was understanding what could potentially be useful and working from principles. For instance, knowing that hybridization had an impact on the J coupling constant, we understood that the number of bonds on each atom would be an important feature to engineer for our model. We engineered ~60 features — some based on basic math and some on pretty dense calculations.
A few of these features included:
- distance — the distance between the given Cartesian points of each atom
- n_bonds — the number of bonds on a specific atom
- mu — the square root of the sum of the squared Cartesian values
- delta_en — the difference between the electronegativities of two atoms

J type subsetting + Train/Validation splitting

In order to build 8 models, first we created subsets of the data containing each J type: 1JHC, 1JHN, 2JHH, 2JHC, 2JHN, 3JHH, 3JHC, 3JHN. Then, we further split each subset into a train/val set. Since our training dataset was so big (4M+ observations), we opted for a 75/25 train/val split instead of doing a cross validation to save time in the initial phases of building a model, even though a cross validation would likely produce better results. (Note: This is an ongoing project, and cross validation will be used to validate the model when it is closer to final.)

The training data includes more than 1 observation under each molecule_name. Because of this, we had to be careful not to leak data from train molecules into the validation set. We did a train_test_split() on the molecule_name instead, and created train/val subsets with the data from each J type — now 16 subsets total!

Our hypothesis was that the coupling constant from each J type would be impacted differently by each feature, so we created 8 different models to get the most accurate predictions possible. We first started with a simple Linear Regression for our baseline model, but needed something with decision trees for better accuracy. We also confirmed that using 8 models was significantly better than 1 by running a test model with all of the J types in the data for comparison. Ultimately, we used a LightGBM Regressor model for all 8 J types. We then tuned the hyperparameters by running the LGBM Regressor with RandomizedSearchCV to give us the best score we could get.
Below you can find the summary of the modeling, which includes each model broken down by its respective validation score and permutation importances:

Validation + Score

Validation error was calculated based on the log of the mean absolute error (MAE). The function below takes all models into account and returns the average score:

    groups = val['type']

    def group_lmae(y_true, y_pred, groups, floor=1e-9):
        maes = (y_true - y_pred).abs().groupby(groups).mean()
        return np.log(maes.map(lambda x: max(x, floor))).mean()

We also calculated separate validation scores for each model to better understand how each J type model scored. If one model scored significantly lower than the others, this would give us the insight to potentially explore other modeling options for that coupling type. To calculate the validation score for separate J types, we used:

    np.log((val_pred - y_val).abs().mean())

For this metric, the MAE for any group has a floor of 1e-9, which means the best possible score is ln(1e-9) ≈ -20.723. The best publicly available score on the Kaggle competition leaderboard is currently -3.069.

Results + Discussion

Using data that included known molecular properties of molecules, we created machine learning models to predict the scalar coupling constant of a pair of atoms. 8 different LightGBM Regressor models were used to predict the target for each type of coupling: 1JHC, 1JHN, 2JHH, 2JHC, 2JHN, 3JHH, 3JHC, 3JHN, resulting in a final score of -0.64. We engineered ~60 features to help train our models accurately and improve the validation scores. Some features were very helpful for our models, some not so much, and the ones that did not impact the score at all were discarded. We also found that cluttering the data with a bunch of features could also hurt the score, so a few of those features were removed as well.
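As a sanity check on the metric, here is a tiny self-contained demo of group_lmae (the toy values are invented; note how the zero-error group gets floored at 1e-9 before the log):

```python
import numpy as np
import pandas as pd

def group_lmae(y_true, y_pred, groups, floor=1e-9):
    """Mean over coupling types of log(MAE), floored so the log stays finite."""
    maes = (y_true - y_pred).abs().groupby(groups).mean()
    return np.log(maes.map(lambda x: max(x, floor))).mean()

y_true = pd.Series([1.0, 2.0, 3.0, 4.0])
y_pred = pd.Series([1.5, 2.5, 3.0, 4.0])
groups = pd.Series(["1JHC", "1JHC", "2JHH", "2JHH"])

# 1JHC has MAE 0.5; 2JHH has MAE 0.0, which is floored to 1e-9.
score = group_lmae(y_true, y_pred, groups)
print(round(score, 3))  # -10.708
```

With both groups predicted perfectly, every MAE hits the floor and the score bottoms out at ln(1e-9).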
The validation scores for each J coupling type were:
- 1JHC : 0.258
- 1JHN : -0.388
- 2JHH : -1.179
- 2JHC : -0.352
- 2JHN : -0.973
- 3JHH : -0.991
- 3JHC : -0.259
- 3JHN : -1.242

The model predicting the 3JHN coupling constants performed the best with a validation score of -1.242, and the model predicting the 1JHC coupling constants performed the worst with a validation score of 0.258. The significant differences in validation scores between models are due to features that are unaccounted for in our dataset that likely have a large effect on that specific type of coupling. To improve those scores further, some features we can try engineering are ones that take dipole interactions, magnetic shielding, and potential energy into account.

All of the models had different permutation importances depending on the type of coupling. Some of the most common features with high importance for each model were the distance, mulliken_charges, and both of the bond_lengths (x and y). Below are two Shapley (SHAP) plots showing two different predictions. The one on top is a prediction for molecule dsgdb9nsd_133884 with 1JHC, the worst model, and the one on the bottom is for molecule dsgdb9nsd_105770 with 3JHN, the best model. The 1JHC model predicted 90.14 when the actual value was 99.69, resulting in an error of 9.55. The 3JHN model predicted 0.77, which was way closer to the actual value of 0.06, and only had an error of 0.71.

Data + Code

The data for this project was provided by CHAMPS (Chemistry and Mathematics in Phase Space) for a Kaggle competition. All code was written in Python, and visualizations were created using the Matplotlib, ASE, Eli5, and Seaborn libraries. You can find the notebooks to engineer features, create models, and build visualizations in this GitHub repo. Since this project is part of an ongoing Kaggle competition, we will continue to work toward improving the models until the competition deadline of August 28, 2019. Our team is currently in the top 50% on the public leaderboard.
In my first post on Robots and AI, I dealt with the impact of these new technologies on future employment and productivity. I raised the contradiction that develops within the capitalist mode of production between increased productivity achieved through new technology and falling profitability. In this second part, I want to consider the impact of robots and AI seen through the prism of Marx’s law of value under capitalism. There are two key assumptions that Marx makes in order to explain the laws of motion under capitalism: 1) that only human labour creates value and 2) over time, investment by capitalists in technology and means of production will outstrip investment in human labour power – to use Marx’s terminology, there will be a rise in the organic composition of capital over time. There is no space here to provide the empirical evidence for the latter. But you can find it here (crisis and the law for BOOK1-1). Marx explained in detail in Capital that a rising organic composition of capital is one of the key features in capitalist accumulation. Investment under capitalism takes place for profit only, not to raise output or productivity as such. If profit cannot be sufficiently raised through more labour hours (i.e. more workers and longer hours) or by intensifying efforts (speed and efficiency – time and motion), then the productivity of labour (more value per labour hour) can only be increased by better technology. So, in Marxist terms, the organic composition of capital (the amount of machinery and plant relative to the number of workers) will rise secularly. Workers can fight to keep as much of the new value that they have created as part of their ‘compensation’ but capitalism will only invest for growth if that wage share does not rise so much that it causes profitability to decline. So capitalist accumulation implies a falling share to labour over time, or what Marx would call a rising rate of exploitation (or surplus value). 
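The arithmetic behind this law is compact. Writing c for constant capital (plant and technology), v for variable capital (wages) and s for surplus value, the rate of profit is r = s / (c + v). The toy numbers below are my own illustration, not Marx's: hold the rate of exploitation s/v fixed and let the organic composition c/v rise.

```python
# Toy sketch of Marx's law of profitability: with the rate of surplus value
# s/v held constant, a rising organic composition c/v pushes the rate of
# profit r = s/(c+v) = (s/v) / (c/v + 1) toward zero.
s_over_v = 1.0  # rate of exploitation, assumed constant for the illustration

for c_over_v in [1, 2, 4, 8, 16]:
    r = s_over_v / (c_over_v + 1)
    print(f"organic composition c/v = {c_over_v:2d}  ->  rate of profit r = {r:.3f}")
```

The same relationship holds for any fixed s/v: as c/v grows without limit, r tends to zero.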
The ‘capital-bias’ of technology is something continually ignored by mainstream economics. But as Branko Milanovic has pointed out, even mainstream economic theory could encompass this secular process under capitalist accumulation. As Milanovic puts it: “In Marx, the assumption is that more capital intensive processes are always more productive. So capitalists just tend to pile more and more capital and replace labor… This in Marxist framework means that there are fewer and fewer workers who obviously produce less (absolute) surplus value and this smaller surplus value over an increased mass of capital means that the rate of profit goes down… “The result is identical if we set this Marxist process in a neoclassical framework and assume that the elasticity of substitution is less than 1. Then, simply, r shoots down in every successive round of capital-intensive investments until it practically reaches zero. As Marx writes, every individual capitalist has an interest to invest in more capital-intensive processes in order to undersell other capitalists, but when they all do that, the rate of profits decreases for all. They thus work ultimately to drive themselves “out of business” (more exactly they drive themselves to a zero rate of profit).” Milanovic then considers robot technology: “Net income, in Marxist equilibrium, will be low because only labor produces “new value” and since very few workers will be employed, “new value” will be low (regardless of how high capitalists try to drive the rate of surplus value). To visualize Marxist equilibrium, imagine thousands of robots working in a big factory with only one worker checking them out, and with the useful life of robots being one year so that you keep on replacing robots continuously and thus run enormous depreciation and reinvestment costs every year. The composition of GDP would be very interesting. If total GDP is 100, we could have consumption=5, net investment=5 and depreciation=90.
You would live in a country with GDP per capita of $500,000 but $450,000 of that would be depreciation.” This poses the key contradiction of capitalist production: rising productivity leads to falling profitability, which periodically stops production and productivity growth. But what does this all mean if we enter the extreme (science fiction?) future where robotic technology and AI lead to robots making robots AND robots extracting raw materials and making everything AND carrying out all personal and public services, so that human labour is no longer required for ANY task of production at all? Let’s imagine a totally automated process where no human is involved in production. Surely, value has been added by the conversion of raw materials into goods without humans? Surely, that refutes Marx’s claim that only human labour can create value? But this confuses the dual nature of value under capitalism: use value and exchange value. There is use value (things and services that people need) and exchange value (the value measured in labour time and appropriated from human labour by the owners of capital and realised by sale on the market). In every commodity under the capitalist mode of production, there is both use value and exchange value. You can’t have one without the other under capitalism. But the latter rules the capitalist investment and production process, not the former. Value (as defined) is specific to capitalism. Sure, living labour can create things and do services (use values). But value is the substance of the capitalist mode of producing things. Capital (the owners) controls the means of production created by labour and will only put them to use in order to appropriate value created by labour. Capital does not create value itself. But in our hypothetical all-encompassing robot/AI world, productivity (of use values) would tend to infinity while profitability (surplus value to capital value) would tend to zero.
Human labour would no longer be employed and exploited by Capital (owners). Instead, robots would do it all. This is no longer capitalism. I think the analogy is more with a slave economy as in ancient Rome. In ancient Rome, over hundreds of years, the formerly predominantly small-holding peasant economy was replaced by slaves in mining, farming and all sorts of other tasks. This happened because the booty of the successful wars that the Roman republic and empire conducted included a mass supply of slave labour. The cost to the slave owners of these slaves was incredibly low (to begin with) compared with employing free labour. The slave owners drove the farmers off their land through a combination of debt demands, requisition in wars and sheer violence. The former peasants and their families were forced into slavery themselves or into the cities, where they scraped a living with menial tasks and skills or begged. The class struggle did not end. The struggle was between the slave-owning aristocrats and the slaves and between the aristocrats and the atomised plebs in the cities. A modern science-fiction analogy can be found in the recent Elysium movie. In this movie, the owners of the robots and modern technology have built themselves a complete space planet separate from the earth. There they live a life of luxury off the things and services provided by robots and defend their separated lives with their robot armies. The rest of the human race lives on earth in a dire state of poverty, disease and misery – an immiseration of the working class who no longer work for a living. In the Elysium world, the question would remain: who owns the means of production? In the completely automated planet, how would the goods and services produced by the robots be distributed in order to be consumed? That would depend on who owns the robots, the means of production. Suppose there are 100 lucky guys on the robot-run planet.
One of them may own the best robots and so appropriate the whole product. Why should he share it with the other 99? They will be sent back to the Earth. Or they might not like it and will fight for the appropriation of some of the robots. And so, as Marx put it once, the whole shit begins again, but with a difference. The question often posed at this point is: to whom are the owners of the robots going to sell their products and services to make a profit? If workers are not working and receiving no income, then surely there is massive overproduction and underconsumption? So, in the last analysis, it is the underconsumption of the masses that brings capitalism down? Again, I think this is a misunderstanding. Such a robot economy is not capitalist any more; it is more like a slave economy. The owners of the means of production (robots) now have a super-abundant economy of things and services at zero cost (robots making robots making robots). The owners can just consume. They don’t need to make ‘a profit’, just as the aristocratic slave owners in Rome simply consumed and did not run businesses for profit. This does not deliver an overproduction crisis in the capitalist sense (relative to profit) nor ‘underconsumption’ (lack of purchasing power or effective demand for goods on a market), except in the physical sense of poverty. Mainstream economics continues to see the rise of the robots under capitalism as creating a crisis of underconsumption. As Jeffrey Sachs put it: “Where I see the problem on a generalised level for society as a whole is if humans are made redundant on an industrial scale (47% quoted in US) then where’s the market for the goods?” Or as Martin Ford puts it: “there is no way to envision how the private sector can solve this problem. There is simply no real alternative except for the government to provide some type of income mechanism for consumers”.
Ford does not propose socialism, of course, but merely a mechanism to redirect lost wages back to ‘consumers’; such a scheme, however, would threaten private property and profit. A robotic economy could mean a super-abundant world for all (post-capitalism as Paul Mason suggests); or it could mean Elysium. FT columnist Martin Wolf put it this way: “The rise of intelligent machines is a moment in history. It will change many things, including our economy. But their potential is clear: they will make it possible for human beings to live far better lives. Whether they end up doing so depends on how the gains are produced and distributed. It is possible that the ultimate result will be a tiny minority of huge winners and a vast number of losers. But such an outcome would be a choice not a destiny. A form of techno-feudalism is unnecessary. Above all, technology itself does not dictate the outcomes. Economic and political institutions do. If the ones we have do not give the results we want, we must change them”. It’s a social ‘choice’, or more accurately, it depends on the outcome of the class struggle under capitalism. John Lanchester is much more to the point: “It’s also worth noting what isn’t being said about this robotified future. The scenario we’re given – the one being made to feel inevitable – is of a hyper-capitalist dystopia. There’s capital, doing better than ever; the robots, doing all the work; and the great mass of humanity, doing not much, but having fun playing with its gadgets…There is a possible alternative, however, in which ownership and control of robots is disconnected from capital in its current form. The robots liberate most of humanity from work, and everybody benefits from the proceeds: we don’t have to work in factories or go down mines or clean toilets or drive long-distance lorries, but we can choreograph and weave and garden and tell stories and invent things and set about creating a new universe of wants.
This would be the world of unlimited wants described by economics, but with a distinction between the wants satisfied by humans and the work done by our machines. It seems to me that the only way that world would work is with alternative forms of ownership. The reason, the only reason, for thinking this better world is possible is that the dystopian future of capitalism-plus-robots may prove just too grim to be politically viable. This alternative future would be the kind of world dreamed of by William Morris, full of humans engaged in meaningful and sanely remunerated labour. Except with added robots. It says a lot about the current moment that as we stand facing a future which might resemble either a hyper-capitalist dystopia or a socialist paradise, the second option doesn’t get a mention.” But let’s come back to the here and now. If the whole world of technology, consumer products and services could reproduce itself without living labour going to work and could do so through robots, then things and services would be produced, but the creation of value (in particular, profit or surplus value) would not. As Martin Ford puts it: “the more machines begin to run themselves, the value that the average worker adds begins to decline.” So accumulation under capitalism would cease well before robots took over fully, because profitability would disappear under the weight of ‘capital-bias’. The most important law of motion under capitalism, as Marx called it, would be in operation, namely the tendency for the rate of profit to fall. As ‘capital-biased’ technology increases, the organic composition of capital would also rise and thus labour would eventually create insufficient value to sustain profitability (i.e. surplus value relative to all costs of capital). We would never get to a robotic society; we would never get to a workless society – not under capitalism. Crises and social explosions would intervene well before that. And that is the key point.
Not so fast on the robot economy. In the next and final post on the issue, I shall consider the reality of the robot/AI future under capitalism.
Many of us own pots, pans, and bakeware that are covered in a non-stick coating. These products are not only inexpensive, but come with the promise of making our lives just a little bit easier. But have you ever wondered about the safety and health ramifications of these pans?

What Makes It Non-Stick?

Most non-stick cookware is aluminum coated with a synthetic polymer called polytetrafluoroethylene (PTFE). PTFE was developed by DuPont in 1938, then patented and trademarked in 1945 as Teflon. (2, 7) Teflon provides a non-stick surface because it is an extremely non-polar chemical, meaning that it repels other substances. Thus it provides a frictionless surface, and because it doesn’t react with other chemicals it is also very stable. (4, 7) Teflon is added to many products to make them resistant to water and stains. These include carpets, fabrics, clothing, and paint as well as cookware. (7) Other brands that use PTFE include Silverstone, Stainmaster, and Gore-Tex. Teflon and the chemicals used in its production have grown into an industry with profits of $2 billion a year. (1, 6) PTFE is a fluorotelomer, or perfluorochemical (PFC); these chemicals received their name because they contain fluorine atoms. PFCs have been shown to be carcinogenic, to disrupt hormone balance, and to affect fetal development. Since PFCs have a variety of applications and the research on their effects is quite complex, I wrote a post discussing these chemicals in detail. For now, let’s turn our attention to Teflon.

The Trouble with Teflon

Despite the various pros of PTFE, there are also many cons. The main concern around Teflon is the fact that it produces fumes when overheated, which have been shown to kill pet birds and cause people to experience flu-like symptoms.
The Teflon Flu

Polymer Fume Fever, or the “Teflon Flu”, refers to the flu-like symptoms of chills, sore throat, coughing, headaches, muscle aches, and fevers between 100 and 104°F that a person may experience if they are exposed to the fumes released when non-stick cookware is overheated. Teflon flu generally lasts for two to three days. (6, 7) While DuPont has known about the illness caused by its products, it claims that Teflon maintains its integrity until around 500°F, and only produces fumes when it reaches 660°F to 680°F. While that seems like a high number, studies have shown that not only is it easy to reach in conventional cooking, but that harmful compounds are released at lower temperatures. (2) A study conducted in 1991 found that when Teflon cookware reached 464°F, PTFE particles could be measured in the air. At 554°F, oxidized particles are released. At 680°F, toxic gases are released which are known to be carcinogenic to animals, poisonous to plants, and even lethal to humans. (8) Within just two to five minutes, cookware on a conventional stove can reach these temperatures. An even greater concern is using Teflon under a broiler in the oven or on the grill. Many ovens today are made with non-stick materials, and have self-cleaning cycles which will reach 800°F. (1, 2) While neither the long-term effects of routine exposure nor the effects of coming down with the “Teflon Flu” have been well studied, it does seem like there is minimal health risk in ingesting Teflon, even if it is flaking. (2) There is, however, concern about exposure to PFOA, perfluorooctanoic acid, a PFC used to make PTFE. Even though it’s thought that there is minimal PFOA present in the final Teflon product, after repeated heating and cooling it’s possible for PFOA to leach into food. Thankfully DuPont stopped using this PFC to manufacture Teflon in 2013.
(7)

Canaries for Your Kitchen

While the chemical flu is the most studied effect on humans, there have been many reported avian fatalities. That’s right: pet birds are dying from pots and pans and muffin tins. (This also makes you wonder if there are longer-term damages that haven’t been identified yet!)

Some of These Documented Cases Include:

- Deaths of 1,000 broiler chicks under Teflon-coated heat lamps at 396°F
- Deaths of baby parrots (number unknown) when a Teflon-lined oven was used to bake biscuits at 325°F
- Deaths of 55 birds when water burned off a hot pan
- Death of a pet cockatoo when water was boiled out of a Teflon pan

The makers of Teflon even acknowledge this risk, and warn consumers about this issue. In an online brochure sponsored by DuPont, as well as the Association of Avian Veterinarians and the ASPCA, the writer (a veterinarian) states that “bird fatalities can result when both birds and cooking pots or pans are left unattended in the kitchen, even for a few minutes.” (7, 10) Of course, in this industry-sponsored brochure, it is made to seem that any cookware can cause birds this harm. However, on its own website, the Association of Avian Veterinarians places only non-stick cookware under the category of air pollutants dangerous to pet birds: “Air pollutants such as cigarette smoke, insecticides, and toxic fumes from over-heated non-stick-coated utensils can cause serious respiratory problems and even death.” (9) All sources recommend that birds be kept out of the kitchen and that ventilation be utilized when cooking with non-stick.

How to Avoid Teflon Fumes

Though cases of Teflon flu in humans are rare, they do occur. Also, some sources, like the EWG, have concerns about potential cancer links with Teflon fumes, though more research is needed. Of course, the best way to avoid Teflon fumes in the kitchen is to use alternative cookware.
Stick with traditional cookware options such as stainless steel or cast iron pots and pans, and ceramic and glass bakeware. In my experience it’s also best to avoid so-called “green pans”, which have a thin ceramic coating that scratches easily, causing food to stick. A high-quality alternative is fully ceramic pans or ceramic-enameled cast iron cookware like Dutch ovens and braisers.

My Cookware Choices

I wasn’t aware of the problems with Teflon when I got married, and while we registered for mostly stainless steel dishes, we did receive a few non-stick items as well. After researching, we eventually got rid of these pieces and I’ve actually downsized to just the few dishes that we use regularly and love. My personal favorite non-toxic cookware pieces are:

- A set of Xtrema Ceramic pans – I reviewed them in depth here, but I love them because they are metal-free, Teflon-free, and non-scratch (making them very easy to clean!). They can also be stored in the fridge, used in the oven, on the stove, or even microwaved (though we don’t personally use them in the microwave).
- A Le Creuset Skillet – A tribute to my French side and a skillet I use often.
- Caraway – Made with naturally smooth ceramic and free of PTFE and other toxic materials. This means no leaching and no harmful toxic fumes.
- From Our Place – Check out the Always Pan. This is one of my favorite pans to cook stir fry and veggies in. It comes with a stainless steel steamer tray and is a dream to cook with. And to clean!
- Stainless Steel Bakeware – See a full list of my kitchen supplies here and some unusual items I use daily in this post.

Can’t Ditch the Non-Stick Yet?

If you do have non-stick cookware and can’t (or don’t want to) get rid of it right now, there are many things you can do to limit your exposure to fumes, such as:

- Never preheat non-stick cookware at high heat.
- Use low to medium cooking temperatures.
- Don’t put non-stick cookware in an oven heated to over 400°F.
- Use an exhaust fan when cooking with non-stick.
- Don’t use the self-cleaning function on your oven if it contains any non-stick coatings.

And of course, if you have pet birds… don’t keep them in the kitchen while you are cooking! This article was medically reviewed by Madiha Saeed, MD, a board-certified family physician. As always, this is not personal medical advice and we recommend that you talk with your doctor.

1. Environmental Working Group. “PFC Dictionary.”
2. Environmental Working Group. “Healthy Home Tips: Tip 6: Skip the Non-Stick to Avoid the Dangers of Teflon.”
3. Environmental Working Group. “EWG’s Guide to Avoiding PFCs: A Family of Chemicals You Don’t Want Near Your Family.”
4. American Cancer Society. “Teflon and Perfluorooctanoic Acid (PFOA).”
5. Environmental Working Group. “Canaries in the Kitchen: Teflon Kills Birds.” May 15, 2003.
6. ABC News. “Can Non-Stick Make You Sick?” Ross, Brian; Schwartz, Rhonda; and Sauer, Maddie. November 14, 2003.
7. Huang, Mimi. Science Writing and Communications Club, University of North Carolina at Chapel Hill. July 6, 2015. “A Toxicologist: Is It Safe to Use Teflon Pans.”
8. Environmental Working Group. “Teflon Can’t Stand the Heat.”
9. Association of Avian Veterinarians. “Basic Pet Bird Care.”
10. Rosenthal, Karen, DVM, MS. “Breathing Easy: Safeguarding Your Pet Bird from Dangers in the Kitchen.”

Do you use Teflon? Ever considered trying other options?
London is one of the great metropolises of the world, capable of advertising itself using its rich and varied heritage. While the city’s Roman, medieval, and modern history may be familiar to many, London also has an important but lesser-known prehistory. From the Old Stone Age through to the Iron Age, human communities lived in the area now encompassed by Greater London, each leaving their own distinct mark upon the region.

Photograph © Thomas Dowson

Although often eclipsed amid the city’s Victorian terraces and glass skyscrapers, visual reminders of prehistoric London are still dotted around the city. These are all off the usual tourist trail, tucked away in local parks, on street corners, and in golf courses. While not as substantial and dramatic as some of the prehistoric sites elsewhere in southern Britain, these sites are still well worth a visit and help to provide the visitor with a better idea of what society was like before the arrival of the Romans and the establishment of the Roman Empire.

The Palaeolithic: Megafauna

Early Stone Age display at the Museum of London. Photograph © Thomas Dowson

The Britons of the Palaeolithic, or Old Stone Age, were likely nomadic and lived a hunter-gatherer lifestyle. They would have been members of small bands who travelled across the landscape in search of the resources needed for survival. Although they have left behind no monuments or sites for contemporary tourists to visit, we nevertheless have much evidence of their world. Many of the most impressive Palaeolithic finds from the London area are on display at the Museum of London. This fantastic centre, located within London’s financial district – the City of London – contains a room devoted to ‘London Before London’. On display are several stone hand-axes and other tools that Palaeolithic Britons would have utilised in their day-to-day life.
Palaeolithic Britons shared the landscape of southern Britain with an array of megafauna, including hippos, rhinos, and mammoths. The bones of these and other animals feature in the museum’s exhibits. Entry to the Museum of London is free, although it requests voluntary donations. It is situated in central London, near to good transport links. The nearest underground tube station is Barbican, which is situated on the Circle, Metropolitan, and Hammersmith and City lines.

The Mesolithic: The Vauxhall Timbers

The site of the Mesolithic timbers found at Vauxhall Bridge. Photograph © Ethan Doyle White

By the Mesolithic, Homo sapiens were the only surviving human species left in Britain. Rising global temperatures had replaced the Ice Age tundra with lush forests of birch and pine. Retaining the largely nomadic hunter-gatherer lifestyle of their Old Stone Age ancestors, Mesolithic Britons left behind no monuments or buildings for us to visit. Instead, much of what we know about them comes from the flint tools and butchered animal bones found by archaeologists at sites like Three Ways Wharf in Uxbridge and along the Old Kent Road. While deer and wild horse were part of the Mesolithic diet, they also made plentiful use of fish and water fowl. The London area has many rivers and other wetlands, although by far the largest is the majestic River Thames, which runs through the heart of the modern city and would have been an important resource for prehistoric people. In 2010, archaeologists from the Thames Discovery Programme discovered six timbers on the river’s southern shore at Vauxhall. Radiocarbon dating revealed that they were Mesolithic, from the fifth millennium BCE, making this the oldest known structure on the Thames foreshore. Unfortunately, the precise function of the structure remains unknown; perhaps it was a jetty for fishing boats or a platform for the ritualised deposition of items into the water.
Several Mesolithic tools, including a stone adze, were found close by (More on the Archaeology at Vauxhall Bridge). Although the shifting tides mean that the timbers themselves are rarely visible, their location is clearly observable both from the southern bank (just in front of the MI6 headquarters) and from the adjacent Vauxhall Bridge. These locations are only a few minutes’ walk north of Vauxhall station, which is on the Victoria line of the underground and part of the National Rail network. After visiting the site, tourists can take an eastward walk along the southern bank, taking in impressive views of the Palace of Westminster.

The Bronze Age: The Round Barrows of Southeast London

Shrewsbury Barrow at Shooters Hill. Photograph © Ethan Doyle White

The population of Bronze Age Britain lived in a settled, agricultural society. Their material remains reflect growing social stratification, suggesting that by this point an elite dominated society. This is perhaps best reflected in the fact that select individuals were chosen for interment within round barrows or tumuli, often clustered together in cemeteries. Many of these were built between 2400 and 1500 BCE, although unfortunately others have yet to be conclusively dated. The area of modern southeast London saw barrow cemeteries established at various high points. At the time of construction, they would have offered impressive views over both the Thames and the wider landscape. Although the forces of urbanisation have destroyed most of these barrows, a small number still survive, allowing the visitor a glimpse into the Bronze Age world of their makers. Perhaps the finest is Shrewsbury Barrow, situated on the corner of Brinklow Crescent and Plum Lane in the suburban jungle of Shooters Hill. This barrow was once one of six, three of which were in a linear alignment. An iron fence surrounds the tumulus and an information board informs the visitor about the site.
Parking is available in neighbouring roads, while the barrow is approximately twenty minutes’ walk from Woolwich railway station.

The Bronze Age round barrow on Winn’s Common. Photograph © Ethan Doyle White

A short drive away is the lone survivor of another barrow group, situated just north of Bleak Hill Lane in the eastern part of Winn’s Common, Plumstead. The Winn’s Common Tumulus is in the middle of an open field and thus is visible from the adjacent roads. On the common, signposts erroneously label it as a ‘Roman barrow’. Although the landscape has changed since the Bronze Age, the lack of any housing or trees crowding the barrow offers the visitor the chance to appreciate the site in its wider geographic context. Nearby roads offer spaces for parking, while the site is also accessible following a half-hour walk from Plumstead railway station. Pushing further east, intrepid explorers armed with an OS map can find a much-damaged tumulus atop the steep hill in Lesnes Abbey Wood. Although less spectacular than the previous two barrows, a visit to this barrow can easily incorporate a trip to the picturesque medieval ruins of Lesnes Abbey itself. Limited parking is available on Abbey Road and New Road, while the tumulus can be reached following a half-hour walk from Abbey Wood railway station.

The Iron Age: Caesar’s Camp, Wimbledon

Part of the ditch-and-bank earthworks at the western end of Caesar’s Camp. © Ethan Doyle White

From the latter part of the Bronze Age through much of the Iron Age, there appears to have been an increasing militarisation of British society. Evidence for this is visible in the proliferation of hillforts – large, defensible earthwork positions erected on high ground. These hillforts are found across much of Britain; some, such as Dorset’s Maiden Castle, are particularly awe-inspiring. A small number of hillforts are known from in and around the Greater London area.
These include Ambresbury Banks and Loughton Camp, both found in Epping Forest on London’s north-eastern border with Essex. To the south of the city is St Ann’s Hill near Chertsey in Surrey and Caesar’s Camp in Keston near Bromley. A little closer to the heart of the city is a hillfort just south of Wimbledon Common in southwest London. Confusingly, this hillfort is also known as Caesar’s Camp, a misleading name probably given to it by nineteenth-century antiquarians, although it was also known locally as Bensbury, Warren Bulwarks, and The Rounds. Encircled by a ditch and bank, the hillfort is approximately 300 metres in diameter. Excavation carried out in 1937 revealed that the site was likely built in the third century BCE, and continued to be used into the late Iron Age. At one point, someone buried an urn filled with Roman coins at the site, perhaps for safekeeping or as a votive offering to gods or spirits. Unfortunately, a golf course now engulfs the site, and one needs an astute eye to determine which earthworks are Iron Age and which are recent additions for the benefit of golfers. Only members of the golf club have full access to the hillfort, although a public footpath cuts right through the middle of it from east to west, allowing decent views of the ditch-and-bank at the two ends. Caesar’s Camp is a forty minutes’ walk from Wimbledon railway station or a fifty minutes’ walk from Wimbledon Park underground station, which is on the District line. Good walking shoes are a must. A visit to the site can also incorporate a pleasant walk through Wimbledon Common or a trip to the magnificent Buddhapadipa Temple, a Buddhist centre designed in accordance with traditional Thai architecture. There are more Iron Age hillforts in Epping Forest, to the east of the city: Ambresbury Banks and Loughton Camp.

One of the Iron Age displays in the Museum of London – focusing here on the famous Battersea Shield.
Photograph © Thomas Dowson The End of Prehistory: The Brentford Monument The Brentford Monument. Photograph © Ethan Doyle White For the story of London, the arrival of the Romans has significance as both an end and a beginning. It marked the end of prehistory; henceforth, London’s story was to be recorded in text as well as artefacts. At the same time, it was a fresh beginning as the Roman Empire established a new town, Londinium, on the northern banks of the Thames. Caesar invaded Britain twice, first in 55 BCE and then again in 54 BCE. It was during the second invasion that he clashed with the forces of an indigenous tribal chief, Cassivellaunus. This likely took place at Brentford, a town on the northern side of the Thames in West London. This was only the beginning of the Roman conquest, which was completed nearly a century later under the Emperor Claudius in 43 CE. To mark Caesar’s battle with Cassivellaunus, in 1909 the local antiquarian Montague Sharpe organised the inscribing of an epigraph on a granite pillar. This, the Brentford Monument, was initially erected next to the Thames on a wharf at the end of Ferry Lane (photograph of the unveiling ceremony in 1909). Once known as the Julius Caesar Monument, the pillar also commemorates three other major historical events in Brentford’s history: - AD 780-1: King Offa’s church council - 1016: Canute driven across the Thames by Edmund Ironside - 1642: Civil War Battle of Brentford In the early twentieth century, the site was used for unloading coal and the monument itself was eventually buried. In 1992, the local authorities moved the Monument further inland. Today it is situated next to a bus stop on the intersection between Brentford High Street and Alexandra Road. Here, it is easily accessible, just ten minutes’ walk from Brentford railway station. Visiting Prehistoric Sites in London London’s prehistory is very much off the tourist trail, and much the better for it. 
One can visit these sites without having to wade through the large crowds that sometimes mar better known prehistoric locations like Stonehenge. This also means that access is rarely straightforward, and for those with serious mobility issues several of these sites will be simply inaccessible. When planning your visit, make sure that you are properly prepared. A detailed map is strongly recommended, particularly for sites like Caesar’s Camp and the Brentford Monument which are not well signposted. Good walking shoes, water, and snacks are also advisable when visiting sites in parks. However, for those willing to put in the effort, a new and deeper appreciation of London’s past provides a welcome reward.
With the runaway success of Visual Basic 1.0, it made sense for future versions to be released improving incrementally on the existing version. Such was how we ended up with Visual Basic 2.0. In some ways, Visual Basic 2.0 changed Visual Basic from a backwater project like Microsoft Multiplan into something that would become a development focus for years to come, like Microsoft Excel. Visual Basic 2.0 was quite a leap forward; it improved language features, the IDE, and numerous other things. One of the biggest changes was the introduction of the “Variant” Data type. Here is a full list of new features, from the Visual Basic 2.0 Programmers Guide, with annotations on each by me. - Improved Form design tools, including a toolbar and a Properties Window Visual Basic 2.0 adds the ability to select multiple controls by dragging a box around them. It also adds a Toolbar, which replaces the area used by the Property modification controls in Visual Basic 1.0. The new Properties Window moves the Property Editing to a separate Window, which is a massive improvement since you can more easily inspect properties. - Multiple-Document Interface Support Another rather big feature. MDI was and is the capability that allows a Window to have its own Child Windows. This has started to fall out of vogue and is all but forgotten. Earlier Office versions provided an MDI interface. The core of MDI was basically set by Program Manager itself, which was an MDI Application. Visual Basic 2.0 allows you to create MDI Forms, and MDI Applications. This is provided through a few Properties. I will cover MDI stuff that VB2.0 adds later in this Post. - New Properties, Events, and Methods Visual Basic 2.0 added several Properties, Events, and Methods to the available controls. It changes the “CtlName” of all Controls to the less stupid “Name”, and added multiple new Events, particularly surrounding Drag and Drop capabilities. - Object Variables and Multiple Form instances This is a pretty major shift.
For one thing, it established Forms not as their own, distinct objects (as was the case in VB1.0) but rather as their own Class of Object. You were also capable of creating new form instances, inspecting Object Types, and using various other Object-Oriented capabilities. It was still relatively limited, but it was certainly a step forward and it added a wealth of capability to the language. - Variant Data Type This is another Big one. Visual Basic 1.0 had a number of Data Types, as you would expect: Integer, a 16-bit Integer value, Long, a 32-bit Integer value, Single, a 32-bit floating point value, Double, a 64-bit floating point value, Currency, a Scaled Integer value, and String. Visual Basic 2.0 shakes things up by not only adding Forms as their own ‘Data Type’ of sorts, but it also adds Variant, which is basically a Value that can represent anything. Variants are an interesting topic because while they originally appeared in Visual Basic 2.0, they would eventually seep into the OLE libraries. As we move through the Visual Basic versions, we will see a rather distinct change in the Language, as well as the IDE software itself, to reflect the changing buttresses of the featureset. One of the additional changes to Visual Basic 2.0 was “Implicit” declaration. Variables that hadn’t been referred to previously would be automatically declared; this had good and bad points, of course, since a misspelling could suddenly become extremely difficult to track down. It also added the ability to specify “Option Explicit” at the top of Modules and Forms, which required the use of explicit declarations. Visual Basic 1.0 also allowed for implicit declarations, but you needed to use some of the ancient BASIC incantations (DefInt, DefLng, DefSng, DefDbl, and DefCur) to set default data types for a range of characters. It was confusing and weird, to say the least.
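The difference is easy to see in a short fragment of period-style code. This is an illustrative sketch of my own, not anything taken from the original manuals; it shows both what Option Explicit catches and how the new Variant type behaves:

```vb
Option Explicit    ' Require explicit declarations in this Module

Sub Demo ()
    Dim Total As Integer
    Dim Anything As Variant   ' New in VB 2.0: can hold any type

    Total = 5
    Totl = Total + 1          ' Without Option Explicit, this typo would
                              ' silently create a brand-new variable "Totl";
                              ' with it, VB reports "Variable not defined"

    Anything = 42             ' A Variant can hold an Integer...
    Anything = "forty-two"    ' ...and then a String, with no complaint
End Sub
```

The trade-off is exactly as described above: implicit declaration saves typing, but one misspelled name can cost an afternoon of debugging, which is why Option Explicit quickly became standard practice.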
- Shape, Line, and Image controls The Shape, Line, and Image controls added to VB2 are a new feature known as “windowless” controls, in that they do not actually use a Window handle. One of the larger benefits from this was that the controls were lightweight; the second was that they could be used for simple graphics on a VB2 Form. - Grid Custom Control Visual Basic 2.0 comes with a Grid Custom Control. I swear this thing has what feels like an entire chapter devoted to it in the Programmers guide. I’m not even joking- “Chapter 13: Using the Grid Control”. The Grid control is rather awkward to use for those more accustomed to modern programmatic approaches and better designed control interfaces. - Object Linking & Embedding Custom Control OLE (pronounced “O-Lay”). I was pronouncing it as Oh-Ell-Eee for the longest time and don’t think I’ll ever live down the embarrassment. The basic idea was to allow one application to be “embedded” inside another. In terms of functionality, to the user it would simply look like inserting a document, which was part of the purpose. For example, you can insert an Excel spreadsheet inside a Word document and then edit that Excel Spreadsheet, from within Word, as if it were Excel. What happened? Well, it was bloody confusing. While it was (and still is) a very powerful feature, it was far from intuitive and was something far more likely to be used by power users. - Added Debugging Features, including Watch variables and a Calls Window It’s amazing the stuff we did without in older Programming environments, isn’t it? Visual Basic 1.0 provided very simplistic Debugging support. This was not uncommon among the IDE tools of the time. Visual Basic 2.0 added some debugging helpers and in some ways added a new “mode” to the Program: Immediate Mode. Visual Basic 1.0 had similar capabilities, in that it did have something of an “immediate” mode, particularly shown by the Immediate Window.
However, Visual Basic 1.0’s implementation was far simpler, and it didn’t support Watch Variables, which is one of the primary new features added in VB 2.0. This, paired with the Toolbar controls that almost emulate “playback” of the application, gave rise to the idea of Three Modes: in the first, you write code and design forms. The second is where you run the application, and the third, Immediate Mode, is when you are debugging; e.g. your application is stopped but you can inspect its running state. - ASCII representation of Forms As far as I’m concerned, this is the single best feature added to Visual Basic 2.0. Historically, many applications, including things like Visual Basic as well as other Language interpreters or editors, saved their source in a proprietary, binary format. This was done not so much to protect it, but for space-saving reasons. When you only have a 160K disk, a difference of a single Kilobyte can be important. Additionally, text formats take longer to load and save (at least with the paltry memory and processing power of the time in comparison to today). Visual Basic 1.0, as well as its QuickBASIC predecessor, allowed for saving files as text, but this was not the default option. Visual Basic 2.0 adds the ability to save not only source code, as the Visual Basic 1.0 Code->Save Text Option did, but also to save the Form design in a text format. This was a massively useful feature since it allowed external tools to manipulate the form design as well as the code, as well as making your software development less dependent on an undocumented format. - 256-Color support for bitmaps and color palettes Back in those days, Colour was a trade-off. Video Adapters usually had limited Video Memory, so you usually had a trade-off between either higher resolution and fewer colours, or lower resolution and more colours. Today, this isn’t an issue at all; 32-bit and 24-bit Colour has been the standard for nearly two decades.
As this was developing, however, we had the curious instance of 256-colour formats. 256-colour modes use a single byte to index each colour, and palette entries are stored separately. The index then becomes a lookup into that palette table. This had some interesting effects: applications could swap about the colours in their palette and create animations without really doing anything; the Video Adapter would simply change mappings itself. This was a very useful feature for DOS applications and Games. Windows, however, complicated things. Because Windows could run and display the images from several Applications simultaneously, 256-color support was something of a tricky subject. Windows itself reserved its own special colours for things like the various element colours, but aside from that 8-bit colour modes depended on realized palettes. What this means is that Applications would tell Windows what colours they wanted, and Windows would do what it could to accommodate them. The Foreground application naturally took precedence, and in general when an application that supported 8-bit colour got the focus, it would say “OK, cool… realize my palette now so this owl doesn’t look like somebody vomited on an Oil Painting”. With Visual Basic 1.0, this feature was not available for numerous reasons, most reasonable among them being a combination of it just not being very important paired with the fact that VB was designed primarily as a “front end” glue application for other pieces of code. Visual Basic 2.0, however, adds 256-color support, along with quite a few properties and methods to use it. VB itself manages the Palette-relevant Windows Messages, which was one of the reasons VB 1.0 couldn’t even be forced to support it. As we can see above, Visual Basic 2.0 adds Syntax highlighting over VB1; an additional side effect of this is that the colours can also be customized.
I recall I was a fan of using a green background and yellow text for comments to make them stand out, myself. On a personal note, Visual Basic 2.0 is dear to me (well, as dear as a software product can be), since it was the first Programming Language I learned and became competent with to the point where I realized that I might have a future with software development. Arguably, that future is now, but it hasn’t actually become sustainable (I may have to relocate). But more so than this is the fact that I was given a full, legal copy of the software. This in itself isn’t exceptional; what is exceptional is that it had all the manuals. Dog-eared, ragged, and well-used, these books, primarily the Programmers Guide and Language Reference, became the subject of meticulous study for me.
FREEDOM OF RELIGION Origins: Jefferson’s Bill and Madison’s Memorial · Separation of Church and State o Jefferson: to protect government from influences of the church o Madison: to protect church from influences of the government § “If religion were to remain healthy, it had to remain free from the interfering hand of government.” · This is a distinct issue in that we are losing ground on this tenet through the use of government-subsidized churches and faith-based initiatives. · “The Wall” of Separation: Jefferson’s letter to Messrs (1802) o “religion is a matter which lies solely between man and his God…that the legislative power of government reach actions only and not opinions…thus building a wall of separation between church and state.” o There is no true, absolute wall § If taken literally, church would have absolute immunity. § Jefferson noted man cannot skirt social duties due to duties of God · “Nonpreferentialism”: Rehnquist’s idea in Wallace v. Jaffree that it is constitutional to enact reasonable laws preferring religion over non-religion so long as the government does not endorse/prefer any particular sect or denomination. o Concerns with this Approach: § Constitution would not bar: · Preferential treatment to faith-based institutions, · Preferential treatment – or absolute preference – for “religious” public school teachers. · Providing such absolute incentives would suggest a creation of religious-based everything. · Taxation and spending to carry out a religious mission. · Tradition of Religious “Liberty”: o Religion is voluntary – authentic worship is a personal and free act. o “Natural Right” Free Exercise Clause · MAIN TEST: Dept of Human Resources of Oregon v. Smith: Constitutional law if: o If Law is Neutral (generally applicable), must have Rational Basis for serving Legitimate Govt Interests. o If Law is Not Neutral, enforce Strict Scrutiny (Necessary to achieve Compelling Interests).
§ Discriminates against religion § Targets religious practice b/c it is religion · Church of Lukumi Babalu Aye v. City of Hialeah: law prohibiting ritual slaughter of animals (Santeria religion) was unconstitutional, b/c it was not neutral, failed to satisfy “necessary to achieve compelling ends,” and was not narrowly tailored. o Facial Neutrality is not enough – it must be neutral in practice. · Locke v. Davey: denial of a scholarship to an individual studying theology is not a violation of the First Amendment, b/c the Court finds no discrimination against religion, yielding the rational basis test. Dissent finds discrimination and a failure of strict scrutiny. o This is the GRAY AREA between Establishing Religion and Inhibiting the Free Exercise thereof. · Do Superficially-Neutral laws burdening practice of religion offend the First Amendment? o Reynolds v. US: “no polygamy” laws are constitutional, b/c Court found the law “neutral” and focuses more on the “peace and good order of society” in judging social norms than the religious interests. o Braunfeld v. Brown: Sunday closing laws affecting Jewish store owner was constitutional, b/c it wasn’t directly related to the practice of their religion. o Sherbert v. Verner: Seventh-Day Adventist’s discharge for failure to work on Saturday was unconstitutional, b/c, unlike Braunfeld, the law directly affected petitioner here with no justifying rationale (no proof of fraudulent claims under the guise of religion). § Sherbert Test: balance the burden of individual vs. competing interests of the state. · Statute affecting practice of religion must be narrowly tailored to minimize the burden. § Direct Impact based on Religion → Strict Scrutiny. o Wisconsin v. Yoder: forcing Amish to send children to school until 16 was unconstitutional, b/c state did not have compelling interest in enforcing two extra years of school on child → Strict Scrutiny. o US v.
Lee: enforcement of Social Security program against Amish was constitutional under Strict Scrutiny, b/c government interest was compelling. · Bottom Line: Apply Peyote case where law is not neutral. Apply Sherbert to cases where the law is neutral. Note the “gray area” discussed in Locke. Religion in Public Schools · Lemon Test (Lemon v. Kurtzman): o Secular Purpose (genuine, not a “sham,” and not secondary to religious obj) o Primary Effect neither advances nor inhibits religion § Suggested alternatives to Primary Effect: · Coercion: must not coerce (hardly used) · Endorsement (O’Connor): cannot “send a message to nonadherents that they are outsiders, not full members of the political community, and an accompanying message to adherents that they are insiders.” o No Excessive Entanglement (no bureaucratic intermingling, surveillance/ supervision, or divisiveness) · Religion in Public Schools o McCollum v. Board of Education: releasing children to attend religious programs in classrooms violated the First Amendment, b/c the law focused on promoting religious education w/in the public domain. o Zorach v. Clauson: releasing children to attend religious programs off school grounds did not violate the First Amendment, b/c neither public funds nor public grounds were used to support the religious classes. · Prayer in Public Schools o Engel v. Vitale: mandating prayer to “Almighty God” in school is unconstitutional, b/c government is forcing religion – coercion. o Abington School Dist. v. Schempp: state law requiring reading of Bible verses each day in public school was unconstitutional based on the obviously religious purpose of the law (Court focused on intent of law-makers). o Wallace v. Jaffree: state law authorizing one-minute each day to “meditation or voluntary prayer” in school was unconstitutional based on the improper motives of the legislature. § O’Connor: an authentic law allowing for prayer/silence would be ok.
§ Rehnquist: non-preferentialism makes this law ok. § Note: Abington and Jaffree both focus on mal-intent of legislature. o Lee v. Weisman (focus case): inviting religious leaders to conduct prayer at high school graduation is unconstitutional, b/c it violates Lemon’s 2nd prong. Much of the opinion revolves around the mandatory attendance of graduation. § Souter: Coercion by government to participate. § Scalia: “Speech is not coercive, the listener may do as he likes.” o Santa Fe ISD v. Doe: prayer before football game was unconstitutional, even if led by private students, b/c the use of the school’s loudspeaker on government property before a govt-sponsored event makes it official government speech. § Majority adopts an “endorsement-esque” test to find it impermissible. o Good News Club v. Milford Central School: denying access to after-school religious programs on school grounds violated First Amendment, b/c it was viewpoint discrimination in a “limited public forum” that had been opened up non-selectively to a wide range of groups. § Note: This, to me, is the appropriate approach to establishment clause. A violation of First Amendment only occurs if access has been denied, not if access has been granted. o Bottom Line: look to events surrounding the circumstances to determine: § voluntary participation vs. mandatory event, § government-sponsored speech (use of government equipment); § Consider in terms of endorsement/coercion used in Lee and Santa Fe. o Note Mitt Romney’s concern in “Faith in America”: driving religion from the public square essentially creates a religion in itself – secularism. o Elk Grove v. Newdow: Undecided case allows schools to state the pledge, offering students the ability to leave. Constitutional? § Rehnquist – Yes due to tradition and ability to leave. § O’Connor – Yes due to lack of endorsement. § Thomas – Yes, b/c, unlike Free Exercise, EC should not be applied to individual rights – only to state protection from federal invasion.
“Creationism” in Public Schools · Epperson v. AR: law prohibiting the teaching of evolution in public school was unconstitutional, b/c the law did not have a secular purpose as evidenced by the “fundamentalist sectarian conviction” of the legislature in creating the law. · Edwards v. Aguillard: law requiring that creationism be taught if evolution is included in curriculum was unconstitutional, b/c the “secular purpose” offered by the state was a “sham.” o As in Epperson, Court looked to the intent of the legislature. o Scalia Dissent: there was secular purpose expressed in the text of the law, and Court acted outside its bounds by supposing the subjective intent of legislature. · Dover (Intelligent Design) case? · Whitney v. California: conviction upheld under C&PD in reference to a California statute. o Holmes/Brandeis Concurrence: C&PD requires seriousness, imminence, and likelihood of the speech – disagrees with majority’s deference to the legislature. · Dennis v. US: prosecution for teaching communist doctrines was constitutional based on C&PD of the evil. Court considers Gravity of Evil and Probability of Evil (X/Y axis). o This is the C&PD diluted, b/c imminence is not necessary if gravity of evil is large. o Frankfurter Concurrence urges Case by Case Balancing: “Absolute rules would inevitably lead to absolute exceptions, and such exceptions would eventually corrode the rules.” · Brandenburg v. Ohio: Ohio statute allowing for prosecution of Ku Klux Klan leader for advocating action against US to protect “whites” was unconstitutional, b/c no C&PD. o Current Test: Advocacy must be directed to (i.e. “intent”) inciting or producing imminent lawless action and is likely to incite or produce such action. Mere Advocacy is not enough. o Whitney is overruled.
o Look to the words of the speech Fighting Words and Hostile Audiences · Unprotected Fighting Words: “those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.” o Direct personal insults or personally abusive epithets intentionally designed and inherently likely to provoke hostile reaction. o Look at statute – what is being restricted? o Look at words – protected or unprotected? o Look at Standard of Review – · Chaplinsky v. New Hampshire: the conviction of a Jehovah’s Witness for calling a city marshal a “God damned racketeer” and a “fascist” was constitutional, b/c the words were unprotected fighting words. Not all speech is protected – govt may restrict. o Categories of Unprotected Speech (yielding Rational Basis test): § Fighting Words § Commercial Fraud o Categorical Balancing: § Balance: low value of speech vs. social interests in protecting against § If it falls within this category, apply Rational Basis to the statute. · Terminiello v. Chicago: conviction of breach of peace for viciously denouncing political and racial groups to an angry crowd was unconstitutional, b/c the statute allowed restriction of speech where there was no likely C&PD beyond “public inconvenience, annoyance, or unrest” o Saying unpopular words is not enough – must rise to a level likely to incite breach of peace. · Feiner v. NY: conviction of disorderly conduct for making speech encouraging “negros” to “rise up in arms” to an angry crowd was constitutional, b/c the conviction was based, not on the speech, but on the need to control a public crowd and prevent a riot. o Nitro was too close to the glycerin – look to the speaker’s intent and the crowd’s reaction to determine if words will likely cause breach of peace. · Cohen v.
California: conviction for offensive conduct for wearing a jacket stating “Fuck the Draft” was unconstitutional, b/c the “boundless state goal” of preserving the peace created an over-suppression of ideas where the speech was not directed at anyone, there were no personal insults expressed, and the likelihood of a hostile reaction was very minimal. o Avert your eyes if you do not like it. o “States do not have power to regulate a ‘suitable level of discourse within the body politic’ unless they fall within a general category of unprotected speech.”
England & The UK The United Kingdom of Great Britain and Northern Ireland (the United Kingdom or the UK) is a constitutional monarchy comprising much of the British Isles. This Union is more than 300 years old and comprises four constituent countries: England, Scotland, Wales, and Northern Ireland. It occupies all of the island of Great Britain, the north-eastern portion of the island of Ireland and most of the remaining British Isles. The UK is an island nation, but shares an open land border with Ireland. It neighbours several countries by sea, including France, Belgium, the Netherlands, Germany, Portugal, Spain, Denmark, Norway, Sweden, the Faroe Islands and Iceland. The UK today is a diverse patchwork of native and immigrant cultures, possessing a fascinating history and dynamic modern culture, both of which remain hugely influential in the wider world. Although Britannia no longer rules the waves, the UK is still an overwhelmingly popular destination for many travellers. Its capital and largest city of London is, along with New York, often reckoned to be one of only two cities of truly global importance, but many come to see quaint villages and the beautiful and quickly changing countryside. - England is just one of the constituent parts of the United Kingdom, alongside Wales, Scotland and Northern Ireland. Treating “England” and “The United Kingdom” as synonyms is a mistake commonly made by visitors, which can annoy the Welsh, Scottish & Northern Irish. Similarly, “British” and “English” are not the same. - It is important to remember that the Republic of Ireland is a completely separate state from the United Kingdom, which seceded from the Union in 1922 and gained full independence in 1937.
The ‘Great’ in Great Britain (Britannia Major in Roman times; Grande-Bretagne in French) is to distinguish it (the island) from the other, smaller “Britain”: Brittany (Britannia Minor; Bretagne) which is a region of northwestern France. However, for a geographer “Great Britain” (“GB”) refers just to the single largest island in the British Isles that has most of the land area of Scotland, England and Wales. In normal usage it is a collective term for all those three nations together. Great Britain became part of the United Kingdom when the Irish and British parliaments merged in 1801 to form the “United Kingdom of Great Britain and Ireland”. This was changed to “… and Northern Ireland” when all but the six Northern Irish counties seceded from the Union in 1922 after a treaty granting Irish home rule. “Britain” is simply another name for the United Kingdom, and does include Northern Ireland, despite common misconceptions otherwise. The flag of the United Kingdom is popularly known as the Union Jack or, more properly, Union Flag. It comprises the flags of St. George of England, St. Andrew of Scotland and the St. Patrick’s Cross of Ireland superimposed on each other. Within England, Northern Ireland, Scotland and Wales, the flags of each nation are commonly used. The St. Patrick’s Cross flag is often seen on St. Patrick’s Day in Northern Ireland. Since the Republic of Ireland split from the UK though, St. Patrick’s Saltire is not used for Northern Ireland, as it represented the whole of the island of Ireland. A flag (known as the “Ulster Banner”) was designed for Northern Ireland in the 1920s, which was based on the flag of Ulster (similar in appearance to the Saint George’s Cross flag of England) and includes a Red Hand of Ulster and a crown. 
Although the flag’s official status ended with the dissolving of the province’s devolved government in the early 1970s, it can still be seen in Northern Ireland, particularly among the Loyalist community and on sporting occasions. As Wales was politically integrated into the English kingdom hundreds of years ago, its flag was not incorporated into the Union Jack. The Welsh flag features the Red Dragon of Cadwaladr, King of Gwynedd, superimposed on the Tudor colours of green and white. Map showing how far away many Overseas Territories are from the UK. The Isle of Man and the various Channel Islands are not strictly part of the UK, but rather are ‘Crown Dependencies’ (or, in the case of Sark, a Crown Appanage): they have their own democratic governments, laws and courts and are not part of the EU. They are not entirely sovereign either, falling under the British Crown, which chooses to have its UK Government manage some of the islands’ foreign and defence affairs. The people are British Citizens, but unless they have direct ties with the UK, through a parent, or have lived there for at least 5 years, they are not able to take up work or residence elsewhere in the European Union. Overseas Territories and the Commonwealth Again, these are not constitutionally part of the United Kingdom, but are largely former colonies of the former British Empire which are, to varying degrees, self-governing entities, some of which still recognise the British Monarch as their head of state. The key difference is that residents of Overseas Territories still possess British citizenship, whereas those of Commonwealth nations do not, and are subject to the same entry and immigration rules as non-EU citizens. The British embassy in your home country, however, may accept visa applications to selected Overseas Territories and Commonwealth nations.
The United Kingdom is a constitutional monarchy with the Queen as the nominal head of state. It has a bicameral parliament: The lower house, known as the House of Commons, is elected by the people and is responsible for proposing new laws. The upper house, known as the House of Lords, primarily scrutinises and amends bills proposed by the lower house. The House of Lords is not elected and consists of Hereditary Peers, whose membership is guaranteed by birth right, Life Peers, who are appointed to it by the Queen, and the Lords Spiritual, who are bishops of the Church of England. The Head of Government is the Prime Minister, who is usually the leader of the majority party in the House of Commons. It has a first-past-the-post system divided into local constituencies. In practice, the Prime Minister wields the most authority in government, with the Queen being pretty much a figurehead, though all bills that have been passed in both houses of parliament require the Queen to grant royal assent before they become law. The Queen does have limited powers to dissolve parliament and call a general election in exceptional circumstances – for example during times of war, or if an election ends in stalemate; but these are generally never exercised. Additionally, Northern Ireland, Scotland and Wales have their own elected bodies (the Northern Ireland Assembly, Scottish Parliament and Welsh Assembly). These devolved governments have a First Minister and varying degrees of power over matters internal to that constituent country, including the passing of laws. For example, the Scottish Parliament in Edinburgh exercises power and passes laws over almost every matter internal to Scotland. In the areas over which it has power, the UK government plays no role. As a result, institutions and systems can be radically different between the four constituent countries in the UK. England has no similar body of its own, with all government coming from Westminster.
The exception to this is London, which owing to its huge size and population has partial devolved government in the form of an elected Mayor and assembly, which exercise a range of powers previously controlled by both central and local governments. There are also local government authorities responsible for services at a local level. Each constituency votes for a local MP (Member of Parliament), who then sits in Parliament to debate and vote.
Using maps and postcodes
Most basic mapping in the United Kingdom is undertaken by the Ordnance Survey of Great Britain and the Ordnance Survey of Northern Ireland. The maps found in bookshops may be published directly by those organisations, or by private map publishers drawing on basic Ordnance Survey data. One consequence of this for the traveller is the widespread use of Ordnance Survey grid references in guide books and other information sources. These are usually presented in the form [XX999999] (e.g. [SU921206]) – two grid letters followed by six digits – and form a quick way of finding any location on a map. If using a GPS, be sure to set it to the British National Grid (BNG) and the OSGB datum.
Alternatively, every postal address has a postcode, either a unique one or one shared with its immediate neighbours. British postcodes take the form (XXYY ZZZ), where XX is a 1 or 2 character alphabetic code representing the town, city or geographic area, YY is a 1 or 2 digit number representing the area of that town or city, and ZZZ is a 3-character alphanumeric code (a digit followed by two letters) which denotes the road and a specific section or house on that road. Therefore, a postcode will identify a location to within a few tens of yards in urban locations; and adding a house number and street will identify a property uniquely (at road junctions two houses with the same number may share the same postcode). Most internet mapping services enable locations to be found by postcode.
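The postcode anatomy described here lends itself to a quick sanity check in code. Below is a minimal, hedged sketch: the function name `parse_postcode` and the regular expression are illustrative, not any official validator, and real postcodes include further edge cases (for example, some central-London districts such as ‘EC1A’ carry a trailing letter).

```python
import re

# Illustrative pattern only, following the shape described in the text:
# 1-2 area letters, a 1-2 digit district number, then an inward code of
# one digit followed by two letters. Real postcodes have more edge cases.
POSTCODE_RE = re.compile(
    r"^(?P<area>[A-Z]{1,2})(?P<district>\d{1,2})"
    r"\s*(?P<sector>\d)(?P<unit>[A-Z]{2})$"
)

def parse_postcode(text):
    """Split a postcode such as 'SW1 2AB' into named parts, or return None."""
    m = POSTCODE_RE.match(text.strip().upper())
    return m.groupdict() if m else None
```

Parsing ‘SW1 2AB’ this way yields the area ‘SW’ and district ‘1’, which is already enough to find the neighbourhood on most internet mapping services.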
Owing to London’s huge size and population, it has its own distinct variation of the postcode system in which the town code XX is replaced by an area code indicating the geographic part of the city – e.g. N (North), WC (West Central), EC (East Central), SW (South West), and so on. Confusingly, NE refers to Newcastle upon Tyne, 300 miles away, and S to Sheffield, 170 miles away. The Ordnance Survey’s 1:50000 or 1:25000 scale maps are astonishingly detailed and show contour lines, public rights of way, and access land. For pursuits such as walking, they are practically indispensable, and in rural areas show individual farm buildings and (on the larger scale) field boundaries.
Although few visitors come for the weather, the UK has a benign humid-temperate climate moderated by the North Atlantic current and the country’s proximity to the sea. Warm, damp summers and mild winters provide temperatures pleasant enough to engage in outdoor activities all year round. Having said that, the weather in the UK is very changeable over both short distances and periods of time, and conditions are often windy and wet. British rain is world renowned, but in practice it rarely rains more than two or three hours at a time, and parts of the country often stay dry for many weeks at a time, especially in the East. More common are overcast or partly cloudy skies. It is a good idea to be prepared for a change of weather when going out; a jumper and a raincoat usually suffice when it is not winter. In summer temperatures can reach 30°C (86°F) in parts, and in winter temperatures may be mild, e.g. 10°C (50°F) in southern Britain and −2°C (28.4°F) in Scotland. Because the UK stretches almost 800 miles from end to end, temperatures can vary quite considerably between north and south. Differences in rainfall are also pronounced between the drier east and wetter west. Scotland and north-western England (particularly the Lake District) are often rainy and cold.
Alpine conditions with heavy snowfall are common in the mountains of northern Scotland during the winter. The north-east and Midlands are also cool, though with less rainfall. The south-east and East Anglia are generally warm and dry, and the south-west warm but often wet. Wales and Northern Ireland tend to experience cool to mild temperatures and moderate rainfall, while the hills of Wales occasionally experience heavy snowfall. Even though the highest land in the UK rarely reaches more than 1,100m, the effect of height on rainfall and temperature is great.
Bank (public) holidays
Each country within the UK has a number of bank holidays, on which the majority of people do not work. Shops, pubs, restaurants and similar are usually open. Many UK residents will take advantage of the time off to travel, both within the UK and abroad. This makes transport links busier than usual and tends to increase prices. If your travel dates are flexible, you may wish to avoid travelling to or from the UK on bank holiday weekends.
The following 8 bank holidays apply in all parts of the UK:
- New Year’s Day (1 Jan)
- Good Friday (the Friday immediately before Easter Sunday)
- Easter Monday (the Monday immediately after Easter Sunday) (except in Scotland)
- Early May Bank Holiday (the first Monday in May)
- Spring Bank Holiday (the last Monday in May)
- Summer Bank Holiday (the last Monday in August, except in Scotland where it is the first Monday in August)
- Christmas Day (25 Dec)
- Boxing Day (26 Dec)
Northern Ireland has the following two additional bank holidays:
- St Patrick’s Day (17 Mar)
- Battle of the Boyne / Orangemen’s Day (12 Jul)
Scotland officially has two additional bank holidays:
- the day after New Year’s Day (2 Jan)
- St Andrew’s Day (30 Nov)
In practice, with the exception of Easter, Christmas and New Year holidays, UK bank holidays are virtually ignored in Scotland in favour of local holidays which vary from place to place.
Where a bank holiday falls on a Saturday or Sunday, it is moved to the following Monday. If both Christmas Day and Boxing Day fall on a weekend, the Boxing Day holiday is moved to the following Tuesday. A full list of bank holidays for future years can be viewed here. Content copyleft courtesy of the wonderful Wikitravel.
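The weekend-substitution rule for bank holidays is simple date arithmetic. The sketch below is illustrative only (the function names `observed` and `christmas_holidays` are invented for this example, not from any official source); it shows one way to compute the observed dates, including the interaction between Christmas Day and Boxing Day when both fall on a weekend.

```python
from datetime import date, timedelta

def observed(holiday):
    """Move a bank holiday falling on a weekend to the following Monday."""
    shift = {5: 2, 6: 1}.get(holiday.weekday(), 0)  # Sat -> +2 days, Sun -> +1
    return holiday + timedelta(days=shift)

def christmas_holidays(year):
    """Observed dates for Christmas Day and Boxing Day.

    If both fall on a weekend, their naive substitute days collide on the
    same Monday, so Boxing Day is pushed to the following Tuesday.
    """
    xmas = observed(date(year, 12, 25))
    boxing = observed(date(year, 12, 26))
    if boxing <= xmas:
        boxing = xmas + timedelta(days=1)
    return xmas, boxing
```

In 2021, for instance, 25 and 26 December fell on a Saturday and Sunday, giving substitute holidays on Monday 27 and Tuesday 28 December.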
That Time the U.S. Postal Service Tried Delivering Mail By Missile
Today, for the equivalent price of a decent cup of coffee, you can get the United States Postal Service to pick up and deliver a letter to anywhere in the continental United States in just a day or two. But it hasn’t always been that way, and there’s a reason they call it “snail mail”. To get around the problem, for a brief time in the 1950s the USPS dared to dream big – like rocket ship big… Yes, on June 8, 1959, the then-named U.S. Post Office Department, in conjunction with the US Navy, launched a rocket laden with what they officially dubbed MISSILE MAIL! This was not the first attempt at projectile-transmitted messages: arrows have frequently been used throughout history to send messages over rivers and castle walls and the like. In slightly more modern times, author Heinrich von Kleist suggested in an 1810 article titled “Useful Inventions” that shooting artillery shells loaded with letters would be a great way to rapidly send important mail throughout Germany by establishing a relay network of this type of artillery. That idea never got off the ground, but others who later had more or less the same idea had better success. For example, in the late 19th century in Tonga, residents of the island of Niuafo’ou decided to try using Congreve rockets to send and receive mail. You see, the island’s lack of beaches and harbour, as well as the presence of the second deepest oceanic trench in the world, the Tonga Trench, right next to it (making it impossible to anchor), meant getting mail from ship to land wasn’t something regularly done, despite ships frequently passing by. The ultimate solution to leverage the existing ship traffic here for sending and receiving mail was simply to have ships drop cans containing mail into the water and then blast their horns as they passed by. Strong swimmers would then swim out to try to collect the cans before the current did.
Likewise, the swimmers would carry messages from the island out to the shipping lane to drop off, with the canned letters picked up when the ships passed. This all eventually earned Niuafo’ou the nickname of Tin Can Island. But before they earned that moniker, they decided to go with the Congreve rockets, which is definitely a missed opportunity here in terms of a more badass nickname. In any event, the primary problem with using Congreve rockets, perhaps better remembered today via being immortalized in the lyrics of the Star Spangled Banner, for mail delivery was simply the inherent inaccuracy and unreliability of said rockets. This is notably illustrated by British officer Alexander Cavalié Mercer when discussing the medium-range variety during the Waterloo Campaign in 1815: “The order to fire is given – port-fire applied – the fidgety missile begins to sputter out sparks and wriggle its tail for a second or so, and then darts forth straight up the chaussée. A gun stands right in its way, between the wheels of which the shell in the head of the rocket bursts, the gunners fall right and left… our rocketeers kept shooting off rockets, none of which ever followed the course of the first; most of them, on arriving about the middle of the ascent, took a vertical direction, whilst some actually turned back upon ourselves – and one of these, following me like a squib until its shell exploded, actually put me in more danger than all the fire of the enemy throughout the day.” Fast-forwarding a few decades, among the numerous others throughout the world attempting rocket mail in the early to mid-20th century, arguably the most successful of all attempts at establishing it occurred independently in Austria and India. In the former, one Friedrich Schmiedl launched a series of rockets containing mail from one town to another, including one route spanning about 6 km, from St Radegund to Kumberg.
This is succinctly described in a 1934 issue of Popular Mechanics: “Each rocket carries 200 to 300 letters from the starting point, the Shocket, to Radegund or Kunberg, in the neighborhood of Graz, whence the mail is forwarded by regular postal service. All of the mail rockets have functioned perfectly, each flight being made according to scheduled plans without the loss of a single letter. Bearing special ‘rocket mail’ stamps, the letters are sealed in a metal container to prevent damage, but this precaution has been unnecessary, due to the accuracy with which the rockets have arrived at the destination.” Unfortunately, despite the extreme success of the project, Schmiedl’s efforts were cut short when the Austrian Post Office killed funding for rocket mail. Also unfortunately for Schmiedl, at least in terms of his place in rocket design history, WWII began not long after. Afraid his work would be used to develop rockets, not to carry mail or for scientific usage, but to carry explosives, he destroyed the records of his designs and gave up on pursuing rocket technology altogether, even when later offered a position to develop rockets in the U.S. after the war – he simply didn’t want any of his work to be weaponized. Going over to India, former dentist and then Secretary of the Indian Airmail Society Stephen Smith, from 1934 to 1944, shot off about 80 rockets containing mail (and countless other experimental rockets without mail). On top of letters, he also in one instance shot a rocket containing food supplies to help earthquake survivors. In addition to that, on June 29, 1935, he successfully shot a rocket across the Damodar river. The payload? Two live chickens named Adam and Eve. They survived the ordeal just fine and spent the remainder of their days in a zoo. Like Schmiedl, unfortunately WWII saw Smith’s work curtailed, and he died not long after the war ended. Going back across the pond to the U.S.
and there are several instances of rocket enthusiasts using various rocket-powered craft to send letters, including on February 23, 1936, when rockets were used to shoot a bundle of letters from Greenwood Lake, New York, to Hewitt, New Jersey, over a then-frozen lake. While the rockets ultimately crashed after a flight of only about half a mile, their payload was successfully collected by a Hewitt postal worker and taken to the post office for further processing. This all brings us back to June 8, 1959, and the U.S. Postal Service jumping on the rocket mail delivery bandwagon, but in a massively more advanced way than had ever been tried before. Although purported to be an altruistic endeavour designed to test the feasibility of sending mail via missile, with the Postmaster General, Arthur E. Summerfield, himself at the time waxing poetic about the potential the idea had, as far as the military was concerned, this was just “a huge flex” aimed squarely at the Soviets. You see, the Cold War was just beginning to heat up, and the sending of mail hundreds of miles via guided missile was seen by the Department of Defense as a great publicity stunt to show off the accuracy and precision of the United States’ nuclear arsenal. To this end, the missile chosen to carry the mail was a Regulus I – a cruise missile ordinarily tipped with a nuclear warhead that in this case had been replaced by two mail containers. Said containers were hand-loaded with the help of Summerfield. After this, he then headed off to the missile’s destination point. Being transported on that rocket were about 3,000 copies of a letter written by Summerfield addressed to everyone from the Postmasters of allied nations to President Dwight Eisenhower. In addition, it’s noted that everyone aboard the submarine the missile was launched from also got a copy of the letter as a sort of memento of the historic occasion.
And, ya, besides the inherent awesome with regard to sending mail via rocket power, the missile was launched from the USS Barbero submarine. The target, for lack of a better term, for the missile was a Naval Auxiliary Air Station in Florida just under 200 miles away. Launched a little after noon on June 8, 1959, the missile landed safely after a mere 22-minute flight. As mentioned, Summerfield was waiting to hand-collect the mail, and from there the letters were taken to a post office in Jacksonville, Florida, to be sorted like any other piece of mail. Enthusiastic about the success of the mission and the unprecedented speed with which mail had just been transported, Postmaster Summerfield was quoted as saying: “This peacetime employment of a guided missile for the important and practical purpose of carrying mail is the first known official use of missiles by any Post Office Department of any nation. Before man reaches the moon, mail will be delivered within hours from New York to California, to Britain, to India or Australia by guided missiles.” It’s unclear if Summerfield was in on the fact that the whole thing was intended as a show of force to the Soviets, as he seemed deadly serious about implementing rocket-powered mail more widely, even proudly reporting in the famous Missile Mail letter how the Post Office Department and Defense Department were going to work together to make the idea a reality. Despite Summerfield’s lofty claims about the prospects of rocket mail, the idea never really caught on. That said, other attempts have since been made, such as when XCOR Aerospace used one of its EZ-Rocket planes to carry mail for the USPS from Mojave to California City, demonstrating how in the perhaps not-too-distant future reusable rockets may make it economically viable to send physical mail and packages anywhere in the world within hours.
But as for now, rocket mail is still a pie-in-the-sky dream… which is a shame, because it sure makes Amazon’s otherwise futuristic one-day drone deliveries seem kind of boring. I mean, Jeff Bezos owns a huge stake in a rocket company AND Amazon. It doesn’t take a rocket surgeon to put two and two together here. Think of it: delivering Amazon Fire Sticks not within a day, but minutes, of ordering, all via shooting off slightly more literal Amazon fire sticks… probably with lasers involved somewhere. Just saying…
- A fun thing to do when researching other topics and coming across old newspaper or magazine articles is to peruse the ads and surrounding articles as well. On that note, doing just that to the aforementioned May 1934 edition of Popular Mechanics Magazine, describing Schmiedl’s momentous rocket mail deliveries in Austria, unearthed the following little gem about an amazing new device invented in Britain. The article includes a picture of a dapper gentleman casually sitting at his desk reading the paper while apparently talking to himself. But au contraire, he’s not talking to himself, because the future is now… err… then. The article explains, “Telephone conversations can [now] be conducted without holding a receiver to the ear or speaking directly into a transmitter by employing a recent British invention consisting of a box containing a sensitive microphone and a loud speaker.
The box is placed on a table and the person carrying on a telephone conversation may be seated in an easy chair several feet distant.”
Sources:
- The Rise and Fall of Rocket Mail
- Popular Mechanics, May 1934
- Popular Mechanics, May 1936
- Popular Mechanics – Rocket Mail
- Submarine Fires Missile
- Journal of the Waterloo Campaign
- Official U.S. Mail
- USS Barbero
- Regulus 1
- Congreve Rocket
- Delivering U.S. Mail Via Rocket
- Regulus Missile Mail Container
- Friedrich Schmiedl
- Stephen Smith
- Robert Zubrin
- Rocket Mail
- Missile Mail
- The Story of Tin Can Mail
The adder (Vipera berus) is the only venomous snake native to the UK. However, a number of dangerous snakes are kept as pets, and worldwide venomous snakes still cause significant mortality. There are three types of venomous snake:
1. Viperidae have long erectile fangs. They are subdivided into two types:
(a) Viperinae (true vipers, e.g. Russell’s viper [daboia], European adder), which are found in all parts of the world except America and the Asian Pacific.
(b) Crotalinae (pit-vipers, e.g. rattlesnakes, Malayan pit-viper), which are found in Asia and America. They have small heat-sensitive pits between the eyes and the nostrils.
The venom of both of these classes of snake is vasculotoxic.
2. Elapidae (cobras, mambas, kraits, coral-snakes) are found in all parts of the world except Europe. They have short, unmoving fangs, and the venom produces neurotoxic features. Venom from the Asian cobra and the African spitting cobra also produces local tissue necrosis.
3. Hydrophiidae (sea-snakes) are found in Asian Pacific coastal waters. They have short fangs and flattened tails. The venom is myotoxic.
Russell’s viper is the most important cause of snake-bite mortality in India, Pakistan and Burma. With viper bites there is local swelling at the site of the bite, which may become massive. Local tissue necrosis may occur, particularly with cobra bites. Evidence of systemic involvement occurs within 30 min, including vomiting and evidence of shock and hypotension; haemorrhage due to incoagulable blood can be fatal.
With elapid bites there is not usually any swelling at the site of the bite, except with Asian cobras and the African spitting cobra – here the bite is painful and is followed by local tissue necrosis. Vomiting occurs first, followed by shock and then neurological symptoms and muscle weakness, with paralysis of the respiratory muscles in severe cases. Cardiac muscle can also be involved.
With sea-snake bites the systemic features are muscle involvement, myalgia and myoglobinuria, which can lead to acute renal failure.
Cardiac and respiratory paralysis may occur. A firm pressure bandage should be placed over the bite and the limb immobilized. This greatly delays the spread of the venom. Arterial tourniquets should not be used, and incision or excision of the bite area should not be performed. The type of snake should be identified if possible. In about 50% of cases no venom has been injected by the snake bite, and antivenoms are not generally indicated (unless systemic effects are present) as they can cause severe allergic reactions. Nevertheless, careful observation for 12-24 hours is necessary, and antivenom must always be given when indicated, as the mortality of snake bite is 10-15% with certain snakes. General supportive measures should be given as necessary, as for all poisoning. These include diazepam for anxiety and intravenous fluids with volume expanders for hypotension. Treatment of acute respiratory, cardiac and renal failure is instituted as necessary. Specific measures, i.e. antivenoms, can rapidly neutralize venom, but only if an amount in excess of the amount of venom is given. Antivenoms cannot reverse the effects of the venom, so they must be given early. They do minimize some of the local effects and may prevent necrosis at the site of the bite. Antivenoms should be administered intravenously by slow infusion, the same dose being given to children and adults. Allergic reactions are frequent, and adrenaline (1 in 1000 solution) should be available. Antivenoms are usually rapidly effective. In severe cases the antivenom infusion should be continued even with allergic reactions, with subcutaneous injections of adrenaline being given as necessary. Large quantities of antivenom may be required. Some forms of neurotoxicity, such as those induced by the death adder, respond to anticholinesterase therapy with neostigmine and atropine. Local wounds often require little treatment.
If necrosis is present, antibiotics should be given together with initially minimal surgical treatment. Skin grafting may be required later. Antitetanus prophylaxis must be given. Antivenoms must be kept readily available in all snake-infested areas. Scorpion stings are a serious problem in the tropics and cause 1000 deaths per year in Mexico. The poison glands are situated in the end of the tail. Severe pain occurs immediately at the site of puncture, followed by swelling. This should be treated by a firm pressure bandage to avoid the spread of the neurotoxic venom. Signs of systemic involvement include vomiting, respiratory depression and haemorrhage. Treatment is supportive. Antivenom is available in certain countries. The black widow spider (Latrodectus mactans) is found in North America and the tropics, and occasionally in Mediterranean countries. The bite quickly becomes painful, and generalized muscle pain, sweating, headache and shock occur due to absorption of rapidly acting neurotoxins. No systemic treatment is required except in cases of severe systemic toxicity, when specific antivenom should be given where this is available. Intravenous calcium gluconate may help the muscle spasms. Loxosceles causes many bites in Central and South America. L. reclusa, the brown recluse spider, is also found in the southern USA. Spiders are often found in bedrooms, so that patients are often bitten at night. There is a burning pain at the site of the bite, followed by a necrotic ulcer in some cases. Systemic effects, which include fever, vomiting and haemolysis, are rare. No treatment is indicated except in severe cases, when an antivenom should be given if available. Phoneutria nigriventer, the banana spider, and Atrax robustus, the Sydney funnel-web spider, can both give nasty bites, which are occasionally fatal. Insect stings, e.g. from wasps and bees, and bites, e.g. from ants, produce pain and swelling at the puncture site.
Death occurs (12 per year in the UK) and is usually due to anaphylaxis, which requires urgent treatment. Patients who have severe local reactions to stings or a mild anaphylactic reaction should carry a Medi-jet syringe for self-administration of adrenaline should a further sting occur. Desensitization can be carried out, but the course is prolonged and often needs to be repeated.
Marine animals
There are many poisonous fish that can be dangerous. They are usually found in tropical waters, but cases have been described worldwide. Stingrays and scorpion fish are two examples that sting by injecting venom through barbed spines. There is immediate severe local pain and swelling, which may be followed by tissue necrosis. Systemic effects include diarrhoea, vomiting, hypotension, cardiac arrhythmias and convulsions. Treatment is supportive. Care should always be taken in waters where these fish are known to be present. Venomous Coelenterata include jellyfish, sea anemones and the Portuguese man-of-war. The tentacles contain toxin that, following a sting, produces painful wheals at the site of contact. These wheals may become necrotic. Rarely there are systemic side-effects, including abdominal pain, diarrhoea and vomiting, hypotension and convulsions. Treatment consists of removing the tentacles, having first applied acetic acid (vinegar) to them. Alcohol compounds should not be used. Only the octopus and cone-shells are venomous to humans. The blue-ringed octopus, which is found in Australia, has saliva which contains the neurotoxin tetrodotoxin. This flows into the wound from the beak of the octopus and can cause serious systemic effects. In cone-shells the venom is found in association with their radular teeth. A bite initially produces local numbness, which can then spread over the body and may eventually lead to paralysis. This can occur with fish and shellfish.
In some cases it is attributable to toxins, but most poisonings occur as a result of pathogens such as Salmonella or hepatitis A virus. Ichthyosarcotoxic fish contain toxins in their blood, skin and muscle and are the commonest cause of poisoning. CIGUATERA. Poisoning occurs chiefly with the reef-dwelling fish from around the Pacific and Caribbean. The fish contain ciguatoxins from the plankton Gambierdiscus. Most cases of poisoning are due to the red snapper, grouper, barracuda and amberjack fish, but many other species may be responsible. The poisonous fish cannot be distinguished from identical fish that do not contain the poison. The toxin is unaffected by cooking. Symptoms occur from a few minutes to 30 hours after ingestion of the fish. They include numbness and paraesthesia of the lips, abdominal pain, nausea, vomiting and diarrhoea. Visual blurring, photophobia, metallic taste in the mouth, myositis and eventual hypotension and shock can also occur. Treatment is symptomatic, but symptoms can last for up to 2 weeks. SCOMBROID FISH. Fish such as tuna, mackerel and skipjack contain a high content of histidine. This is decarboxylated by bacteria to histamine and, particularly if the fish are allowed to spoil, large amounts can accumulate in the fish, producing flushing, burning, pruritus, headache, urticaria, nausea, vomiting and bronchospasm 2-3 hours after ingestion. Treatment is symptomatic; care should be taken only to eat fresh fish. PUFFER-FISH. Tetrodotoxin-containing puffer-fish are found in both sea and freshwater areas of Asia, India and the Caribbean. Symptoms that follow ingestion are circumoral paraesthesia, malaise and hypotension, with more severe cases producing ataxia and neuromuscular paralysis. The mortality is 50-60%. SHELLFISH. Bivalve molluscs, e.g. mussels, oysters, scallops and clams, can acquire the neurotoxin saxitoxin from the dinoflagellate Gonyaulax. These protozoa colour the sea red, and molluscs should never be taken from such areas.
Symptoms are similar to those caused by tetrodotoxin, but are usually less severe. Treatment is symptomatic.
September 12, 2018
1. The rise of Fascism and Totalitarianism
Just over 100 years ago, Britain, France, Germany and Russia ruled half of the world – most of Europe, all of Africa, South Asia, most of Southeast Asia, and most of the Pacific region. They dominated China and were of course influential everywhere else. During the past century, including the two devastating World Wars when the West resorted to mechanised butchery and industrialised slaughter, more than 170 million people, mainly civilians, were killed. The West introduced unprecedented levels of totalitarianism and oppression by inventing and ruling through Communism, Fascism, Nazism, slavery and apartheid. Some six million Jews perished as the result of the Holocaust, in the same way that European adventurers and settlers had carried out the genocide of tens of millions of the native populations of the Americas, Australia and New Zealand. After the end of the Second World War and the intensification of colonialism, the world was divided between the Western Capitalist camp and the Eastern Communist camp. At times, the rivalry between the two blocs brought the whole world to the brink of extinction, the best-known example of which was the Cuban missile crisis. That is one incident that is relatively well known, but there were a number of close calls, some of them not even covered by mainstream media, and some of them became known only decades later. As General Lee Butler, the former Chief of U.S. Nuclear Forces, said: “We were just lucky to survive”. The rivalry between the two superpowers extended to many countries in Asia and Africa, as most countries had to attach themselves to one of the two camps in order to remain immune from the threat of the other superpower. Although fortunately the two superpowers did not engage in direct confrontation, there were many proxy wars fought between them at the expense of other peoples in Korea, Vietnam, and many other countries in Asia, the Middle East and Africa.
With the collapse of the Soviet Union, the United States became the sole “hyper-power” and for a while ruled the world almost unopposed. Thus, for a while, we had the era of unipolar American ascendance and hegemony throughout the world. The American military boasted that it had “full-spectrum dominance” on land, in the air and sea, and even in space. America’s military spending is almost equal to the military spending of all other countries combined, if one adds in the money that is spent on the CIA and the other 16 American intelligence organisations.
2. Islamic fundamentalism – and Christian
After the collapse of the Soviet Union, “Islamic fundamentalism” has become the great bogey. Many Western scholars have viewed the 1990s as the era of Islamic fundamentalism, and the end of the 20th and the beginning of the 21st century as the era of “the Islamic threat”. The irruption of Islam into the political landscape, in Iran and in many Islamic countries, is viewed as an anachronism. The Islamic Revolution in Iran 40 years ago caught everybody by surprise, and ever since the establishment of the Islamic Republic, America has tried by different means to bring it down and replace it with a pro-Western regime. Since the revolution, Iran has been under various degrees of sanctions, as well as different plots to crush it. Since then, “Islamic terrorism” has almost become synonymous with “Islamic fundamentalism”, and “Islamic fundamentalism” has become synonymous with Islam. The terrorist acts committed by a small number of militant Muslims, who often have grudges against their own rulers or against the countries that have invaded and destroyed their countries, are attributed to an inherently violent Islamic doctrine.
Although most of the terrorist groups, including the Afghan Mujahedin, the Taliban, Al Qaida and most of the terrorist groups in Syria, Iraq and the rest of the Middle East, have been created and nurtured by Sunni radicalism exported from some of the countries allied with the West, this has not reduced the hostility towards Iran. The disastrous wars in Iraq, and the Western attempts at regime change in Afghanistan, Iraq, Syria, Libya, Yemen and elsewhere, have given rise to the most virulent and dangerous forms of terrorism, as represented by militant Sunni groups under various names, such as the al-Nusra Front, or the so-called Islamic State in Iraq and Syria (ISIS). These groups overran a large area of territory in Iraq and Syria and nearly toppled the governments of those countries. However, although Iran has been at the forefront of fighting those terrorist groups, no sooner were those groups defeated and in retreat than Iran again occupied the position of the main bogey and the “biggest sponsor of terrorism in the world”, as the Americans would have it. It is important to point out that terrorism has not been limited to Muslims. As Olivier Roy, one of the greatest scholars of radical movements, explains in his book, The Failure of Political Islam: “A strange Islamic threat indeed, which waged war only against other Muslims (Iran/Iraq) or against the Soviets (Afghanistan) and caused less terrorist damage than the Baader-Meinhoff gang, the Red Brigade, the Irish Republican Army, and the Basque separatist ETA, whose small-group actions have been features of the European political landscape longer than Hizbullah and other jihad movements.” (See Olivier Roy, The Failure of Political Islam, I.B. Tauris, 1994, Preface, p. ix). No one criticises Christianity for the activities of those terrorist gangs, but any terrorist action carried out by a crazy Muslim or a radical Islamic group is often attributed to Islam.
This is not to say that terrorist acts committed by various Muslim groups against local rulers or against Western targets are not serious. They are very serious and have to be dealt with. There has been an ominous intensification of such terrorist acts in various countries, and if they remain unchecked, they may pose serious problems in the future too. The defeat of ISIS does not necessarily mean the end of terrorism, which may reveal itself in a different guise and more diverse forms, as we have seen in various European countries. America has also paid a high price as the result of the activities of some terrorist groups. We have witnessed the terrorist activities in the United States by Omar Abd al-Rahman and his associates who were originally involved in the assassination of President Sadat, and also the massive bombings at American embassies in Kenya and Tanzania where again Muslim groups were implicated. Of course, we had the most devastating example of that form of terrorism in the events of 9/11. However, an over-emphasis on the Islamic nature of these grievances can become a self-fulfilling prophecy and can create a situation that is much more difficult to deal with. At the same time, many unrelated terrorist activities in America and Europe have also been attributed to Muslims. Shortly after the Oklahoma City bombing on 19 April 1995, a leading British columnist Bernard Levin, writing in “The Times”, pondered: “Do you realise that in perhaps half a century, not more, and perhaps a good deal less, there will be wars in which fanatical Muslims will be winning? As for Oklahoma, it will be called Khartoum-on-the-Mississippi, and woe betide anyone who calls it anything else.”(Quoted in John Esposito, The Islamic Threat: Myth or Reality, Oxford University Press, 1999, p. 235). 
I remember seeing the cover story of a British tabloid newspaper on the same day, which published a photograph of the bombed building and a dead child with the caption: “In the name of Islam.” Of course, none of those newspapers apologised for their mistake when it was made clear that the Oklahoma bombing had been carried out by a friend and associate of the Christian teacher David Koresh, the founder of a Christian messianic movement called the Branch Davidian sect. The attack was carried out on the anniversary of the assault on Koresh’s headquarters in Waco, Texas, which had set fire to the whole compound, killing Koresh and at least 79 others, including many women and children. As it happens, I was watching television when the news of the attack on Koresh’s compound was being broadcast live. The forces of the US Bureau of Alcohol, Tobacco, and Firearms, who had been sent to arrest David Koresh, drilled a hole through the wall of the compound and pumped gas through it to force the people inside to come out. The gas was set aflame when it came into contact with fire inside, and the whole compound was set ablaze, the flames spreading very fast as the result of a strong wind. It is incredible that no one had thought of having fire engines or ambulances ready in case the attack went wrong. It was horrendous to watch dozens of men, women and children being burnt alive before the fire engines finally arrived. Even when it was established that David Koresh had originally been a member of the Seventh Day Adventist Church, claimed to have the gift of prophecy and later called himself a prophet, he and his movement were referred to as members of a cult, not as Christian fundamentalists or Christian terrorists. The founder of the Davidian movement, Victor Houteff, was a keen student of the Bible and taught Bible study classes, attracting large groups of Seventh Day Adventists.
Like many fundamentalist Christians, Houteff believed that God would pass judgement upon his people and purify his church, with only 144,000 people surviving. He wanted to establish the Davidic kingdom in Palestine, Texas. Koresh shared many of Houteff’s views, but went a step further: he wanted to implement God’s orders and establish a Davidic kingdom in Jerusalem. In 1985, he traveled to Israel, where he claimed to have had a vision that he was the modern-day Cyrus, the saviour of the Jews; hence his name Koresh, the Persian version of Cyrus. He believed that, like Jesus, he would be martyred. Until 1990, he believed that he would be martyred in Israel, but later he said that the prophecies of Daniel would be fulfilled in Waco and that his headquarters at the Mount Carmel Centre was the Davidic kingdom. There were similar anti-Islamic outbursts after the crash of TWA Flight 800 on 17 July 1996. I remember distinctly that the day after the crash, the BBC studio announcer interviewing an American official asked if the bomb explosion on the aircraft had been connected with the attack on an American air base in Khobar, Saudi Arabia. An exhaustive and costly investigation finally concluded that the cause of the accident had been the explosion of flammable fuel vapours in a fuel tank. However, the harm had already been done, and in the minds of millions of traumatised viewers and listeners the deadly explosion had been attributed to Muslim terrorists. After years of campaigning, and many promises by the Labour Party (when in opposition) that it would allow direct-grant Muslim schools if it came to power, the Labour government’s announcement that it would allow two Muslim direct-grant schools gave rise to a strong backlash.
The day after the news was announced, one of the tabloid newspapers devoted its entire front page to the picture of a Muslim school with the caption “Government surrender to segregation.” Although there are hundreds of Church of England, Catholic and Jewish schools in Britain, in the case of one Islamic school emotion-charged terms such as “surrender” and “segregation” were used.

3. More Islamophobia and hate in Britain

Britain is one of the most tolerant, multicultural and compassionate societies in the world. It has provided shelter to millions of Asians and Africans and to people of all faiths and none. British Muslims are perhaps more integrated into British society than is the case with Muslims in other European countries. Nevertheless, even here there has been, and to some extent there still is, a feeling of hostility towards Muslims that has been described as Islamophobia. A report by the Runnymede Trust, a race-relations think-tank, compiled by a committee of senior Christian, Muslim and Jewish scholars and religious figures and published on 28th December 1996, concluded that Britain had become a nation of Muslim haters, and that Islamophobia was in danger of becoming institutionalised unless the law was changed to outlaw religious as well as racial discrimination. The report concluded: “In 20 years it has become more explicit, more extreme, more pernicious and more dangerous… [it] is part of the fabric of everyday life in modern Britain, in much the same way that anti-Semitic discourse was taken for granted earlier in this century.” (For the text of the report see ‘The Observer’ magazine, 29th Dec. 1996). Great strides have been made since then in outlawing religious discrimination, but recent terrorist attacks have again revived a feeling of hostility and suspicion towards Muslims.
Congestive Heart Failure (CHF) is a chronic condition that affects the ability of the heart to pump blood adequately throughout the body. It’s a condition that has affected millions of people around the world. We created this study guide to provide you with an overview of this topic. We provided practice questions for your benefit as well. So, if you’re ready, let’s get started. What is Congestive Heart Failure? As previously mentioned, Congestive Heart Failure (CHF) is a progressive, chronic condition that impacts the ability of the heart muscle to pump blood throughout the body. It’s often caused by hypertension, coronary artery disease, or other heart valve conditions. When the ventricles are unable to pump blood effectively, blood and other fluids begin to accumulate in the heart. Fluid then eventually accumulates in the lungs, abdomen, and other parts of the body. This explains why fluid overload is one of the primary signs of CHF. When fluid begins to accumulate in the lungs, this is known as pulmonary edema, and it leads to several different breathing and respiratory issues. Signs and Symptoms of Congestive Heart Failure A patient with CHF may show the following signs and symptoms: - Fluid overload - Peripheral edema - Jugular venous distention Keep in mind that each patient is different; therefore, their signs and symptoms may vary. Congestive Heart Failure Diagnosis The patient’s signs and symptoms play a role in the diagnosis of CHF; however, the following diagnostic tests would also be useful: - Chest x-ray - Electrocardiogram (EKG) - Arterial Blood Gas (ABG) - Complete Blood Count (CBC) - Hemodynamic monitoring - Cardiac enzyme analysis As a Respiratory Therapist, you must know and understand what to look for when obtaining the results of each of these diagnostic tests.
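As a quick illustration of how results from several tests might be pulled together during a workup, here is a hedged sketch in Python. The cutoff values below (BNP and SpO2) are illustrative assumptions chosen for the sketch only, not clinical thresholds, and the helper function is hypothetical rather than part of any real workflow:

```python
# Hedged sketch: combining a few diagnostic findings into CHF "flags".
# The cutoffs below are illustrative assumptions, NOT clinical guidance.

BNP_CUTOFF_PG_ML = 100    # assumed illustrative BNP cutoff (pg/mL)
SPO2_CUTOFF_PCT = 90      # assumed illustrative hypoxemia cutoff (%)

def chf_workup_flags(bnp_pg_ml, spo2_pct, crackles):
    """Return a list of findings consistent with CHF (illustrative only)."""
    flags = []
    if bnp_pg_ml > BNP_CUTOFF_PG_ML:
        flags.append("elevated BNP")
    if spo2_pct < SPO2_CUTOFF_PCT:
        flags.append("hypoxemia")
    if crackles:
        flags.append("crackles on auscultation")
    return flags

print(chf_workup_flags(450, 86, True))
# → ['elevated BNP', 'hypoxemia', 'crackles on auscultation']
```

The point of the sketch is simply that no single test is diagnostic; the signs, symptoms, and test results are interpreted together.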
Congestive Heart Failure Treatment The treatment methods for CHF will vary from patient to patient depending on the severity of their signs and symptoms. As a Respiratory Therapist, one thing that you may notice is severe hypoxemia, which can be treated with oxygen therapy. For example, the patient may require 100% oxygen immediately, which can be delivered via a nonrebreathing mask. Diuretic agents would be recommended to treat fluid overload. Some other medications that may be considered include: - Preload reducers - Afterload reducers - Positive inotropic agents - Analgesic medications - ACE inhibitors Noninvasive ventilation may be indicated to support the patient’s breathing and help with oxygenation and/or ventilation. If BiPAP is administered and the patient continues to deteriorate, intubation and mechanical ventilation would be indicated. Congestive Heart Failure Practice Questions: 1. What is Congestive Heart Failure? Congestive heart failure is a condition where the heart cannot pump enough blood and oxygen to the body’s tissues. It is a chronic and progressive inability of the heart to pump sufficiently to meet the body’s metabolic needs. 2. What occurs with a normal heart? The heart pumps enough blood to match the body’s need for oxygen. 3. In congestive heart failure, the heart is not able to what? To move as much blood as it should with each beat. 4. In respiratory therapy school, congestive heart failure is typically associated with what disease? 5. What are two common causes of left-sided heart failure? Myocardial infarction and hypertension. 6. What are the forward effects of left-sided congestive heart failure? The left ventricle is unable to pump effectively, which causes decreased cardiac output, decreased tissue perfusion, and tissue hypoxia. 7. What are the backward effects of left-sided congestive heart failure? Blood coming into the left ventricle cannot be pumped forward, so blood backs up into the lungs. 8. What is often the earliest sign of heart failure?
Dyspnea on exertion. 9. What are the three common causes of right-sided congestive heart failure? Left ventricle failure, pulmonary hypertension (cor pulmonale), and right ventricle myocardial infarction. 10. What is the forward effect of right-sided congestive heart failure? The left ventricle receives inadequate volume to pump, so cardiac output falls. 11. What is the backward effect of right-sided congestive heart failure? The right ventricle is unable to pump forward, so blood backs up in the venous system. 12. What is the goal of treatment for congestive heart failure? To decrease the heart’s workload and improve cardiac output. 13. Why are diuretics used for congestive heart failure? To decrease cardiac workload by decreasing the fluid volume that the heart has to pump. 14. When does congestive heart failure occur? It occurs when the heart is unable to pump sufficiently to maintain blood flow to meet the body’s needs. 15. What are the common causes of heart failure? Coronary artery disease, previous myocardial infarction, hypertension, atrial fibrillation, valvular heart disease, excessive alcohol use, and sepsis. 16. What diagnostics are used for congestive heart failure? BNP (B-type natriuretic peptide), which measures the severity of heart failure, chest x-ray, and EKG. 17. What is left-sided heart failure? Left-sided heart failure happens when the left ventricle fails. This failure causes blood to back up into the lungs, causing respiratory symptoms as well as fatigue due to an insufficient supply of oxygenated blood. 18. What are the signs and symptoms of left-sided heart failure? Increased rate and work of breathing, rales/crackles heard in the lungs, dyspnea on exertion, and orthopnea. 19. What is right-sided heart failure? Right-sided heart failure occurs when the right ventricle has difficulty pumping blood to the lungs.
It is often caused by issues within the pulmonary circulation, such as pulmonary hypertension or pulmonary stenosis, and backward failure of the right ventricle leads to congestion of the systemic capillaries. 20. What are the signs and symptoms of right-sided heart failure? Peripheral edema, ascites, liver enlargement, and jugular vein distention. 21. What management is available for heart failure? Diuretics, anti-hypertensives, smoking cessation, fluid restriction, and a low-sodium diet. 22. How much of the thoracic cavity should the heart cover? 23. What is traditional congestive heart failure? The syndrome in which cardiac output does not keep pace with peripheral demands for blood flow, and oxygen cannot get to the organs of the body. 24. What early signs can be observed in patients with congestive heart failure? Reduced exercise tolerance (early fatigue). 25. What can be expected in patients with congestive heart failure? Shortened life expectancy. 26. What may be the principal manifestation of nearly every form of heart disease? 27. How many deaths occur each year due to congestive heart failure? 28. How many hospitalizations occur each year due to congestive heart failure? 1 million hospitalizations. 29. What percent of patients diagnosed with systolic congestive heart failure will still be alive within five years? 30. How is congestive heart failure characterized? Intravascular and interstitial volume overload and manifestations of inadequate tissue perfusion. 31. What are signs of intravascular and interstitial volume overload? Shortness of breath and rales. 32. What are symptoms of inadequate tissue perfusion? Impaired exercise tolerance and fatigue. 33. Approximately how many people in the United States have congestive heart failure? 5 million people. 34. What age group comprises 75% of the patients with congestive heart failure? 65 years and older. 35. What are five possible causes of left-sided failure?
Ischemia, hypertension, myocardial infarction, dilated cardiomyopathy, and restrictive cardiomyopathy. 36. How does ischemia cause left-sided failure? Low blood flow causes damage to the heart myocardium, causing it to lose function as a pump. 37. How does hypertension cause left-sided failure? Hypertension causes left ventricular hypertrophy. A thicker wall is harder to oxygenate properly, and ischemic damage causes loss of pump functionality. 38. What is dilated cardiomyopathy (DCM)? Dilated cardiomyopathy is a disorder of four-chambered dilatation in which all chambers are stretched too wide. 39. How can DCM cause left-sided failure? DCM stretches the chambers of the heart too wide. This stretching of the muscle impedes its ability to contract, causing congestive heart failure. It is important to note that studies have shown that DCM patients may also have an impaired Frank-Starling mechanism due to troponin mutation, which is why you would not see the normal increase in contraction force due to increased stretch. 40. How can myocardial infarction cause left-sided failure? Dead cardiac myocytes cannot pump. 41. What is restrictive cardiomyopathy and how can it cause left-sided failure? Restrictive cardiomyopathy is a disorder in which the heart cannot fill properly; thus the heart cannot pump sufficiently. 42. What is the primary consequence of left-sided heart failure? Pulmonary congestion is the direct consequence of left-sided heart failure. When the left side cannot keep up with the right, blood becomes backed up in the blood vessels of the lungs. 43. What intravascular pressure increases during pulmonary congestion? 44. What are four symptoms of pulmonary congestion? Pulmonary edema with dyspnea, PND (paroxysmal nocturnal dyspnea), orthopnea, and crackles. 45. What is orthopnea? It is difficulty breathing while lying flat. It is one of the signs of congestive heart failure. 46. What causes crackles? Edema/fluid in the lung interstitium. 47.
What is heart failure a problem of? 48. What are the jobs of the heart and how does this relate to heart failure? The heart has to supply every organ and keep the lungs free of fluid. As soon as this is interrupted, we become symptomatic. 49. What is heart failure? Heart failure is a complex syndrome due to a structural or functional disorder resulting in an inability of the ventricle to fill with or eject blood, leading to a mismatch between the metabolic supply and demands of the body. 50. What can chronic heart failure result from? Chronic heart failure may result from a wide variety of cardiac insults. The etiologies can be grouped into those that impair ventricular contractility, increase afterload, and impair ventricular relaxation and filling. 51. How prevalent is CHF? In the United States, 6 million cases with 500,000-700,000 new cases per year. 52. What is the mortality rate for CHF? There is 50% mortality within 5 years of diagnosis, which is a higher mortality than the combined average for all types of cancer, and 10% mortality within 1 year of diagnosis. This disease is very debilitating; at the end stage there is little quality of life and patients are essentially bedridden. 53. Which respiratory conditions is CHF most related to? 54. What is the primary sign or symptom of CHF? 55. What are three diagnostic tests that can help with the diagnosis of CHF? ABG, EKG, and CBC. Providing care for patients with Congestive Heart Failure (CHF) is common for Respiratory Therapists. This is why it’s a requirement to learn about this condition. Hopefully, the information in this study guide can help you do just that. We have a similar guide that focuses on Pulmonary Edema that I think you would find useful. Thank you so much for reading and, as always, breathe easy my friend. - Des Jardins, Terry, and George Burton. Clinical Manifestations and Assessment of Respiratory Disease. 8th ed., Mosby, 2019. [Link] - Kacmarek, Robert M., et al. Egan’s Fundamentals of Respiratory Care. 12th ed., Mosby, 2020. [Link] - “Congestive Heart Failure And Pulmonary Edema.” National Center for Biotechnology Information, U.S. National Library of Medicine, 10 Aug. 2020, www.ncbi.nlm.nih.gov/books/NBK554557. - “Heart Failure: Diagnosis, Management and Utilization.” PubMed Central (PMC), 1 July 2016, www.ncbi.nlm.nih.gov/pmc/articles/PMC4961993. - “Chronic Heart Failure: Contemporary Diagnosis and Management.” National Center for Biotechnology Information, U.S. National Library of Medicine, Feb. 2010, www.ncbi.nlm.nih.gov/pmc/articles/PMC2813829. Disclosure: The links to the textbooks are affiliate links which means, at no additional cost to you, we will earn a commission if you click through and make a purchase.
A method of manufacturing high-pressure cylinders

(57) Abstract: The invention relates to mechanical engineering and can be used in the manufacture of pressure vessels, in particular steel cylinders for fire extinguishers. A cap with a convex bottom is rolled from a steel disk blank, and the bottom of the cap is then stamped to give its support surface a stable form. The cylindrical body of the container is formed from the cap by drawing with thinning of its walls, after which the neck of the cylinder is formed by cold or hot crimping of the edge of the body. The container is made from mild steel. Drawing with wall thinning is carried out in 2-6 operations with intermediate recrystallization annealing. Cold crimping of the edge of the body is carried out in 2-6 operations with recrystallization annealing after every 1 or 2 operations, while hot crimping of the edge of the body is carried out in 1-2 operations. The invention simplifies the manufacturing process, reduces the metal content of the container, gives the cylinder a stable shape that is convenient in operation, and reduces its cost.

The invention relates to mechanical engineering and can be used in the manufacture of vessels that contain gas under high pressure. A known method produces high-pressure steel cylinders from pipe in accordance with GOST 949-73, “Steel cylinders of low and medium capacity for gases at pressures up to 20 MPa (200 kgf/cm2)”; it is used, for example, at the Pervouralsk metallurgical plant. The essence of that method is that the cylindrical container is made from a segment of steel pipe of the required size and wall thickness. The pipe ends are heated and rolled in to form the bottom and neck, and the bottom is additionally sealed by welding. Alloy or carbon steel is used as the material. Such a cylinder has a welded, rounded bottom and a rolled neck portion of the body. The container has a large mass and high cost.
This method is complex and inefficient: it requires special equipment for rolling the pipe and sealing the bottom and neck, as well as pneumatic testing for quality control of the welded bottom. A known method of manufacturing steel high-pressure cylinders eliminates the need to seal the bottom and carry out pneumatic tests. In this method the cylinders are made from sheet blanks (abstracts of “Problems of development of technology and advanced equipment for production of steel cylinders made of sheet metal”, p. 126). The cylinders are made of stainless steel. The sheet is unevenly heated and a sleeve (cap) is drawn from it on forging equipment. Because alloy steel is only weakly amenable to drawing, additional rolling or pressing of the sleeve is required to form a container body of the required length. This method is also complicated and inefficient, and requires special rolling equipment. The closest to the claimed method in technical essence (the prototype) is a method of manufacturing high-pressure cylinders from sheet blanks implemented by the Italian firm Faber (E. Grigoriev et al., Gas-powered vehicles, M.: Mashinostroenie, 1989, pp. 100-102). The essence of the method is as follows. A disk (circle) of the required size is cut from steel sheet of the required thickness and drawn into a cap with a continuous convex bottom, from which a cylindrical container body and neck are then formed by deformation of its walls. In this method, deformation of the cap is carried out by cold rolling with rollers, and the neck is formed by hot sealing: the end of the body is heated and the neck is rolled in with rollers. This method has the following disadvantages. The manufacture of the cylinders requires rolling mills specifically designed for this operation; they are expensive and require skilled maintenance.
The container has excess thickness in the neck portion, determined by the cap-seaming operation; that is, it wastes metal. The cylinder is inconvenient in use because it has a convex bottom, which is determined by the difficulty of processing alloy steel. The method is complex and inefficient, and a cylinder made in this way has a high cost. The objective of the invention is to simplify the method of manufacturing high-pressure cylinders, to give the container a stable form, to reduce its metal content and to reduce its cost. The problem is solved as follows. As in the prototype, a cap with a rounded bottom is rolled (stamped) from a round steel blank (disk), and a cylindrical body with a neck is formed from it. Unlike the prototype, after rolling the cap the bottom of the container is additionally stamped with a stable outer support surface, the body of the container is formed by drawing with thinning of its walls, and the neck is formed by cold or hot crimping. The tensile and compressive deformation in these operations is limited; above the limit, cracks, tears and folds appear. With the aim of preserving the integrity of the material, drawing with wall thinning is carried out in several (2-6) operations, between which recrystallization (high-temperature) annealing is conducted to restore the ductility of the steel. Cold crimping of the edge of the body is likewise carried out in 2-6 operations, with recrystallization annealing between operations or pairs of operations. The number of drawing and crimping operations is determined by the plastic properties of the material and the amount of its deformation, i.e. by the ratio of the maximum allowable deformation of the material to the dimensions of the original blank and cylinder (the thickness of the blank and the thickness of the container walls, the diameter of the blank and of the cylinder) (B. N. Romanovsky, Handbook of Cold Stamping, L.: Mashinostroenie, 1971, pp. 103-267). The proposed method is implemented as follows.
From a sheet of mild steel, a round blank (disk) is cut and cold stamped: it is turned into a cap with a continuous convex bottom. The convex bottom of the cap is additionally stamped, ensuring an efficient distribution of metal in the bottom part of the cylinder and giving it a ledge around the periphery of the bottom. The body of the container is then formed by cold drawing with thinning of its walls. The drawing may be carried out on a hydraulic press, with a punch through a die. With the aim of preserving the integrity of the material, drawing with thinning is carried out in 4 operations. After each drawing operation with thinning, annealing is conducted (heating in a furnace for the purpose of recrystallization, i.e. restoring the ductility of the steel), followed by removal of scale, for example by etching or mechanical cleaning, and application of a lubricant, for example a phosphate coating with soap and water. The neck is then formed on the cylindrical body by cold crimping of the open edge of the body using hydraulic presses. The crimping is carried out in 6 operations for cylinders with a diameter of 140 mm and in 5 operations for cylinders with a diameter of 110 mm; after every 2 crimping operations a recrystallization anneal is conducted. After the last crimping operation, the stresses in the steel are relieved by low-temperature annealing. The neck can also be formed by hot crimping, i.e. crimping of the preheated edge of the body; in this case the number of crimping operations is reduced to 1-2 and recrystallization annealing is not carried out. The tensile strength of a cylinder made in this way is 65 to 70 kgf/mm2, which makes these cylinders suitable for fire extinguishers. Due to its high ductility, low-carbon steel makes it possible to form the body of the cylinder by drawing and the neck by crimping, using well-known universal equipment. Such a process is impossible with stainless steel, which resists drawing and cracks when crimped.
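The operation counts described above (several thinning draws, then 5-6 crimping passes with an anneal after every 2) follow from the limit on how much the steel may be deformed per pass. The following Python sketch shows how such a schedule could be derived. The 35% per-pass thinning limit and the 6 mm to 1.5 mm thickness figures are illustrative assumptions, not values from the patent; with those assumptions the formula happens to give the 4 drawing operations the text mentions:

```python
import math

# Hedged sketch: deriving a drawing/crimping schedule like the one
# described above. The 35% per-pass thinning limit is an assumed
# illustrative value; the patent only states that the number of
# operations follows from the allowable deformation and the
# blank/cylinder dimensions.

def draw_operations(blank_thickness_mm, wall_thickness_mm,
                    max_thinning_per_pass=0.35):
    """Number of thinning-draw passes needed if each pass may reduce
    the wall by at most max_thinning_per_pass of its current thickness."""
    keep = 1.0 - max_thinning_per_pass            # fraction kept per pass
    ratio = wall_thickness_mm / blank_thickness_mm
    return math.ceil(math.log(ratio) / math.log(keep))

def crimp_schedule(num_crimps, anneal_every=2):
    """Crimping sequence with a recrystallization anneal after every
    `anneal_every` passes and a final low-temperature stress relief."""
    steps = []
    for i in range(1, num_crimps + 1):
        steps.append(f"crimp {i}")
        if i % anneal_every == 0 and i < num_crimps:
            steps.append("recrystallization anneal")
    steps.append("low-temperature stress-relief anneal")
    return steps

# Assumed example: thinning a 6 mm blank to a 1.5 mm wall.
print(draw_operations(6.0, 1.5))   # → 4
print(crimp_schedule(6))           # 140 mm diameter: 6 crimping passes
```

The exponential form of `draw_operations` reflects that each pass removes a fraction of the *current* wall thickness, so the total reduction compounds multiplicatively across passes.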
Compared with roll-sealing of the cap, forming the neck by crimping gives a reduced wall thickness in the neck portion. The use of low-carbon steel also makes it possible to form a container bottom of stable shape by cold stamping. The cold-forming method is highly productive and technologically simple. The equipment used to implement this method is much cheaper and easier to maintain than the rolling mills of the prototype method, and low-carbon steel is cheaper than alloy steel. The universal equipment employed does not require skilled maintenance. The method makes it possible to reduce the metal content of the container, to give the container a stable shape that is convenient in operation, and to reduce its cost. A method of manufacturing high-pressure cylinders, including rolling a cap with a convex bottom from a steel disk blank and forming a body with a neck, wherein the bottom of the cap is stamped to give the support surface a stable form, the body of the container is formed by drawing with thinning of its walls, the neck is formed by cold or hot crimping of the edge of the body, drawing with thinning of the walls is carried out in 2-6 operations with intermediate recrystallization annealing, cold crimping of the edge of the body in 2-6 operations with recrystallization annealing after every 1 or 2 operations, hot crimping of the edge of the body in 1-2 operations, and low-carbon steel is used as the material for manufacturing the cylinder. FIELD: plastic working of metals, namely manufacture of high-pressure bottles. SUBSTANCE: method comprises steps of forming shell; swaging at least one open end of shell; forming bottom and mouth while making at least two mutually joined cone surfaces and increasing central angle from shell to mouth; at least after one swaging operation performing heat treatment of swaged end of shell. High-pressure bottle includes shell and two bottoms. One bottom has mouth for placing locking means. Bottom and mouth are made as one piece.
Bottom from shell until mouth is in the form at least of two joined lateral surfaces of circular regular truncated cones. Each such joint is performed due to joining lateral surfaces of truncated cones along lines of bases of their small and large diameters respectively. Said joints may be smooth, with radius transitions. EFFECT: simplified design of bottle, enhanced technological effectiveness of making it. 6 cl, 4 dwg FIELD: plastic working of metals. SUBSTANCE: method comprises steps of making shell, upper bottom, mouth, lower bottom and joining all parts by welding; forming lower bottom as one piece with backing flanged ring; inserting lower bottom into shell along backing ring until flange rests upon shell; butt welding shell and lower bottom on backing ring. EFFECT: enhanced centering due to accurately inserting lower bottom to shell, improved quality of welded seam of lower bottom and shell, lowered labor consumption for making bottle. 1 dwg, 1 ex FIELD: machine engineering, possibly manufacture of sealing envelopes of corrosion resistant steels of metal-plastic high-pressure bottles. SUBSTANCE: method comprises steps of separately making metallic convex bottoms with cylindrical collar and backing ring. Thermal expansion factor of backing ring exceeds that of bottoms; outer diameter of backing ring provides close fit of bottoms onto it at 20°C; cooling preliminarily assembled backing ring with bottoms until cryogenic temperature; sliding bottoms on backing ring in cooled state until mutual touch of cylindrical collars of bottoms; heating assembly up to 20°C in order to provide close fit of backing ring in cylindrical collars of bottoms; welding bottoms along butt of assembly and then removing backing ring by chemical milling. EFFECT: simplified process of making envelopes with enhanced mass characteristics. 2 cl, 4 dwg, 1 ex FIELD: storing or distributing liquids or gases. 
SUBSTANCE: method comprises molding at least two sections of the vessel, one bottom of which has an opening, and joining the sections together by outer and inner seam welds. The inner welding is performed before the outer welding. The inner and outer seams are overlapped. The inner seam weld is performed through the opening in the bottom of one of the sections to be welded. EFFECT: reduced labor and metal consumptions. 1 cl, 3 dwg FIELD: plastic metal working, namely shaping hollow variable cross section bodies. SUBSTANCE: method comprises steps of shaping in die having opening for introducing hollow body and profiled inner surface corresponding to desired profile of hollow body; creating on inner surface of die a temperature gradient increasing in direction of hollow body motion from minimum temperature near inlet opening of die to maximum temperature in zone of least cross section of body; setting maximum temperature according to condition of largest yielding of body material; setting minimum temperature according to condition of keeping stability of body and heating body to said maximum temperature during its motion in die. In order to make metallic liner of metal-plastic high-pressure vessel from tube blank, ends of tube blank are squeezed in die with profiled inner surface. Tube blank is guided into die along its lengthwise axis. On inner surface of die a temperature gradient is created in such a way that it increases in direction of blank motion along lengthwise axis of die from minimum temperature at inlet of die to maximum temperature in zone of least cross section of liner. Maximum temperature is set according to condition of largest yielding of tube blank material; minimum temperature is set according to condition of keeping stability of blank and heating blank to said maximum temperature during its motion in die.
Apparatus for shaping hollow variable cross section body includes die mounted on stationary support and having inner surface of preset profile and also having inlet opening for feeding hollow body. Apparatus also includes slide on which the hollow body can be mounted for movement along lengthwise axis of die. Die is provided with heating unit for creating on inner surface of die a temperature gradient increasing along lengthwise axis of die from its inlet opening. EFFECT: improved quality of articles due to prevention of stability loss. 16 cl, 2 ex, 6 dwg FIELD: plastic metal working. SUBSTANCE: method comprises setting the blank on the mandrel provided with the cylindrical and shaped sections, locking the blank in the cylindrical section of the mandrel, and affecting the blank by deforming rollers. The shaped section is initially formed by moving the rollers in different trajectories during one or several runs. Upon unlocking the cylindrical section, the blank is locked in the shaped section and the cylindrical section is drawn by moving the rollers in the opposite direction. EFFECT: expanded function capabilities. 1 cl, 3 dwg, 2 tbl FIELD: mechanical engineering; pressure vessels. SUBSTANCE: according to proposed method, spheroidal bottom is made of sheet with thickness 0.5-0.6 of the designed thickness of the initial elliptical bottom and is subjected to deformation at an increased process upsetting pressure of 1.3-1.5 of the designed value. EFFECT: facilitated and reduced cost of manufacture. 2 cl, 1 dwg FIELD: plastic metal working. SUBSTANCE: invention can be used for making of bottles from sheet blanks. Proposed method includes making of shell, top plate, neck, bottom plate made integral with support ring, flanging and shoe, assembling and connecting the parts by welding. Shell is butt-welded to bottom plate on support ring. Both plates are made of blanks of equal size and in form of bottom plate.
When making top plate, support part of shoe is calibrated to bring its diameter to diameter of shell. EFFECT: reduced labor input. 1 ex, 1 dwg FIELD: plastic working of metals, namely manufacture of high pressure vessels from recovered ammunition. SUBSTANCE: method comprises steps of producing blanks, subjecting them to hot deformation, hot molding of vessel mouths, mechanically working vessel mouths; subjecting articles to heat treatment for providing desired strength of vessels; using as blanks bodies of recovered artillery fragmentation type or high-explosive shells with removed collar; cutting each shell in joining zone of its ogival and cylindrical parts for preparing two blanks, namely cone blank with through opening and sleeve-like blank with bottom. Heat treatment is realized according to preset mode. EFFECT: lowered cost, improved factor of metal usage due to using recovered bodies of artillery fragmentation type high-explosive shells, enhanced quality of using shell steels. 4 cl, 2 dwg, 2 ex, 5 tbl FIELD: pressure vessels. SUBSTANCE: plastic-coated metallic vessel comprises outer load-bearing plastic shell and inner thin-walled welded steel shell whose intermediate section is cylindrical, two bottoms, and connecting pipe. The connecting pipe and at least one of the bottoms are made as one piece. The method comprises rolling out the rod from one end, thus defining the flange having a flat circular surface on one side, working the unrolled section of the rod to form the outer surface and inner passage of the connecting pipe, rotary drawing of the flange, moulding the bottom to a given shape, connecting the bottom to the intermediate cylindrical section, and welding the parts. EFFECT: reduced labor consumption. 4 cl, 3 dwg
The bandwidth of a measurement system is its most important figure of merit. As part of situational awareness, we want to verify that the measurement system's bandwidth is at least 2x higher than the DUT's signal bandwidth, so that we do not miss the important features of the signal or introduce measurement artifacts. While we get the scope's bandwidth from the vendor, as soon as we add a cable, probe, or amplifier to the scope, we decrease the system bandwidth. The new system bandwidth is as important to know as the scope's bandwidth, but it is generally difficult to measure outside a calibration lab. We offer a simple method of evaluating the transfer function and system bandwidth of any probing system using a wide-band noise source.
Measuring transfer functions
When we use a single figure of merit, like bandwidth, to describe a transfer function, we are making a lot of assumptions: the transfer function looks like a low pass filter, the passband region is flat, and the roll-off region is the transition from flat to a constant downward slope. We could use another figure of merit, like the filter order, to describe how fast the transfer function drops off with frequency. An example of the measured transfer function of a scope with minimal connections to the source is shown in Figure 1. There are a number of ways to measure the transfer function of a measurement system. Unfortunately, using a VNA is not one of them. We really want to include the scope's amplifier and whatever DSP equalization is built into the scope's electronics, in addition to the passive cables and probes. Other than in a calibration lab, we can't pull these pieces out and connect a VNA from the input to the output. However, we could use a sine wave source with a flat frequency response, sweep its frequency from 1 MHz to 10 GHz, and measure the amplitude of the sine wave at various frequency steps. This requires a very flat, high bandwidth sine wave source.
We could use a very fast step signal as an input. If this is part of a 10 MHz clock, for example, its spectrum would be a comb pattern of peaks at odd multiples of 10 MHz. But the amplitude of each harmonic drops off as 1/f, so there is worse signal-to-noise ratio (SNR) at higher frequency, where we are more interested in the transfer function roll-off. The method we introduce here uses a Noisecom NC1100 wide-band noise source with frequency components extending from 1 MHz to > 10 GHz. We measure this signal at the input to the Teledyne LeCroy WavePro HD 804 using the highest bandwidth connection we can, calculate its FFT to get the spectrum, and use this as the stimulus. Then we insert the cables and probes and measure the changed response. On a log amplitude scale, the difference between the distorted spectrum and the reference spectrum is a measure of the transfer function of just the cable-probe system. This method not only gives us information about the probes and interconnects, but it also tells us how the scope responds to the measurement system, information which cannot be measured by a VNA alone.
Transfer Functions from a Noise Source
A wideband white noise source will have a power density relatively constant over frequency. Compared to a square wave source, there will be higher signal power at high frequency, offering a constant SNR across the frequency range. In effect, a wide-band noise source is probing all frequencies at the same time. It's only when viewed in the frequency domain that we see all the frequency components. The process we use to measure the transfer function of the measurement system can be applied to any measurement system:
- Using the fastest sampling rate, measure the voltage noise directly from the source and calculate the FFT. This gives the highest frequency limit to the FFT.
- Select a time base which defines the bin size for reasonable resolution.
- Average the FFT 300 times to reduce the inherent noise and get a smoother signal.
- Store this as the reference received signal.
- Plug in the probe-cable system under test, and measure its spectrum with the same settings.
- Subtract the reference spectrum from the DUT spectrum on a log scale; the difference is the transfer function change from the measurement system.
- Use this approach to explore the transfer function of your systems.
The first step is to characterize the noise source. The highest sample rate for the scope used in this example is 20 GSamples per second. This results in a highest frequency in the FFT of 10 GHz, limited by the Nyquist sampling rate. The time base is 1 usec full scale, or 100 nsec/div. This results in a bin size, or resolution in the spectrum, of 1 MHz. This is the base setup. The FFT is calculated using a von Hann window function. The time domain signal and FFT are shown in Figure 2 for the case of a single FFT sweep and then after 300 consecutive FFT sweeps are averaged. The averaged spectrum is the reference spectrum. If we assume the scope's intrinsic transfer function is flat, this measurement suggests the noise source does not have a perfectly flat frequency response. It varies about +/- 3 dB in amplitude. But it has significant energy up to very high frequency. The roll-off in the spectrum above 8 GHz is a direct measure of the scope's rated bandwidth of 8 GHz. This measurement is not possible with a VNA. The upper plot is the calculated, normalized transfer function of the measured spectrum minus the reference spectrum. In this example, it is a flat 0 dB, since we are looking at the reference noise source compared to itself. Many scopes offer a front-end filter to reduce the measurement bandwidth. When the signal's bandwidth is low, reducing the scope bandwidth will reduce the high frequency noise, where there is no signal content, increasing the SNR of the measurement.
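The averaging recipe above is easy to simulate. The sketch below (Python/NumPy; the sample rate, record length, sweep count, and von Hann window follow the setup described, while the "probe" is a stand-in 5-tap moving-average low-pass, not a model of any real hardware) walks the same steps: windowed FFTs of white noise, 300 averages, then a dB subtraction of reference from DUT spectra:

```python
import numpy as np

fs = 20e9            # 20 GSa/s sample rate -> 10 GHz Nyquist limit
n = 20000            # 1 us record -> fs/n = 1 MHz FFT bin size
freqs = np.fft.rfftfreq(n, 1 / fs)
window = np.hanning(n)                    # von Hann window, as in the article

def averaged_spectrum_db(system, sweeps=300, seed=0):
    """Average `sweeps` windowed FFT magnitudes of white noise passed through `system`."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(len(freqs))
    for _ in range(sweeps):
        noise = rng.standard_normal(n)    # one record of wide-band noise
        acc += np.abs(np.fft.rfft(system(noise) * window))
    return 20 * np.log10(acc / sweeps)

# Reference path: noise source straight into the "scope" (modeled as flat).
ref_db = averaged_spectrum_db(lambda x: x)

# Probe path: a toy low-pass (5-tap moving average) standing in for a cable/probe.
taps = np.ones(5) / 5
dut_db = averaged_spectrum_db(lambda x: np.convolve(x, taps, mode="same"))

# Transfer function of the added hardware = difference of the two dB spectra.
tf_db = dut_db - ref_db
bw_3db = freqs[np.argmax(tf_db < -3)]     # first bin that falls below -3 dB
```

Because the reference and DUT spectra are subtracted on a log scale, the scope's own (here flat) response cancels out, leaving just the inserted hardware's transfer function.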
Figure 3 is an example of the transfer function of the system with the scope bandwidth set to 4 GHz. As examples of using this technique to gain insight into the properties of common measurement applications, we look at:
- The input coupling settings of the scope
- The impact of different cables and tips
- The impact of 10x passive probes under best-case and typical conditions
Bandwidth of input coupling settings
The highest bandwidth measurement is when the input coupling is set to 50 Ohms. This will use the highest bandwidth of the scope and terminate the scope-end of the cable to prevent reflections. When we want the highest bandwidth of the scope, we should always use the 50 Ohm coupling setting. When the coupling is set for 1 Meg input, either DC or AC coupled, the bandwidth drops to about 1.2 GHz. This is due to a different setting for the scope amplifier on the 1 Meg input setting. This response is shown in Figure 4. The last setting for the input coupling is grounding the input to the scope's amplifier. This connects the input of the scope's amplifier to an internal ground. It does not short the DUT, which is still connected to the scope's front BNC connector. It is important to note that there is no such thing as a short above 100 MHz. This internal short behaves like a small inductor in close proximity to the input pin. The near-field electric and magnetic field coupling between the input pin and the amplifier input is not zero. As shown in the measurement above, there is almost -3 dB of coupling from 700 MHz to 1.2 GHz between the amplifier input and the BNC input. This is when the input to the amplifier is nominally grounded. When the input termination is set for 1 Meg Ohm (which is typically used so we don't load the DUT down with a low DC resistance), the system bandwidth is reduced from 8 GHz to about 1.2 GHz with a very fast roll-off. How can we have a 6 dB gain in a passive probe system?
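The arithmetic behind the answer can be sketched in a few lines (assuming an ideal 50 Ohm Thevenin source and a purely resistive 1 MOhm scope input; the physical explanation follows):

```python
import math

v_thevenin = 1.0      # internal source amplitude (arbitrary units)
r_source = 50.0       # source impedance, per the article

# 50 Ohm input coupling: simple voltage divider at the scope input.
v_50 = v_thevenin * 50.0 / (r_source + 50.0)     # half the Thevenin voltage

# 1 Meg input: the reflection coefficient at the scope is nearly +1, so the
# scope sees incident + reflected wave, i.e. twice the launched voltage.
r_scope = 1e6
gamma = (r_scope - 50.0) / (r_scope + 50.0)      # ~ 0.9999
v_1meg = v_50 * (1 + gamma)

gain_db = 20 * math.log10(v_1meg / v_50)         # a factor of 2 ~ 6 dB
```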
It is important to note that at low frequency, from 1 MHz to about 200 MHz, the response of the measurement system, with the coax cable and the 1 Meg Ohm input resistance of the scope, shows a gain of 6 dB. This is a 2x higher signal amplitude. The root cause of this behavior is a good test of how well you understand what is actually being measured by the scope. Every voltage source has, internally, some Thevenin voltage amplitude and Thevenin source resistance. In the case of the Noisecom NC1100 device, the source resistance is 50 Ohms. When we measure the voltage in the scope with 50 Ohms input impedance, we have created a voltage divider with the 50 Ohm source resistance. This means the voltage the scope measures is NOT the internal Thevenin voltage of the source, but half of this voltage. This is the normalized signal we measure: the voltage launched into a 50 Ohm load. When we set the input impedance of the scope to 1 Meg, the voltage launched into the cable from the source hasn't changed. This signal hits the 1 Meg resistance and nearly 100% of it reflects and heads back to the source, where this reflected wave is terminated by the source series Thevenin resistance. At the scope, we measure two waves, the incident wave and the reflected wave. This results in a measurement of twice the DUT voltage with a 1 Meg termination, compared with a 50 Ohm termination. A factor of 2 is 6 dB higher. This behavior is a direct measure of how the scope interacts with the cable-probe measurement system. It cannot be measured with a VNA, but must be measured by the scope itself. Given the measurement bandwidth drop-off with a 1 Meg input resistance setting on the scope, how do we engineer a measurement with a high bandwidth, but also high impedance at DC?
High Impedance AND High Bandwidth
The way to get the best of both conditions is to use an active probe. An example of the measured transfer function of an RP4030 rail probe is shown in Figure 5.
The rated bandwidth is 4 GHz, and matches the measured transfer function very well. The input impedance of the rail probe is 50 k Ohms at low frequency, but drops off to 50 Ohms above 100 kHz, where reflections might be a concern.
Bandwidth of Cables and Tips
The most common sort of cable available in every lab is an RG58 coax cable. There is always some attenuation in these cables. Figure 6 shows the transfer function of four cable configurations:
- A 1 m long VNA-quality 50 Ohm coax cable
- A 1 m long RG174 coax cable costing $7 per cable assembly
- A 2 m long RG58 cable costing $10 per cable assembly
- A 1 m long RG174 cable costing $2 per cable assembly
These simple measurements offer a few rules of thumb. A good quality 2 m RG58 cable has a -3 dB bandwidth of about 1.5 GHz. A good quality 1 m long RG174 cable has a -3 dB bandwidth of about 5 GHz. Both of these cables have the same BNC connectors on their ends. The difference is in the cable attenuation and the length. Watch out for cheap cables if you expect to do any signal integrity measurements. The measured performance of the cheap 1 m RG174 cable shows significant ripple from reflection noise from the BNC connectors. Generally, at this price point, the connector terminations are often not very good. When the tip is not coaxial, but pulled apart using a mini-grabber, the bandwidth of the measurement system immediately drops. Figure 7 shows the measured bandwidth with a large tip loop inductance and when the wires are twisted together. While twisting the wires at the tip is a good habit, don't expect this trick to increase the bandwidth very much. At best, the improvement might be 200 MHz, and even then it is strongly dependent on the details. The way to achieve high bandwidth probing is to make the tip look as much like a 50 Ohm coax as possible. As soon as we spread the signal and return paths apart when we use a mini-grabber tip, we introduce an impedance discontinuity.
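One rough way to put numbers on the tip problem is to treat the tip loop as a series inductance driving the 50 Ohm environment, i.e. an L/R low-pass with f_3dB = R / (2*pi*L). The inductance values below are illustrative guesses, not measured values:

```python
import math

def lr_bandwidth_hz(loop_inductance_h, r_ohms=50.0):
    """-3 dB frequency of the L/R low-pass formed by a series tip loop inductance."""
    return r_ohms / (2 * math.pi * loop_inductance_h)

# Illustrative (assumed) values: a coax-like tip vs. a mini-grabber loop.
bw_coax_tip = lr_bandwidth_hz(2e-9)      # ~4 GHz for a 2 nH tip
bw_grabber = lr_bandwidth_hz(100e-9)     # ~80 MHz for a 100 nH grabber loop
```

The strong 1/L dependence is why shrinking the tip loop matters far more than any other tweak to the probe connection.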
We can think of this as adding reflections at the tip, reflecting the higher frequency components and acting like a low pass filter, or as a discrete inductance, generating an L/R low pass filter. Either way, the larger the tip loop inductance, the larger the impact on the measurement system bandwidth.
Bandwidth of the 10x Probe
The 10x probe is actually a complex probing system. Built into it is a low pass filter with a roll-off frequency of about 10 kHz, a parallel high-pass filter with a pole frequency at about 10 kHz, and a special, very lossy coax cable which absorbs and attenuates any high frequency reflections in the cable. In order to use the 10x probe, 1 Meg coupling has to be used in the scope. This automatically drops the measurement system bandwidth to about 1.2 GHz due to the scope amplifier. To see the absolute highest bandwidth of the probe, the tip is replaced with a coax adaptor. This is very different from the typical 10x probe with a large loop inductance at the tip. These two cases are compared in Figure 8. Note that the transfer functions of the 10x probe with both the coax tip and the large loop tip show about 6 dB of amplitude at low frequency. This is because the source impedance is 50 Ohms and the input impedance of the scope is 1 Meg, as described earlier. As a general rule, unless special care is taken, assume the bandwidth of a 10x probe is about 100 MHz. While the scope bandwidth is an important figure of merit describing the highest frequency component the scope can measure, changing its input coupling and adding cables and probes will decrease this measurement bandwidth. Having a simple way of measuring not just the signal bandwidth but the entire transfer function into the scope is a valuable way of characterizing your measurement system.
Knowing the transfer function of the measurement system, compared to the signal spectrum from your DUT, will give you the situational awareness to know what is real about your DUT’s signal and what might be a measurement artifact.
Haystack: Is it a Volcano?
Standing rather proudly in the shadow of the Rocky Mountains along the front range of Northern Colorado is a unique little mountain affectionately known as Haystack Mountain, earning its name from the early dairy farmers who settled on its flanks. Haystack Mountain, near the tiny railroad settlement of Niwot, and between the infamous city of Boulder and the farming community of Longmont, in Boulder County, is not known for its majestic heights like its more famous 14 thousand-foot neighbors. Nor is it famous because it catches the eye with its glistening ice fields, or forested slopes teeming with Colorado's abundant wildlife. No, Haystack Mountain enjoys prominence only because it has stood for over a million years on the banks of a small creek, and at the confluence of a historic meeting of two vastly different cultures: the Native American Arapaho tribe who called it home for a millennium and Colorado's early gold and silver miners who followed Left Hand Creek into the mountains near Boulder to discover their fortune. Rising abruptly out of the prairie, in the center of the Left Hand Creek watershed, Haystack Mountain can only boast a three-hundred-foot summit! Covered with prairie grass and arid landscape vegetation, this tiny pinnacle is not even shaded by a single tree! Haystack Mountain can claim no ice field or glacier, either. No sparkling water falls from its mounded summit and no fantastic rock formations greet the wayward explorer. Haystack Mountain cannot even lay claim to abundant wildlife. It is frequented only by a lone prairie dog, maybe an elusive rattlesnake, some field mice and a hungry coyote. At times, people have spotted a stray deer, bear or even a lonely elk, none of which would call Haystack Mt. home. Birds of prey, however, find it a perfect visual vantage point and often can be seen circling its summit in search of an evening meal.
And finally, if one did happen upon its rather steep slopes they would be in no danger of falling into a cauldron of hot volcanic lava, or ever being brushed with the steam of a long dormant vent, because Haystack Mountain is many things, but it is not a volcano, and it never has been, in spite of the rumors often spread by earlier settlers along Left Hand Creek. "Most of the early settlers assumed it was a volcano and didn't want to settle anywhere near it," said Suzanne Webel, a Boulder County geologist. So how did this little mound of rock and shale and sand appear on the prairie, a focal point for the area, and the centerpiece of the Left Hand Creek watershed? As with every mountain, Haystack's story began long, long ago… What is now Haystack Mountain was once part of Table Mountain, said Webel. Table Mountain is the plateau just northwest of Haystack Mountain, identifiable by the two large dish antennas on its northern flank. About 70 million years ago, during the late Cretaceous period when dinosaurs still roamed in what is now Colorado, an extensive shallow sea left muddy deposits that became poorly consolidated into the thick layer of Pierre Shale that underlies much of the area. Before streams went to work eroding these deposits, they used to be thicker and more continuous than they are today. This soft rock forms the bulk of Haystack Mountain. About 1.8 million years ago, long after the uplift of the present-day Rocky Mountains, during the Ice Age or Pleistocene period, there was a major outpouring of coarse sediments eastward from the mountains. These sediments contain a hodge-podge of rocks of all types, from whatever source happened to be uphill: granite, gneiss, quartzite, limestone, and sandstone. Dinosaurs had given way to woolly mammoths and saber-toothed tigers. These variably consolidated deposits of gravels and conglomerates are called the Rocky Flats Alluvium, which rests on the much older Pierre Shale of the earlier period.
The Rocky Flats Alluvium became a hardened layer on top of softer rock. Streams then began cutting their way through the hard layer and into the softer deposits underneath, leaving behind tables, or mesas, such as those you see along the Front Range. Left Hand Creek, and its tributary James Creek, drain the local mountains in roughly an easterly direction (oddly oblique to major, mapped faults) at a latitude corresponding to Niwot or Nebo Road. When it reaches the valley west of the Front Range (Olde Stage Road corridor), the creek diverts north and exits the mountains just west of Plateau Road (after merging with Geer Canyon Creek). It is believed by some geologists that at one time the major drainage did not divert, but rather emptied into the plains just north of the old Ball Aerospace facility. At some point during the past 1.8 million years, these rivers and streams began to cut away the southeast portion of Table Mountain, opening a wider and wider gap between the bulk of the plateau and the much smaller Haystack Mountain. There's even a saddle between the two mountains, Webel said. Haystack Mountain has held its shape thanks to a remnant of the hard layer at the top, called a caprock. Believe it or not, the very top of Haystack Mountain is all that is left of a vast sheet of Rocky Flats Alluvium (sorry, folks, this little pinnacle is not a volcano!). This conglomerate layer may be the source of the cobbles and boulders scattered over the lower slopes of Haystack Mountain. The similarity of gravels on Table Mountain and Haystack Mountain, and the conformity of their summits, give support to the idea of a widespread conglomerate layer which has protected the summits from erosion for thousands upon thousands of years.
Finally, what is truly amazing is that a once roaring torrent of water, crashing out of the mountains to the west, diverted from its ancient course and has become the shorter, deeper valley of the now quiet stream known as Left Hand Creek, which now lies to the south of Haystack Mountain, after having cut its path between two mountains, bringing to life what we know today as our beloved Haystack Mountain! Haystack Mountain may not be a volcano, but it is a prominent geological feature with a remarkable ancient history and, more recently, a profoundly important modern history. As mentioned, Haystack Mountain lies at the crossroads of two vastly different cultures in human history. From time immemorial, this prominent feature served as a perfect observatory for the ancient peoples crossing the plains in the shadow of the Rocky Mountains. In more recent times, legend has it that the now famous Arapaho chief, Niwot, spent the winters in its shadow, and today there are remnants of these proud people found at the base of Haystack in the form of ancient teepee rings and fire pits. In her book Chief Left Hand (Niwot in the Arapaho language), Margaret Coel makes it clear that the Arapaho Indians were the first modern culture that made their home along the banks of Left Hand Creek, using Haystack Mountain as the logical high point on the plains to look out for any marauding enemies or wayward visitors. One wonders if they stood upon its summit and saw the approach of the first white settlers who arrived in the area having followed the South Platte River from Nebraska, to the Saint Vrain, and finally the watershed of Left Hand Creek to arrive at this unusual promontory on the plains. These settlers, the Affolter family, built one of the early buildings in Boulder County. As stated in the notes of a Mr.
Tom Kiteley at Old Mill Park in Longmont, where this iconic cabin now resides, "The Affolter Cabin was built in 1860 near Haystack Mountain on Left-Hand Creek, west of Longmont". Haystack is a weird, pointy hill west of town; I had a friend ride a bicycle down the hill once… idiot! He was all right until he hit the barbed wire at the bottom. There should have been good land in that area, though the Swedes settled out there and Garrison Keillor suggests they had great affinity for the rocky soil of their home lands. John Affolter was one of the earliest settlers in the St. Vrain Valley, and the cabin's extensive lifetime can be seen in the age of the notches and the layout of the construction of the cabin. The original cabin craftsmanship can still be detected. Statehood waited nearly another decade and was probably won using less-than-ethical strategies that ignored the sinking population after the first gold rush flash. Horace Greeley had yet to be impressed by a specially salted mine. The cabin was donated to the Longmont Historical Society by the Dodd family, who eventually farmed most of the land around Haystack, and relocated the cabin to the park. In her book Margaret Coel also mentions a first meeting in this same cabin, between Chief Left Hand, or Chief Niwot, and the Affolters. Others, too, give witness to this infamous interaction. "While Affolter was living in the house in the mid-1860s, the Arapaho Chief Niwot visited and camped around the cabin. The property was a favorite wintering spot for the Arapaho Indians who found a good supply of game and drinking water in the vicinity. Stone rings about 20 feet in diameter are still visible some 200 yards southwest of the cabin. These stones were used to hold down the edges of the tepees. Between 1867 and 1876 the cabin was used periodically as the headquarters of the F.V. Hayden Survey which made geological maps and reports of the Rocky Mountains (Darby, 1970).
(information gathered from “the Dodd Property, a historical study by the University of Colorado”) Thus, the two cultures came together for a short period of time, before the great Indian chief met his untimely death at the banks of the notorious Sand Creek, in the Sand Creek massacre. How different history may have looked had he only stayed home in his camp at the foot of Haystack Mountain! However, what remains of the white settlers is clearly evident today in the ever-changing watershed of Left Hand Creek. Now Haystack Mountain golf course graces its base, and thousands of people have found residence in its shadow. How long will Haystack Mountain survive? Like Table Mountain, Haystack Mountain is slowly eroding, because of rain, snow and wind. After first being hesitant to narrow it down to anything more specific than “it could be a hundred thousand years, it could be a million,” Webel provides a more specific number. Regionally the long-term erosion rate seems to have averaged between 0.1 mm/year (and) 0.9 mm/year … Anyway, if we try an average of 0.5 mm/year, and if we assume that the tip of Haystack Mountain is 300 feet above the valley, and convert 300 feet to millimeters, it’s 91,440 mm high. That means Haystack Mountain would disappear completely in 182,880 years. Exactly. That is of course unless curious hikers slip past the watchful eyes of the current owners, the Ebel family, and hike its summit. Then the erosion they cause could rapidly bring about the untimely demise of our proud little mountain along the watershed of Left Hand Creek. As Johnny St. Vrain shared in his article on the origins of Haystack mountain, upon which much of this information relied, “Ha! … Nothing in geology is ever that clear-cut, of course, but we can still engage in fun little speculations.” Maybe it would have been better if Haystack Mountain was a volcano…its future would be much more certain! 
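Webel's estimate is easy to verify; the arithmetic reduces to one unit conversion and one division:

```python
height_ft = 300                  # summit height above the valley, per the article
mm_per_ft = 304.8                # exact millimeters per foot
height_mm = height_ft * mm_per_ft            # 91,440 mm, matching the quoted figure

erosion_mm_per_year = 0.5        # midpoint of the 0.1-0.9 mm/year regional range
years = height_mm / erosion_mm_per_year      # 182,880 years, as stated
```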
If this tiny mountain was still growing, curiosity seekers might stay off its summit and its slow demise would stop. In fact, had it been a volcano, Haystack Mountain would have had a glorious future as it continued to grow, and taken its place among the giants of Colorado! But it is not a volcano. It is a priceless monument, a witness to history and a testimony to the power of water, wind and rain along the watershed of Left Hand Creek. Submitted by Gregory K. Ames Landowner and LWOG Board Member
IGNORE NULLS: This is not the mission statement of a right-wing political party :), but an optional clause in some of Oracle’s analytic functions. Recently I posted a query on OTN that used Last_Value with this clause to simplify another poster’s solution to a grouping problem. It occurred to me then that the clause is much more powerful than is generally appreciated, and I’ll try to demonstrate that below. Oracle describes the analytic function First_Value in its SQL manual thus: ‘FIRST_VALUE is an analytic function. It returns the first value in an ordered set of values. If the first value in the set is null, then the function returns NULL unless you specify IGNORE NULLS. This setting is useful for data densification.’ Although accurate, the reference to data densification possibly undersells it: When used in conjunction with CASE expressions, IGNORE NULLS allows you effectively to include a WHERE condition on the rows processed by the function, in addition to the partitioning and windowing conditions. This is useful because the latter two conditions have to be defined relative to the current row, whereas the new condition is absolute.

Let’s take an example based on Oracle’s demo HR schema. Suppose that we want a list of employees, and for each employee we want to assign another employee, perhaps as a mentor. We’ll take the following rules:

- The mentor has to be in the same department
- The mentor has to earn more than the employee, but not too much more, say up to 1000 more
- The mentor has to have worked at the company since at least a certain date, say 01-JAN-1998

Subject to these rules, we’ll take the highest-earning (or maybe the lowest, let’s try both) employee as mentor, and won’t worry about tie-breaks for this post. The objective of maximising (or minimising) the mentor’s salary subject to the rules implies the use of Last_Value (or First_Value) with an ordering on salary (we can’t use Max because we don’t want to return just the salary).
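Before turning to the SQL, the semantics are easy to sketch procedurally. The following Python is purely illustrative (toy rows and a hypothetical "Joe Recent", not the real HR schema): for each employee it builds the salary-ordered window of same-department colleagues earning between 1 and 1000 more, applies the absolute hire-date condition as the CASE ... IGNORE NULLS filter, and takes the last survivor:

```python
from datetime import date

# Toy rows standing in for the HR employees table: (name, dept_id, hire_date, salary).
emps = [
    ("TJ Olson",      50, date(1999, 4, 10), 2100),
    ("Curtis Davies", 50, date(1997, 1, 29), 3100),
    ("Julia Nayer",   50, date(1997, 7, 16), 3200),
    ("Joe Recent",    50, date(1999, 5, 1),  3000),  # hypothetical: hired too late to qualify
]

CUTOFF = date(1998, 1, 1)  # the absolute condition: hire_date < 01-JAN-1998

def mentor_for(emp, rows):
    """Mimics Last_Value(CASE WHEN hire_date < cutoff THEN name END IGNORE NULLS)
    OVER (PARTITION BY dept ORDER BY salary
          RANGE BETWEEN 1 FOLLOWING AND 1000 FOLLOWING)."""
    name, dept, _, sal = emp
    # Partition + window: same department, salary in [sal + 1, sal + 1000].
    window = sorted((r for r in rows
                     if r[1] == dept and sal + 1 <= r[3] <= sal + 1000),
                    key=lambda r: r[3])
    # CASE ... END IGNORE NULLS: rows failing the absolute condition become
    # nulls and are skipped; Last_Value takes the last remaining row in order.
    candidates = [r for r in window if r[2] < CUTOFF]
    return candidates[-1][0] if candidates else None

print(mentor_for(emps[0], emps))  # -> Curtis Davies (highest-paid qualifying colleague)
```

First_Value would simply be `candidates[0]`; the point in either case is that the absolute condition rides inside the CASE expression, while the relative conditions live in the partitioning and windowing clauses.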
The first two conditions can be implemented as partitioning and windowing clauses respectively, and operate relative to the current employee. The third condition is absolute though and can’t be implemented within the analytic clause itself, which is where IGNORE NULLS comes in. If we make the operand a CASE expression that returns the required details only for employees that meet the required condition, and null otherwise, this will implement the required condition. A possible query would be:

SELECT emp.first_name || ' ' || emp.last_name employee,
       dep.department_name dept,
       To_Char (emp.hire_date, 'DD-MON-YYYY') hire_date,
       emp.salary,
       Last_Value (CASE WHEN emp.hire_date < '01-JAN-1998' THEN
                     emp.first_name || ' ' || emp.last_name || ', ' ||
                     To_Char (emp.hire_date, 'DD-MON-YYYY') || ', ' || emp.salary
                   END IGNORE NULLS)
         OVER (PARTITION BY emp.department_id ORDER BY emp.salary
               RANGE BETWEEN 1 FOLLOWING AND 1000 FOLLOWING) mentor
  FROM employees emp
  JOIN departments dep
    ON dep.department_id = emp.department_id
 ORDER BY 2, 4, 1;

Of course, there may be employees who don't have a mentor on our rules, but here are the first few records for the Shipping department (note that I deleted the department column to reduce scrolling):

EMPLOYEE           HIRE_DATE   SALARY MENTOR
------------------ ----------- ------ -----------------------------------
TJ Olson           10-APR-1999   2100 Curtis Davies, 29-JAN-1997, 3100
Hazel Philtanker   06-FEB-2000   2200 Julia Nayer, 16-JUL-1997, 3200
Steven Markle      08-MAR-2000   2200 Julia Nayer, 16-JUL-1997, 3200
James Landry       14-JAN-1999   2400 Laura Bissot, 20-AUG-1997, 3300
Ki Gee             12-DEC-1999   2400 Laura Bissot, 20-AUG-1997, 3300
James Marlow       16-FEB-1997   2500 Trenna Rajs, 17-OCT-1995, 3500
Joshua Patel       06-APR-1998   2500 Trenna Rajs, 17-OCT-1995, 3500
Martha Sullivan    21-JUN-1999   2500 Trenna Rajs, 17-OCT-1995, 3500
Peter Vargas       09-JUL-1998   2500 Trenna Rajs, 17-OCT-1995, 3500
Randall Perkins    19-DEC-1999   2500 Trenna Rajs, 17-OCT-1995, 3500
Donald OConnell    21-JUN-1999   2600 Renske Ladwig, 14-JUL-1995, 3600

Using First_Value instead, so that the lowest-earning qualifying employee becomes the mentor, gives:

TJ Olson           10-APR-1999   2100 James Marlow, 16-FEB-1997, 2500
Hazel Philtanker   06-FEB-2000   2200 James Marlow, 16-FEB-1997, 2500
Steven Markle      08-MAR-2000   2200 James Marlow, 16-FEB-1997, 2500
James Landry       14-JAN-1999   2400 James Marlow, 16-FEB-1997, 2500
Ki Gee             12-DEC-1999   2400 James Marlow, 16-FEB-1997, 2500
James Marlow       16-FEB-1997   2500 Mozhe Atkinson, 30-OCT-1997, 2800
Joshua Patel       06-APR-1998   2500 Mozhe Atkinson, 30-OCT-1997, 2800
Martha Sullivan    21-JUN-1999   2500 Mozhe Atkinson, 30-OCT-1997, 2800
Peter Vargas       09-JUL-1998   2500 Mozhe Atkinson, 30-OCT-1997, 2800
Randall Perkins    19-DEC-1999   2500 Mozhe Atkinson, 30-OCT-1997, 2800
Donald OConnell    21-JUN-1999   2600 Mozhe Atkinson, 30-OCT-1997, 2800

In general, to find the last value of an expression in a record set ordered by a possibly different expression, with an absolute condition on the records to be considered, use the following form of the function:

Last_Value (CASE WHEN absolute_condition THEN return_expression END IGNORE NULLS)
  OVER (partitioning_clause ORDER BY order_expression windowing_clause)

Note that there is an interesting special case that arises when forming break groups defined by changes in sequential records in an ordered set. The break points can often be obtained by the Lag and Lead analytic functions, and the groups that other records belong to can then be found through expressions of the above type. However, analytic functions can't be nested, so the first step needs to be performed in a separate subquery (inline view or subfactor) - see the first embedded scribd document below for further details on the SQL for this common requirement. I stated above that we wouldn't worry about tie-breaks in this post, but it's worth mentioning that Oracle allows multiple columns in the ORDER BY only if the windowing clause includes only UNBOUNDED and CURRENT ROW terms.
However, you can often pack multiple columns into a single expression by formatting numbers with fixed size and zero-padding etc.

Other Analytic Functions and Null Values

IGNORE NULLS can also be used with Lead and Lag, and with the new 11.2 function Nth_Value, which extends First_Value and Last_Value to specific ranked values. It is interesting to note that some of the other functions, such as Sum, ignore nulls implicitly:

SELECT 1 + NULL added, Sum (x) summed
  FROM (
SELECT 1 X FROM DUAL
 UNION
SELECT NULL FROM DUAL);

     ADDED     SUMMED
---------- ----------
                    1

In Oracle null signifies an unknown value and therefore adding null to any number, for example, results in null. Technically, you would therefore expect a sum that includes a null value to result in null, but in fact it does not, as the SQL above shows. No doubt practicality won out over theory here. Again, with other functions such as Sum we can apply a condition by using a CASE expression that returns null or zero if the condition is not met, although not with certain functions such as Avg (but where we could sum and count separately and then calculate the average ourselves).

Other Examples with IGNORE NULLS

Here is the OTN thread mentioned earlier: Custom ranking. The table temp3 contains transactions, some of which are defined to be interest-only transactions based on a condition on two fields. The requirement is to list all non-interest transactions but to summarise interest-only transactions beneath the previous non-interest transaction. My solution, simplifying an earlier proposed solution, involved using Last_Value with IGNORE NULLS in a subfactor to associate the prior non-interest transaction with all transactions, and then doing a GROUP BY in the main query.
BREAK ON trx_grp

WITH grp AS (
SELECT Last_Value (CASE WHEN tran_id != 'SHD' OR flg = 'N' THEN tran_code END IGNORE NULLS)
         OVER (ORDER BY tran_code) trx_grp,
       tran_id, flg, tran_date, tran_code, amt
  FROM temp3
)
SELECT tran_id, flg, Min (tran_date) "From", Max (tran_date) "To", trx_grp, Sum (amt)
  FROM grp
 GROUP BY tran_id, flg, trx_grp
 ORDER BY trx_grp, flg
/

TRA FLG From      To           TRX_GRP   SUM(AMT)
--- --- --------- --------- ---------- ----------
ADV N   31-OCT-11 31-OCT-11   59586455         50
SHD Y   01-NOV-11 02-NOV-11                    10
PAY N   03-NOV-11 03-NOV-11   59587854         50
PAY N   03-NOV-11 03-NOV-11   59587855         50
SHD Y   03-NOV-11 05-NOV-11                     9
PAY N   06-NOV-11 06-NOV-11   59588286         50
SHD N   06-NOV-11 06-NOV-11   59590668         50
PAY N   07-NOV-11 07-NOV-11   59590669         50

8 rows selected.

I have also used First_Value, Last_Value to help form range-based groups, here (if you can't see the document, 'Forming Range-Based Break Groups with Advanced SQL', it is also in the previous post, up the page):

Using KEEP with the First and Last Functions

Oracle's SQL manual introduces the FIRST and LAST functions thus:

'FIRST and LAST are very similar functions. Both are aggregate and analytic functions that operate on a set of values from a set of rows that rank as the FIRST or LAST with respect to a given sorting specification. If only one row ranks as FIRST or LAST, then the aggregate operates on the set with only one element.'

and describes their value thus:

'When you need a value from the first or last row of a sorted group, but the needed value is not the sort key, the FIRST and LAST functions eliminate the need for self-joins or views and enable better performance.'

This seems at first pretty similar to First_Value and Last_Value, so we might ask what they could do in relation to our requirements above.
The problem for us is that we can't include a windowing clause as it's not allowed in this case, so we'd have to accept the maximum salary within the allowed date range:

SELECT emp.first_name || ' ' || emp.last_name employee,
       dep.department_name dept,
       To_Char (emp.hire_date, 'DD-MON-YYYY') hire_date,
       emp.salary,
       Max (CASE WHEN emp.hire_date < '01-JAN-1998' THEN
              emp.first_name || ' ' || emp.last_name || ', ' ||
              To_Char (emp.hire_date, 'DD-MON-YYYY') || ', ' || emp.salary
            END) KEEP (DENSE_RANK LAST ORDER BY emp.salary)
         OVER (PARTITION BY emp.department_id) mentor
  FROM employees emp
  JOIN departments dep
    ON dep.department_id = emp.department_id
 ORDER BY 2, 4, 1;

[dept deleted from output]

EMPLOYEE           HIRE_DATE   SALARY MENTOR
------------------ ----------- ------ -----------------------------------
TJ Olson           10-APR-1999   2100 Adam Fripp, 10-APR-1997, 8200
Hazel Philtanker   06-FEB-2000   2200 Adam Fripp, 10-APR-1997, 8200
Steven Markle      08-MAR-2000   2200 Adam Fripp, 10-APR-1997, 8200
James Landry       14-JAN-1999   2400 Adam Fripp, 10-APR-1997, 8200
Ki Gee             12-DEC-1999   2400 Adam Fripp, 10-APR-1997, 8200
James Marlow       16-FEB-1997   2500 Adam Fripp, 10-APR-1997, 8200
Joshua Patel       06-APR-1998   2500 Adam Fripp, 10-APR-1997, 8200
Martha Sullivan    21-JUN-1999   2500 Adam Fripp, 10-APR-1997, 8200
Peter Vargas       09-JUL-1998   2500 Adam Fripp, 10-APR-1997, 8200
Randall Perkins    19-DEC-1999   2500 Adam Fripp, 10-APR-1997, 8200
Donald OConnell    21-JUN-1999   2600 Adam Fripp, 10-APR-1997, 8200

However, I thought these functions worth mentioning in this post because they can be very useful but seem to be not very well known. People often simulate the functions, in aggregate form anyway, by means of another analytic function, Row_Number, within an inline view but, as is generally the case, the native constructs are simpler and more efficient.
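To make the KEEP semantics concrete, here is a hypothetical Python rendering (toy data, illustrative names only, not the HR schema): per partition, the aggregate is applied only to the rows that tie for last place in the sort, which is exactly the set that KEEP (DENSE_RANK LAST ORDER BY ...) restricts it to.

```python
from collections import defaultdict

# Toy rows: (dept_id, name, salary). Names and numbers are illustrative only.
rows = [
    (50, "Adam Fripp",     8200),
    (50, "TJ Olson",       2100),
    (50, "Payam Kaufling", 8200),   # ties with Adam Fripp for the top salary
    (80, "John Russell",  14000),
    (80, "Karen Partners", 13500),
]

def keep_dense_rank_last(rows, key, agg):
    """Per partition (first tuple field), apply `agg` only to the rows whose
    sort key equals the partition maximum - the set that
    KEEP (DENSE_RANK LAST ORDER BY key) operates on."""
    parts = defaultdict(list)
    for r in rows:
        parts[r[0]].append(r)
    out = {}
    for dept, rs in parts.items():
        top = max(key(r) for r in rs)
        out[dept] = agg([r for r in rs if key(r) == top])
    return out

# Analogous to Max(name) KEEP (DENSE_RANK LAST ORDER BY salary), per department:
result = keep_dense_rank_last(rows, key=lambda r: r[2],
                              agg=lambda rs: max(r[1] for r in rs))
print(result)  # -> {50: 'Payam Kaufling', 80: 'John Russell'}
```

The tie in department 50 shows why the aggregate is still needed: two rows share the top salary, and Max picks one value from that two-row set.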
I benchmarked various approaches for the aggregation case here (if you can't see the document, 'SQL Pivot and Prune Queries - Keeping an Eye on Performance', it is also in the previous post, up the page).
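As a footnote, the carry-forward idiom behind the OTN solution, Last_Value(CASE ... END IGNORE NULLS) OVER (ORDER BY ...) with an unbounded-preceding default window, is just a running "last non-null seen so far". A minimal Python sketch (illustrative data, not the temp3 table):

```python
# Simulate Last_Value(expr IGNORE NULLS) OVER (ORDER BY ...) with the default
# window (UNBOUNDED PRECEDING to CURRENT ROW): a forward fill, where each row
# is tagged with the most recent non-null marker seen so far.
def forward_fill(values):
    last = None
    out = []
    for v in values:
        if v is not None:
            last = v          # a non-null marker starts a new group
        out.append(last)      # null rows inherit the previous marker
    return out

# Markers only on non-interest transactions; None for interest-only rows.
tran_codes = [59586455, None, None, 59587854, 59587855, None, 59588286]
print(forward_fill(tran_codes))
# -> [59586455, 59586455, 59586455, 59587854, 59587855, 59587855, 59588286]
```

Grouping by the filled value then reproduces the "summarise beneath the previous non-interest transaction" behaviour of the GROUP BY in the main query.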
<urn:uuid:28358c17-4a9f-463e-9a02-badd56290288>
CC-MAIN-2021-43
http://aprogrammerwrites.eu/?tag=last_value
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00671.warc.gz
en
0.810372
3,101
2.5625
3
Henry Ford was one of the most brilliant entrepreneurs in creating the automobile assembly line, and it was his controversial characteristics and unorthodox approach towards administrating the Ford Motor Company which resulted in the creation of one of the most successful corporations in the world. At the turn of the century everything was booming! The growth of the economy and stock market increased job opportunities as well as morale. As a result of this industrial revolution, out of the woodwork came a humble yet driven man, Henry Ford. Between the five-dollar/day plan, his policies on administrating the company, and his relations with his customers, Ford was often presented as a suspicious character. This controversial behavior epitomized the success of the company; it did not lead to his own downfall, as many suspect. The anti-Semitic accusations, and the belief that Ford was taking advantage of his customers, were by far overshadowed by his brilliance and strong hand in running his company. Of course, there were not always supporters of Henry Ford. In fact, there were many critics, critics who believed that Henry Ford was so controversial that it prevented Ford from becoming greater than it is today. By the mid-twenties Ford was already the world's most successful automobile company, but its great reputation would soon decline. Ford's $5/day plan for all employees signified the overwhelming success of the company. Many believed this success would be short-lived with the new policies dealing with the workers which soon followed. With the need to increase production and lower costs, in the mid-1930s Ford cut all workers' wages in half. Workers were expected to work faster and harder. Department heads were ordered to ban all talking and whistling while work was in progress. All of this was a ploy by Ford to make sure he knew every move of his workers; he was very possessive.
Secondly, Ford began to fire older workers and hire younger workers. His ideology was that the younger workers could work more productively and more efficiently, which in turn would send more money flowing into his pockets. What resulted was quite humorous, in fact: black hair dye became a hot seller in the Detroit area, as older workers tried to disguise their age by dyeing their hair black. Ford's manipulation of his workers was immoral and unjust. There was no industrial democracy; workers were forced to do what they were told or would be out of a job. Henry Ford's controversial behavior reflected badly on himself and on the Ford Motor Company. The anti-Semitic views expressed by Henry Ford could never be denied. It was common knowledge, in fact, that Henry Ford was prejudiced. He wrote an article in the Dearborn Independent expressing his ideas that Jews were the cause of many people's problems. Henry Ford was sued by a man by the name of Aaron Sapiro in the early 1930s. Sapiro had evidence that Ford had threatened him with anti-Semitic sentiments. Ford was recorded as saying, "Sapiro is a shrewd little Jew. The bible says Jews will return to Palestine, but they want to get all the money out of America first. Sapiro should be kicked out because he is trash." The result of the trial was humiliation for the Ford company and Henry Ford himself. After a hung jury in the first trial, the case was dropped when Ford wrote a lengthy retraction and apologized for his statements. Ford was declining in profits and production among the world's best, all as a result of Henry Ford's ego. Thus, by 1931 Ford had fallen in the ranks, controlling only 28% of the market, second to GM with 31%. Henry Ford was the godfather of the automobile industry in the early 1900s. The development of his River Rouge plant was considered a "Cathedral". Hundreds waited month after month in front of the employment building hoping to be hired. To foreign immigrants it meant hope and a successful future.
The River Rouge plant employed over 50,000 employees. Poles, Lithuanians, Germans: almost every western European country could be represented at the Ford plant. Like a father, Henry Ford began educational programs, teaching his illiterate employees how to read English. Company picnics and dinners were all part of Ford's policies, which were so unusual, yet so brilliant, at that time. One of the most controversial actions of Ford was his hiring of criminals. In fact it was said that "thousands of former criminals were taken on the Ford payroll over the course of the years, all at Mr. Ford's request." Not only was this a highly questionable decision, but it startled everyone. It was odd, especially when there was such a demand to work at Ford's. Why would Henry Ford want to take the risk of hiring potentially dangerous felons? Nobody would be able to answer this question better than Ford's right-hand man, Harry Bennett. Bennett has said that Henry Ford was very sympathetic towards criminals, even that he would try and, in a sense, rehabilitate them. Not only did the new workers please Henry Ford, but they also helped the company itself. Ford's controversial new policy of hiring criminals not only surprised the River Rouge workers, but it swept across the nation. Many news articles were printed concerning Ford's policies; in effect Ford was receiving free advertising. Whether it was his intent or not, Ford's ideas, sometimes eccentric, helped market the company for the good. In 1914 Henry Ford hired John R. Lee to update the company's labor policies. The $5/day was to be split into half wages and half profits. Ford employees would only receive profits when they met specific standards of efficiency and were cleared by the sociology department. On January 5, 1914, Henry Ford's announcement of the incredible $5/day plan swept the newspapers across the nation.
The Detroit Journal announced, "To the surprise of the labor leaders and the consternation of manufacturers, Henry Ford announced on Jan 5, 1914 that a minimum wage of $5/day would be instituted immediately in the Ford plants, along with a profit sharing plan for all male employees." Not only did Henry Ford's new deal shock the nation, it sent a tremendous number of workers to Detroit. For the next ten years people would do anything to become a worker at one of Henry Ford's plants. It was unheard of to be offered $5/day by any automobile company; in fact the average salary was a mere $2.50/day at GM and Chrysler. But Henry Ford's $5/day plan was truly an illusion: it allowed for greater control of his workers. It was said that "the $5/day plan was an important early attempt at implementing a corporate welfare program." Ford wanted to see his company prosper, and his employees were a part of this company. The development of the sociology department would allow Henry Ford to exploit his employees' private lives. Employees were advised by investigators on how to live in order to receive their share of the profits. The result of this was a tight-knit community with no corruption. This department also monitored the daily happenings in the plant. In fact, the department had over 1,000 informers who would notify the department if any stealing or illegal plans were taking place. Social workers conducted extensive interviews on subjects ranging from household finances to sexual patterns. It was stated at that time that "the intrusion into workers' lives, in the minds of Ford officials, was a small price to pay for increased wages, efficiency, production, and in the end profits for the Ford Motor Company." Many felt that this socialist system was infringing upon the democratic rights of the workers, specifically the right to privacy. Observers claimed that workers were forced to act like robots in order to keep their jobs, but this was not the case.
Henry Ford created the stability and order that any corporation needed to succeed in the early 20th century. Some may say that Ford was a sort of father to the workers he employed. After all, a father is always harshest to the ones he cares for most, and that was what Henry Ford was. The financial success was extraordinary. By 1914 Ford had over 600 cars daily rolling off the assembly line. Between 1914 and 1921 earnings soared from 25 million to 78 million. All of Ford's efforts and expectations came to a pinnacle when, at the close of 1923, there were 6,221 passenger cars in the city of Detroit, one for every 6.1 persons. Of these 6,221 cars, 41% were Fords. Henry Ford was not a greedy man; his sometimes unorthodox behavior and policies epitomized the success of the company. Throughout the depression he offered a sense of hope for his employees. By offering jobs to outcasts he became very controversial, but he had reasons. Ford wanted his workers to be moral citizens, people that could offer the Ford Motor Company loyalty, leadership, and trust. A result of this was the financial success of the company. Henry Ford knew what he had to do in order to accomplish his goals. Ford knew he might not always be accepted in the community; he also knew that this was the risk he had to take. It was all clear when he said, "We're going to expand this company, and you will see it grow by leaps and bounds." How amazing that his prophecy has come true!

American Decades 1910-1919. New York: Gale Research Co., 1996. A contemporary survey on the background of Henry Ford and the Ford Motor Company.

Collier, Peter. An American Epic. New York: Summit Books Co., 1987. A chronological study of the political and financial success of the company.

Lacey, Robert. Ford, The Men And The Machine. New York: Ballantine Books Co., 1986. A more personal study of the Ford family and the controversy surrounding the success of the Ford Motor Company.

Marcus, Paul. Ford: We Never Called Him Henry.
New York: Tom Doherty Associates Co., 1951, 1987. A primary piece of literature related by Harry Bennett, offering personal insights into the life of Henry Ford, including conspiracy and controversy.

The Annals of America. New York: Encyclopedia Britannica Co., 1976. A primary source referring to the financial success of the Ford Motor Company as well as the financial policies administered.

The Great Depression (no other info available). An interesting presentation offered by past employees of the Ford Motor Company re-telling the triumphs and demise of the Ford Motor Company.
<urn:uuid:9fd2a6f1-5156-4c14-9dcd-2ce1075a7426>
CC-MAIN-2021-43
https://redcowonline.com/biography-of-henry-ford-767-essay/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00551.warc.gz
en
0.974433
2,293
3.6875
4
With the longer daylight hours and warmer temperatures, summertime brings the outdoors to life. Now is the time to be making sure the pond is prepared for all that summer brings, and to ensure health and vitality in your fish, plants and wildlife. There is plenty of preparation needed for ponds to thrive during the summer months. A good maintenance routine will help prevent water quality issues and health-related problems in your fish.

Water quality issues after winter

After a long winter with not much activity, the pond may begin to spring to life without much interference. One of the most important things to check early and regularly in the season is the water chemistry of the pond. Sludge at the bottom of the pond can reduce pH to dangerous levels and provide a breeding ground for disease-causing organisms to hide. If physical removal isn't an option, regular dosing of Pond - Sludge Remover will digest sludge using a blend of enzymes and bacteria. Heavy rainfall throughout the winter and spring can reduce hardness levels in the pond. This is due to a lack of minerals present in rainwater (this is also why it is not recommended to use a water-butt in the pond). Without minerals (hardness), dangerous fluctuations and crashes in the pH can occur. A sudden pH crash (acidification) of the pond will generally kill most livestock. To restore adequate hardness and a safe pH after heavy rainfall, use Koi Care - KH Buffer Up. In some areas, the local tap water contains significant mineral content, and so a partial water change may be all that is required. Use Pond - Tap Water Chlorine Remover for any new tap water added to the pond to remove harmful chlorine, chloramine and heavy metals found in tap water. Ammonia and nitrite levels are also often found to be higher at this time of year. This usually occurs once the fish begin feeding again, before the bacterial colonies in the filter have re-established after winter.
Adding cultured bacteria into the filter early on will help prevent any sudden ammonia or nitrite spikes. NT Labs offer two liquid filter bacteria formulations: Pond - Live Filter Bacteria for ornamental fish ponds or Koi Care - Filter Bugs for koi ponds. Both can be added regularly throughout the season to help maintain healthy bacteria numbers in pond filters, or to re-establish the filter after adding a strong medication. These products can also be safely double dosed at the start of the season. If ammonia is already present in the water, chemical filtration will reduce it more quickly than biological filtration. Pond - Ammonia Remover and Koi Care - Zeolite are natural zeolite rock, which will absorb ammonia from the water. Removing ammonia will also reduce nitrite and nitrate. When using zeolite, it is important to test for ammonia, as the rock has a finite absorption capacity. Koi Care - Zeolite also includes recharging instructions on how to regain absorption capacity. It is still recommended, however, to replace 50% each time it is recharged, as it never regains total capacity. Many pond filters rely on layers of foam to act as both mechanical and biological filtration. Foam needs manual cleaning to remove trapped dirt, but without killing off the bacteria inside. To clean filter foams, use a bucket of water from the pond itself or untreated rainwater. Do not use tap water near filter foams, as any chlorine and chloramine present will kill off the biological filtration. Many pond keepers believe having thoroughly clean filter foams is ideal, unaware that foam serves a dual purpose: it also grows the bacteria that perform the nitrogen cycle. Over time foam loses its efficiency and will need replacing. Most filters contain multiple layers of foam or different media for growing bacteria. It is recommended to change the media in stages to prevent complete removal of biological filtration.
Green water issues

A common maintenance task that is too often left until it is too late is the changing of the UV clarifier bulbs. These ultraviolet bulbs need changing every spring before the sun gets too much of a chance to shine. Once summer arrives, the sun will cause free-floating, single-celled algae to proliferate until the clarity of the pond is lost to a pea-green soup effect. Installing a fresh UV lamp will combat green water over time. To quickly resolve a green or cloudy pond, use Pond - Clears Green & Cloudy Water (Magiclear). This flocculating treatment will clump small algae or dirt particles, causing them to sink to the bottom of the pond. If the pond is filtered, these larger dirt clumps will then be trapped in the filter sponges. In an unfiltered pond, use a sludge remover treatment. Some precautions need to be observed when using Magiclear. Ensure adequate oxygenation of the water, a minimum carbonate hardness level of 6 dKH and a pH above 7.0. If the pond is very green, it is recommended to change 50% of the water before treating. This will help prevent the severe oxygen depletion that could occur when using the treatment to combat green water algae. As temperatures increase, the amount of oxygen that water can hold decreases. Water rarely exceeds 10ppm dissolved oxygen, whereas atmospheric air contains on average 200,000ppm. It is important to increase and maintain adequate surface agitation to allow as much oxygen into the pond as possible. Fountains, waterfalls and air pumps all help in keeping oxygen levels high during the summer months, and these should be running 24 hours a day. A muggy, stormy night will reduce atmospheric oxygen levels. Plants (and algae) also absorb oxygen at night through respiration, contributing to hypoxic conditions. Many pond keepers wake up to a pond full of dead fish because they turn off their 'noisy' pond at night.

Changing temperatures alters feeding habits

With the rising temperatures of summer, pond fish find their metabolisms increasing too.
A higher metabolism will stimulate their appetite, so their diet should be modified to cater for their changing needs. Medikoi Staple with Colour Enhancer contains spirulina for natural vibrant colour enhancement. Medikoi Health contains propolis, a natural antimicrobial to aid fish recovery after infection, and Stimmune to support the fish's immune system. Medikoi Growth has a higher percentage of protein (40%) for a rapid growth rate. A combination of all three will provide your pond fish with a complete diet that supports colour, growth and vitality. For the ultimate in fish nutrition, Medikoi Probiotic contains specialist gut bacteria and prebiotic ingredients that will enhance absorption and breakdown of waste. When feeding foods that contain high amounts of protein, many pond keepers find a buildup of white froth on the water surface. This is not usually harmful but can be very unsightly. If the problem persists, use Pond - Foam Control to safely break down the froth. Removing this foam will increase gaseous exchange and maintain the correct pH.

Adding fish to the pond

A trip to your local specialist aquatics retailer to add new fish to your pond should always be a pleasurable and enjoyable experience. Whether it's a small selection of goldfish or gigantic koi carp, care should be taken to ensure the fish you've chosen are of good condition.

Healthy livestock - what to look out for

Any reputable aquatic store will have plenty of quality livestock for sale. It is important to ensure that the fish offered are healthy and in good condition. Look out for any of the following to avoid purchasing ill fish:

- Listless / Diseased fish - look out for any fish with obvious diseases: white spot, fungus, tail or fin rot. Fish that are not swimming with the others are often showing signs of illness.
- Emaciated / Skinny fish - a fish that has been deprived of food will have a weakened immune system, making it more susceptible to disease.
- Dirty enclosures - a dirty enclosure would suggest a lack of upkeep and duty of care towards the livestock. Many pathogens thrive in dirty conditions, and the water quality may also be less than ideal.

Once the new fish have been chosen and taken home, it's important to introduce them safely to their new environment.

Temperature acclimatisation: Float the fish bag on the surface of the pond for 20 minutes to allow the water temperature inside the bag to equalise to the pond's temperature. More often than not, this will be a cooling process, as fish bags usually warm up during transport.

Equilibrium of water parameters: After the initial floating period, open the bag and roll down the sides to create a floating ring. Slowly add water into the bag over the next 20-30 minutes. This will mix the pond water with the transport water and allow the fish to adjust to any differing water parameters gradually.

In exceptionally hot conditions, many aquatic stores will add oxygen to the fish bag before transport. Once the fish bag has been opened at home, it is advisable to acclimatise the fish faster than recommended to prevent oxygen depletion in the bag.
I would highly recommend NT Labs Anti-Aiptasia. This is arguably the best on the market!

Viv Samuels-Lee - Consumer Review

We use Blanketweed Balance and find it highly successful in the battle against blanketweed. We recommend it to all of our customers, and feedback is very positive. It is now our best-selling blanketweed treatment. Another top-quality product from NT Labs.

Roydon Hamlet Water Garden

I work at a pet store in Durban (South Africa) and have had the best results with my saltwater aquariums that I have EVER seen. I have been in the industry for about 10 years now and seriously recommend the products supplied by NT Labs. All our customers that have used Anti-Aiptasia swear by this product and have said that it overrules every other product out there. Highly recommended.

Brandon Baker - South Africa

"I called your offices for support and was put in touch with one of your aquatic biologists. He has been incredibly helpful... Without your help, my fish would be suffering and I might have had to stop keeping koi. I cannot recommend your products, or staff, enough."

John - Customer Service Review

I’ve used and sold NT Labs products for many years. Now I have my own shop, I always recommend their products, as I know the quality is there, and I trust that my customers will see this too. They are a great company to deal with, and I always find Nigel, my sales representative, so helpful.

Daryl - Harborough Aquatics
Komodo National Park

Komodo National Park lies in the Wallacea Region of Indonesia, identified by both the WWF and Conservation International as a global conservation priority area, and is located in the centre of the Indonesian archipelago, between the islands of Sumbawa and Flores. Komodo National Park includes three major islands: Komodo, Rinca and Padar, as well as numerous smaller islands, creating a total surface area (marine and land) of more than 1,800km². The boundaries include part of the island of Flores, where there are actually even more dragons than on Komodo itself. As well as being home to the Komodo Dragon, also known as the Komodo Monitor, or Ora (to Indonesians), the park provides refuge for many other notable terrestrial species. Moreover, the Park includes one of the richest marine environments.

Komodo National Park was established in 1980 and was declared a UNESCO World Heritage Site and a Man and Biosphere Reserve by UNESCO in 1986, both indications of the Park's biological importance. The park was initially established to conserve the unique Komodo Dragon and its habitat, first known to people outside the region in 1910, when Lieutenant Van Steyn van Hensbroek of the Dutch Infantry visited as a result of hearing tantalising rumours of their heroic size. Since then, conservation goals have expanded to protecting the Park's entire biodiversity, both marine and terrestrial.

The majority of the people in and around the Park are fishermen originally from Bima on the island of Sumbawa, and from Manggarai, South Flores, and South Sulawesi. Those from South Sulawesi were originally nomadic and moved from place to place in the region of Sulawesi to make their livelihoods. Descendants of the original people of Komodo still live in Komodo, but their culture and language are slowly being integrated with those of recent migrants. Little is known of the early history of the Komodo islanders.
They were subjects of the Sultanate of Bima, although the island’s remoteness from Bima meant its affairs were probably little troubled by the Sultanate other than by the occasional demand for tribute.

Flora and fauna

The number of terrestrial animal species found in the Park is not high, but the area is important from a conservation perspective as some species are endemic. Many of the mammals are Asiatic in origin. Several of the reptiles and birds are Australian in origin. These include the orange-footed scrubfowl, the lesser sulphur-crested cockatoo and the noisy friarbird.

The most famous of Komodo National Park's animals is the Komodo Dragon (Varanus komodoensis). It is the world's largest living lizard and can reach 3m or more in length and weigh over 70kg. Other animals include the Timor deer, the main prey of the Komodo dragon, wild horses (kuda liar), water buffalo, wild boar (babi liar), long-tailed macaques, palm civets, the endemic Rinca rat (Tikus besar rinca), and fruit bats. Also beware of the snakes inhabiting the island, including the cobra and Russell’s pit viper, both of which are extremely dangerous.

As far as the marine fauna is concerned, Komodo National Park includes one of the world's richest marine environments. It consists of over 260 species of reef-building coral, 70 different species of sponges, crustaceans, cartilaginous fish (including manta rays and sharks) and over 1,000 different species of bony fish, as well as marine reptiles (including sea turtles) and marine mammals (dolphins, whales, and dugongs).

The climate is tropical all year round, and both extremely hot and dry (over 40 degrees Celsius) during August and September.

The ferry service (to and from the cities of Sape, on the eastern tip of Sumbawa, and Labuanbajo, on Flores) drops off passengers on Komodo once or twice every week. There is no port on the island, so passengers are unloaded onto small vessels which take them into the island's only village.
(Note that not all departures have this service -- check beforehand.) Daily flights are available between Denpasar Ngurah Rai airport and the Komodo airport at Labuan Bajo. A round trip from Denpasar costs Rp 1,700,000 (USD140). Travelers coming in from Sape to the west (those travelling overland through Sumbawa and also those arriving at Bima airport) should note that the once-daily ferries from Sape (Rp 55,000) can be suspended indefinitely due to bad weather, so if you want to be sure of your travel arrangements, flying to Labuanbajo is a much safer bet. (If you get stranded at Sape, the best Bima airport will be able to offer is a flight back to Denpasar on Bali.)

- Perama Tour. The Hunting Komodo by Camera trip leaves every six days from Lombok. The route is not really on open water because it travels along the coastline, and most importantly the boat has navigation and safety equipment. Stops are made along the way in Labuanbajo and Komodo. The price for a cabin is around Rp 4,000,000; deck class is Rp 3,000,000, where you sleep on a thin carpet.

You need to pay the appropriate fees and permits at one of the park headquarters when you arrive on Rinca island or Komodo island. They are supposed to be valid for three days, even though the ticket might state otherwise. For foreign visitors, these are as follows (as of May 2012):

- Entrance fee: Rp 50,000
- Conservation fee: Rp 20,000
- Photo camera fee: Rp 50,000
- Video camera fee: Rp 150,000
- Ranger/guide for each island: Rp 80,000 (per group)

Additional fees apply for activities (e.g. diving Rp 75,000, snorkelling Rp 60,000) and for research and documentation for commercial purposes.

On land: on foot only, as there are neither roads nor motor transport. On sea: by chartered boat only, as there are no regular connections. As of January 2014, the common price for a two-day boat charter to Rinca island and Komodo island is Rp 2,000,000 (USD160); always negotiate with the boat captain.
The small boat can accommodate 4 people. There are also more luxurious cruises from Bali. You may wish to wear long pants, sunglasses and a hat as you walk in the interior.

The main reasons to travel to Komodo National Park are the Komodo Dragons, the superb beaches and the unspoilt corals. Keep in mind that there are also wild pigs, monkeys and horses on Pulau Rinca, one of the two largest islands in the park. If you return by sea at night, you can also see legions of flying foxes (fruit bats whose wingspan may exceed 4 feet) flying in the twilight sky. At night on the Flores Sea, you also have a magnificent view of the stars.

Depending on the time you have available, take one or more guided tours on the islands of Rinca and Komodo itself. Please note that it is neither permitted nor advisable to do any tours without local guides, as the Komodo Dragons are dangerous when they attack. This area is inhabited by more than a thousand different fish species, making it one of the world’s richest marine habitats. You may also swim in the Flores Sea on your incoming or outgoing boat trip to one of the islands. Beware of sharp corals on the sea floor near some of the small islands.

- Kanawa Island Diving, Kanawa Island, Flores (on Kanawa Island, an hour out of Labuan Bajo on the edge of the Komodo Marine Park), ☎ +62 821 4480-2882. Based on Kanawa Island and working alongside Kanawa Beach Bungalows: 14 rustic bungalows located on the beach, with a restaurant and a daily free transfer from Labuan Bajo. Offers daily 2- or 3-dive trips into the marine park, as well as day and night diving from the beach on the house reef, and a full range of PADI courses from Discover Scuba to Divemaster. The position of the island on the edge of the marine park means a saving on travel time to the dive sites in the central and northern areas of Komodo Marine Park.
- Komodo Liveaboards (Komodo Liveaboards).
Komodo Dive Liveaboards (Rinca Islands Liveaboard) cruises give divers the ability to explore some of the most pristine and diverse marine habitats on the planet, while living right on the Komodo liveaboard diving boat.
- Komodo Kayaking ([email protected]), Eco-Lodge hotel, ☎ +62 817 573 0415. Many of the islands in the chain are either inaccessible to large boats or difficult to access. However, with a sea kayak you can travel anywhere you like: into small grottos and bays, around rocky points, and slowly above shallow reefs brimming with fish.
- Sea Kayaking and SUP'ing Komodo Islands ([email protected]), Komodo Islands, ☎ +61 3 9598-8581. There are dozens of uninhabited islands within and just outside the Komodo National Park. Many of these are only accessible by unmotorised vessels such as sea kayaks and SUPs. These trips are the only fully supported expeditions in the park using proper expedition-style sea kayaks, not sit-on-tops. This agency uses these sit-in kayaks because the seas can turn from tranquil to treacherous quite quickly, and a proper sea kayak is the only vessel that can handle it safely. They also have a 20m support boat. People taking this trip sleep on isolated beaches by pristine reefs, stay in safari-style tents with bush showers and toilets available, and eat wonderful Indonesian cuisine. There are 3-day and 5-day sea kayak and SUP trips all year round, with the best times of year for paddling between April and November. From AUD1800.
- Wicked Diving, Komodo (located on the main road in Labuan Bajo - directly below Gardena Hotel), ☎ +62 821 46 1165538, e-mail: [email protected]. Small dive centre in Labuan Bajo operating their own Komodo liveaboard for 3- and 6-day tours. Offering day trips, training and snorkelling tours.
- Uber Scuba Komodo (http://uberscubakomodo.com/diving-komodo), Jl. Soekarno Hatta, Komodo National Park, e-mail: [email protected]. 7am - 8pm every day.
A professionally run scuba diving centre with extremely experienced staff, Uber Scuba is also the only freediving/apnea operator in Labuan Bajo. They offer daily diving and Komodo liveaboards too. Centrally located and very welcoming.

On Pulau Rinca near the park headquarters you may buy hand-carved wooden Komodo dragons along with park stickers and park t-shirts. Prices may be cheaper in Labuan Bajo, Flores than on Pulau Rinca.

- Ombak Biru (Komodo Dancer), Kuta Poleng A3, Jalan Setiabudi, Kuta, Bali, ☎ +62 36 176-6269. A luxury liveaboard that has been operating in Komodo National Park for the past 10 years. Part of Dancer Fleet Inc., mainly offering 10-night trips to explore North & South Komodo and the surrounding islands.

A limited selection of food is available near the park headquarters on Pulau Rinca, and the prices are not high by Western standards. Under no circumstances drink the tap water; it is not potable. Near the park headquarters on Pulau Rinca, you may purchase water and soft drinks. If you go trekking into the island's interior, be sure to take a large bottle of water with you. You will need it!

Kayaking and camping

The Komodo Islands are made famous by the greatest lizard on the planet, the Komodo Dragon. But the Komodo chain of islands offers so much more than this: pristine reefs, uninhabited islands, white sandy beaches, marine life second to none, and land life as fascinating as the Dragon itself. Many of the islands in the chain are either inaccessible to large boats or difficult to access. However, with a sea kayak, we can travel anywhere we like: into small grottos and bays, around rocky points and slowly above shallow reefs brimming with fish.

The Komodo Dragon has a history of attacking humans. Beware of getting too close, and if you are visiting via the park's office (which you should), ask for a guide and stick close to him. Do not wander off or do anything without his consent.
Komodos may approach the guest rest area during daily feeding time; if this happens, find a building (they are usually elevated) and stay clear of the railings. Komodos can and will jump to obtain food if necessary. Park rangers are usually present at these events and will deflect any Komodos trying to get in (which they can do). You may be given a large pole with a split on the end, forming a "Y" shape. This can be used as a walking pole or for moving things in your path; if wild animals threaten, it can also serve as a last form of defence (despite being hardly useful against Komodos). Overall, keep a watchful eye and steer clear of any wildlife. Komodos are extremely dangerous at close range. They can run faster than humans (and accelerate very quickly), so it is best not to approach them. Jumping into the water (Komodos are often found near the beach too) doesn't help either, as they can swim faster than humans, can dive, and can also swim against strong currents (in fact, Komodos are sometimes found on neighbouring islands, suspected of having swum there). Zoologists formerly believed that the main problem was the dragon's septic bite, caused by the rampant bacteria residing in its mouth. More recently, theories have been put forward that the Komodo Dragon is actually venomous, and that the biggest problem when bitten is shock and massive blood loss due to the ferocity of the bite. In either case, getting bitten is not a good thing. The absence of crocodiles on Komodo Island (due in part to a lack of suitable habitat) leaves the Komodo dragons with no natural predators. Younger Komodos may live in trees. While not as dangerous as their parents, they can still jump off suddenly and cause panic. Snakes, monitor lizards, and other animals are also present and may cause minor problems. Saltwater crocodiles are not present on Komodo Island, but they may be present on the surrounding islands and in the ocean.
So take extra caution in any area with estuaries and river mouths, because the islands are within the natural range of that species of crocodile. It was once believed by Indonesian natives that monitor lizards (including the Komodo dragon) were capable of warning humans of a crocodile's presence, but don't count on this for your safety.

- Bali - the Island of the Gods is two hours' flying time away and is a popular combination trip with Komodo.
The United States has long been a symbol of freedom and democracy, yet some people find it very hard to gain access and, eventually, citizenship. Immigration into the United States is not hard for most people: buying property, learning English, and gaining a green card. For others it can be hard; not having the money or the resources to enter the country legally is usually the main issue. People from Central and South America have their own problems that they wish to get away from; corruption and crime run rampant through many of their countries, and for some the only answer is to come to the United States. Nearly 3,300 people attempt to find refuge in the United States illegally each day, but only 800 of those actually make it to "freedom." Obama has made immigration his priority, but fails to take the reins on reform. Over the past decade illegal immigration across the southern border of the United States has been growing at an exponential rate, creating new and ever more difficult problems. Congress has been attempting to solve the problem of illegal immigration, and the U.S. Conference of Catholic Bishops has begun to show an interest in the topic as well. In a report released on January 30th, the U.S. Conference of Catholic Bishops forecast that nearly 60,000 unaccompanied minors from South and Central America will enter the United States this year across the southern border. This has escalated from fewer than 25,000 the year before, and is an even larger leap from a decade ago, when the figure was just 5,800. Many of the minors who are caught are released to their relatives already within the United States, who in many cases are themselves illegal immigrants (Millman). Many critics of immigration see this as a way for more and more illegal immigrants to flow into the United States without any worry of deportation.
Sadly, many of the minors who cross the border illegally are passed through a mix of government agencies, all with the sole goal of supervising the children and teens before deporting them back to their home country. Even with this, some manage to find legal refuge within the United States. Some Americans only see the UACs (unaccompanied children) as people who wish to find a better place to live and "steal" jobs away from true Americans. They fail to see where the UACs came from and what hardships they faced before deciding to venture to the United States. "Data compiled by that agency show that 95% of what authorities refer to as UACs come from Honduras, Guatemala and El Salvador, Central American republics" (Millman). Within all these countries crime has skyrocketed, and the drug trade plays a major role in escalating the problem to its heightened level. The report from the Catholic bishops stated the main reasons why these UACs choose to come to the United States: poverty, the opportunity for an education, or the urge to join family members already residing within the United States. The biggest single reason UACs cross into the United States, however, is the growing amount of crime and violence within their home countries. "According to a 2011 report by the United Nations, homicide rates increased — in some cases more than doubling — in five out of eight countries in Central America over the previous five years" (Millman). The breakdown of law and the blurred line between government and crime have caused, and are causing, many UACs to flee their home countries. In one case, a young girl attending high school in El Salvador was being harassed by gang members to join their group. She declined to join the gang, which led to many death threats being sent to her and the family staying with her.
Her grandmother, who was taking care of her at the time, contacted a relative living in the Los Angeles area, who agreed to share the $6,000 it cost to transport her granddaughter (Millman). During her voyage to the United States, she and the group she was traveling with were spotted by U.S. Border Patrol agents in Texas last September. She was placed in a youth shelter, where she lived for nearly a month before being released to her relatives within the United States. The process of finding, sheltering, and then releasing illegal minors was carried out nearly 20,000 times in 2013 (Millman). The treatment of children and young adults who have come over the border illegally has changed dramatically over the past decade. New legislation has made it so that these youngsters can have a new life; one not without any fear, but with much less fear and corruption than in their home country. The minors who cross into the United States are treated very differently from those above the age of 18, those seen as adults. President Obama has made this very clear in his recent years in office, coordinating what some would call an "aggressive and sharp-elbowed campaign" (Hennessey). Advocates for immigration have urged President Obama to ease the deportation of illegal immigrants. President Obama claimed immigration reform as his priority while running for his second term, but some would argue that he has taken the wrong steps toward his goal of reforming the process. Illinois Representative Luis Gutierrez pointed out that "President Obama has detained more immigrants in jails, prisons and detention facilities than any other president" (Hennessey). Gutierrez stated this while pointing to Obama's predecessors, former presidents Bill Clinton and George W. Bush.
The accusations against Obama are not new; he has faced much criticism for his unprecedented number of deportations, nearly two million since his oath of office (Hennessey). For the time being, the administration and Democrats in general have chosen not to strike back at the criticism they are receiving for their stance on immigration. The Democratic Party is choosing not to upset Latino voters, for they care a great deal about immigration reform because it can affect them and their families. President Obama says that he is not the reason for the vast number of deportations, but rather that he is at the head of a stalled immigration reform effort. While speaking at a White House town hall meeting, President Obama argued that the constraints placed on him by the law do not allow him to do the right thing for immigrants, and that reform needs to come first. Many advocates for immigration do not take Obama's statements at face value. President Obama made similar remarks about his administration's work toward immigration reform, citing the limits of his executive power, during 2012 (Hennessey). Fortunately, not too long after he made that statement, he issued his Deferred Action for Childhood Arrivals executive order, which allows illegal immigrants brought into the United States as children to apply for work permits in order to avoid deportation. Advocates for immigration want Obama to expand his executive order to also include those who have strong ties to the United States (basically, those who have been in the United States a number of years) and no criminal background (Hennessey). Advocates for immigration have also asked Obama to put an end to the Secure Communities program, which checks the legal status of people fingerprinted at state and local jails. If someone is found to be in the country illegally, the program also gives officials the right to notify immigration authorities.
They are also asking to cancel the partnership between local law enforcement and immigration officials, so that it is easier for people crossing the border to make it safely into the United States. They also want to stop Operation Streamline, which criminally charges people who have crossed into the United States illegally (Hennessey). The administration does acknowledge these calls for reform, but defends its stance on immigration policy. The administration does not want anyone to be able to come into the United States at any time, but simply to make it a little easier for immigrants to gain legal access. It has tried to address the issues brought to it by making sure immigration agents focus first on deporting people with criminal backgrounds, before those with strong family ties in the United States who don't pose a threat to public safety (Hennessey). President Obama does have the executive power to do something about immigration reform, but he is cautious about using it. "President Obama has the right to stop deportations, he just don't want to do it," said Molina, the owner of an American tortilla company whose husband was recently deported back to Mexico (Hennessey). Many people would argue that President Obama needs to worry less about what his party wants of him and more about what the people and citizens of America want of him. One of the largest advocates for deportation among the states is Arizona. In 2010, Arizona sought to stop the illegal immigration of immigrants traveling over the Arizona-Mexico border with the Support Our Law Enforcement and Safe Neighborhoods Act, commonly referred to as S.B. 1070 (Arizona). This took the federal laws against illegal smuggling and the requirement to carry papers at all times and wrote them into state law. Arizona lawmakers took that one step further by allowing police to arrest anyone suspected of a criminal charge that could lead to deportation.
This in turn allows them to hold individuals in custody until a complete check through the federal government has confirmed that he or she is not in the country illegally. People who support S.B. 1070 argue that the federal government has failed to regulate immigration law, resulting in border states, such as Arizona, being overwhelmed by illegal immigrants (Arizona), bringing increased levels of crime and putting pressure on the states' social services (police, hospitals, etc.). People opposed to the law say that it would only lead to further racial discrimination against people of Hispanic descent and would result in many of them being detained for no reason. After much debate, and what seemed like a never-ending war between the law's supporters and the Hispanic community, the federal government finally stepped in (Arizona). In July 2010, a preliminary injunction issued by a federal district court prevented major provisions of S.B. 1070 from going into effect. "These included State penalties for failure to carry documentation and applying for or gaining employment as an undocumented worker, granting the power to police to arrest those they suspect are deportable, and the requirement that police conduct immigration status checks on anyone they arrested, detained, or lawfully stopped whom they suspect is in the country illegally" (Arizona). Arizona appealed this injunction and was granted certiorari by the United States Supreme Court. Arizona's lawyer argued for S.B. 1070, stating that the law didn't create any new criminal charges; it simply allowed Arizona state law enforcement to enforce the federal laws already in place. In addition, he said that state and federal law enforcement officials already coordinate on issues of immigration, and that detaining suspected illegal immigrants in order to check their legal status is a process that usually takes only about an hour, seemingly not an inconvenience (Arizona).
In June 2012, in a 5-to-3 ruling, the Supreme Court chose to affirm almost all of the circuit court's decision against S.B. 1070 (Arizona). The opinion of the Court stated that "Federal law preempted the portions of Arizona’s immigration law that made it a State crime for aliens to be in Arizona without legal papers or to apply for or obtain work in Arizona as an undocumented alien, and that allowed police to arrest anyone they suspected was deportable" (Arizona). This means that Arizona's law would violate federal immigration laws, along with other constitutional provisions. The decision made by the Supreme Court is generally considered a win for those opposed to S.B. 1070 and for the Obama administration. Justice Scalia issued a dissent to the ruling, in which he argued that the Court was not respecting the sovereignty of the State of Arizona, and that the Obama administration was ignoring the problem at the nation's borders (Arizona). The Supreme Court's decision to forbid portions of Arizona's law will most likely affect similar states and their immigration policies. The decision will also set the stage for any future constitutional arguments involving immigration, consequently causing the United States to be more lenient with illegal immigration (Arizona). The United States is gradually moving toward a more lenient state of mind on immigration. One such policy is President Obama's DREAM Act. First suggested in 2001, the legislation is named the Development, Relief, and Education for Alien Minors Act, or DREAM Act. It would help undocumented students who attend United States schools by qualifying them for in-state aid and leading them onto a path to citizenship. The DREAM Act would encompass all immigrants who arrived in the United States as minors, who have lived within the United States for at least five consecutive years, and who are of good moral character (not convicted of any crimes).
A student would receive temporary residency for six years; in that time they would need to complete at least two years of a college degree or serve two years in the United States armed forces. Supporters of the DREAM Act say that the minors who enter the country illegally do so without any consent or knowledge of what they are doing. They also say that instead of the minors growing up and working day jobs, they could be getting an education to obtain a better job and ultimately pay taxes. "They note that the Defense Department has listed passage of the legislation as one of its official goals for helping to maintain a mission-ready, all-volunteer force" (DREAM). People who oppose the DREAM Act argue that it would only increase the problem of illegal immigration, stating that it would provoke more people to venture into the United States to seek refuge. Opponents of the DREAM Act want strict legislation passed on immigration, stronger border protections, and cooperation from Mexico on human and drug trafficking across the border (DREAM). The DREAM Act was sent to the Senate, where it failed to pass by a vote of 56 to 43, falling short of the 60 votes needed to bring the Act to a final vote. Supporters of the Act will continue to try to get it passed so that millions of people residing within the United States can have hope for a better, more sustainable life. With the recent wars in Iraq and Afghanistan, compounded by the recession taking place, it seems as though the DREAM Act has been set aside for now so that the administration can focus on other matters concerning the United States (DREAM). Illegal immigrants who come to the United States are, in many cases, fleeing from oppression in their home country.
Immigrants attempting to gain asylum within the United States must demonstrate that if they are returned home, "they will be persecuted based upon one of five characteristics: race, religion, nationality, membership in a particular social group, or political opinion" (Asylum). Immigrants seeking asylum must first go through the United States Citizenship and Immigration Bureau, where officials determine whether their documentation is fraudulent and whether they have a plausible case for asylum. If they are found to be carrying fraudulent documents, they are placed into immediate deportation, unless there are uncommon circumstances. Once that step is completed, they go to the Executive Office for Immigration Review, where they are placed before a judge who determines whether or not the United States will grant them asylum (Asylum). The Immigration and Nationality Act (INA) states that the Attorney General can exercise his or her power in granting or rejecting asylum into the United States. Anyone who is found to have tortured or persecuted others while in their home country is denied asylum (Asylum). The Act lists many other reasons for instant denial of asylum into the United States, "including when the alien has been convicted of a serious crime and is a danger to the community; the alien has been firmly resettled in another country; or there are reasonable grounds for regarding the alien as a danger to national security" (Asylum). The strict guidelines for asylum ensure that the United States does not accidentally let in someone who could harm its citizens, property, or anyone or anything residing within the United States (Asylum). Immigration in the United States has become a "hot topic"; it seems that the country is split on how we should go about our immigration policy. Some people wish to have reforms to make it easier on immigrants trying to gain access to the country.
Supporters of this position back their stance by describing the harsh conditions of crime and corruption that many people face, causing them to at least try to make it to the United States, to make it to freedom. Much of the illegal immigration that flows into the United States comes across the southern border with Mexico. Poverty, limited human rights, and a very low, or no, minimum wage in South American countries drive people to come to the United States. People who are for strict legislation on immigration and stronger border protection support it because, they say, it allows people a “free ride”. People who manage to enter the country illegally do not have to pay taxes, do not have insurance, are undocumented, and take away from the working men and women of America. Over 70% of all farm hands in the US are undocumented workers. Companies in the US also tend to hire illegal immigrants because they will work for less than the minimum wage and cannot form a union since they are undocumented. Even so, the majority of Americans right now feel that we need to close off our border to illegal immigrants, but this view is heavily influenced by the current recession. America is still a beacon of freedom to the world, and its citizens need to recognize that many people around the world have a much harder life. Freedom is meant to be shared, not kept in a capsule of capitalism.
Millman, Joel, and Miriam Jordan. “Flow of Minors Tests Border.” Wall Street Journal. 30 Jan. 2014: A.3. SIRS Issues Researcher. Web. 17 Mar. 2014. (Millman)
Hennessey, Kathleen, and Brian Bennett. “Obama Urged to Reduce Deportations.” Los Angeles Times. 08 Mar. 2014: A.7. SIRS Issues Researcher. Web. 17 Mar. 2014. (Hennessey)
“Arizona’s Immigration Law,” Congressional Digest 15, no. 6 (September 2012). (Arizona)
“The DREAM Act,” Congressional Digest 89, no. 9 (November 2010). (DREAM)
“Asylum in America,” Congressional Digest 12, no. 3 (March 2009). (Asylum)
It provides a legal framework to protect and manage nationally and internationally important flora, fauna, ecological communities and heritage places — defined in the EPBC Act as matters of national environmental significance. Under pressure from big business, Federal, State and Territory governments are moving forward with an aggressive plan to wind back our environmental protection laws. By cutting 'green tape', handing important federal approval powers to the states, and fast-tracking approvals for large developments, federal protection for our most special places and wildlife will be removed, and mining and other destructive development in our forests, woodlands and along our coasts will be accelerated. History has shown us that the Federal government has a critical role in protecting matters of national environmental significance. Short-sighted development proposals have threatened Australia's natural heritage several times in the past, and the Federal government has stepped in to prevent irreversible harm. Without Federal intervention, the Franklin River would be dammed, there would be oil rigs on the Great Barrier Reef and pristine Shoalwater Bay would be home to a large coal port. Without Federal protection the KOALA will be doomed to extinction. The Places You Love campaign was founded by a group of 30 organisations, including HSI, concerned with the proposals to wind back our system of environmental laws. We strongly believe that the reforms proposed will set us back decades on hard-won protection for our land, water and wildlife. PUBLIC MEETING @ PARK RIDGE: TUESDAY 8 NOVEMBER 2011, Park Ridge (Baptist) Church Main Hall, 3922 Mt Lindesay Highway, Park Ridge 4125 (take the exit off the highway at Park Ridge and follow the eastern service lane for the Mt Lindesay Highway; brick building with car park area provided). NO LOSS OF BIODIVERSITY IS VIABLE. We want koalas, not new tollways.
We want quolls, not more multi-lane roads. Bushland habitat is home to much native wildlife, and wetland areas like Jerry's Downfall, which is part of the Chambers Creek Catchment Area, provide a critical filtering system for the rivers and creeks of Logan. Bushlands, wetlands and river systems all contribute those essentials needed to maintain human life - clean air, clean water and food. Sustainable development, meaning ecologically sustainable development, requires that our human settlement - and moving around - does not destroy the natural environment to introduce more built structures. COME TO THE PUBLIC MEETING TUESDAY 8 NOVEMBER 2011 @ PARK RIDGE @ 7pm - SAME VENUE. PRC INFO SESSIONS: contact Karen 3802 2353, Rod 0408 740 144 or Anne 3297 0624 for more details and contributions to the meeting. Urban Land Development Authority Act 2007 - removes all rights of citizens. E-petitions are one way to let government know how you feel about an issue; a link is at the end of this article. This legislation creates a new Authority, the Urban Land Development Authority, to plan, carry out and promote development in declared areas. The purpose of that Act focuses on housing creation. When an area is declared an urban land development area by regulation, the declaration must make an interim land use plan for the area. The Authority must make a new development scheme for the area, which is like a planning scheme. There is mandatory public notification of the new planning scheme, and public submissions must be taken into account. The development schemes must be published on the ULDA website. The development scheme sets out whether any individual development applications must be publicly notified. The development scheme prevails over plans, policies and codes made under SPA or another Act. Effectively, the Urban Land Development Authority becomes the decision maker in lieu of council.
Only the Authority may go to the Planning and Environment Court to seek enforcement orders or declarations, whereas under SPA any person has that entitlement. Governments - the Queensland Government especially, with the SOUTH EAST QUEENSLAND REGIONAL PLAN (SEQRP) - have been writing and talking about ecologically sustainable development (ESD) and sustainability. The eight hundred local residents who gathered on the grounds at Greenbank - outside the hall - to hear the CEO of the ULDA, Mr Paul Eagles, and his chief planner, Steve Connor, appreciate the time given by these busy men to listen to our community concerns. That is also a huge contribution from the local citizens to help our planners, developers and government achieve the best possible outcome for our global community. However, using a word or expression to describe an activity doesn't make it so. In 1992, the Commonwealth Government offered its own definition of ESD: ecologically sustainable development is using, conserving, and enhancing the community's resources so that ecological processes, on which life depends, are maintained, and the total quality of life, now and in the future, can be increased. This description can in no way be applied to the current high-density housing proposal, which may or may not be approved before any semblance of community engagement, with freedom to access background studies that have been ground-truthed and peer-reviewed by local specialists. The Australian Youth Climate Coalition and World Vision, through the Youth Decide vote, www.youthdecide.com.au, have created an opportunity for young Australians to lay their cards on the table and express what kind of future they want to choose. This campaign is original and creative enough to cut through. The young organizers can help to present a unified youth voice calling for a very different kind of future to the path we're currently on. Voting opens Monday 14th and closes Monday 21st September, and if you're aged 12 - 29 and live in Australia you're eligible to vote.
Choose wisely - it's your future. The petition of Queensland residents draws to the attention of the House the community's total opposition to the construction of this power line because it will be a permanent, obtrusive and dangerous structure that will cause:
- damage to this unique environment by destroying vegetation and native animal habitat and causing erosion of the riverbank, the loss of community amenity and visual amenity (restricting current and future uses of the river by visitors and residents and spoiling this beautiful green landscape and valuable community asset);
- potential health and safety risks to residents, to maintenance and emergency workers during floods, and to grazing and native animals; and
- adverse impacts on property values.
The House should note that the construction of this power line totally contradicts the principles of the Queensland Government's Towards Q2 plan (Green Qld - Protecting our Lifestyle and Environment), which stresses the need to 'protect our natural landscapes' and to 'retain the green spaces between neighbourhoods and regions that create a natural break in our built environment and protect areas that support our unique native wildlife and fragile ecosystems'. GetUp's current campaign alerts all citizens to this rather anomalous situation. Millions of ordinary people are taking personal actions to reduce their energy use and greenhouse gases. Under the proposed scheme to reduce emissions 5-15%, the other aspect is that the big polluters will be able to increase emissions. GetUp National Director Simon Sheikh presents this information, drawn from questions asked of the Department of Climate Change and the Commonwealth Treasury. Target range: the Government's household support package, its industry support package and the carbon price it is using are all based on a 5% target. So while they are attempting to package this up as a 5-15% target range, they are in actuality locking in a meaningless 5%.
The Government did not, as expected, leave the door open to a 25% target. The target can ONLY be changed AFTER 2020. There exists a blatant contradictory flaw in state legislation that permits areas deemed worthy of 999-year conservation agreements between state and landholder to be quashed by short-term mining ventures. In particular, we draw attention to 'Bimblebox Nature Refuge' in the Desert Uplands, which was part-funded by the Commonwealth National Reserve System Programme on account of its outstanding floristic values. It has since become a stable base for numerous scientific research projects relevant to the entire bioregion. 'Bimblebox' is now threatened by the development of a large open-cut coal mine. For further information go to the website. Several research and monitoring programs are under way at this valuable biodiversity site. Coal exploration activities are likely to affect the results of these ongoing monitoring activities by creating increased human presence in a relatively isolated area, increasing 'edge effects' on woodland fauna, and resulting in significant amounts of clearing. Black Duck Valley is closed to customers. A petition to The Honourable Judy Spence, Minister, Department of Sport and Recreation, has been started by individuals hoping to continue the operation of the facility: the Petition Online link can be accessed and signed here. Black Duck Valley has been a sports and recreational facility for motocross riders. Queensland's population, particularly in the south-east, has grown exponentially, and sales of motocross bikes have also increased, but riders now have fewer facilities than a few years ago. Not all people are unhappy about the closure, as this archived information shows.
Current opportunities to support Queensland Parliamentary e-petitions are available here, including recycled sewerage effluent, daylight savings, Gold Coast Hospital, light rail for Brisbane, the proposed destruction of Fairy house to upgrade roads, and mining in wildlife reserves. Other opportunities to comment and help shape our future are available online as follows. According to the Minister for Natural Resources and Water, Queensland has a number of river systems which have been almost untouched by development and are therefore in near natural condition, with all, or almost all, of their natural values intact. They are important because they:
• help sustain healthy ecosystems for native plants and animals
• support sustainable economic activities, such as grazing, fishing and eco-tourism
• provide unique opportunities for recreation and tourism.
One way of preserving this valuable part of our natural heritage for the benefit of current and future generations is to designate them as 'declared wild rivers'. Do you agree with this statement? Read more about this process and have a say; you can do this online from here. Biosecurity Queensland classes camphor laurel as a Class 3 weed, meaning it cannot be sold and should be removed because it is capable of replacing native trees and disrupting power facilities. Yet changing the proposed route of the power line to protect such a tree is the only request that ENERGEX staff are accepting. This story appears in the ALBERT AND LOGAN TIMES, Wednesday, 11 March 2009, which you can read here: 030911_energex_save_weed.pdf. VETO held a public rally at 2pm, Saturday 14 March, at Logan Village Green. Visit their website www.veto.org.au to find out how you can help this campaign to protect the iconic Logan River. Contact spokesperson Marie Slingsby for further information.
The petition of residents of the State of Queensland draws to the attention of the House issues relating to the proposed fast-tracking of Greenfield housing developments on the Sunshine Coast. [Though this petition relates to the Sunshine Coast, all Queensland residents are eligible to sign. Perhaps other petitions can be requested to cover other areas where land will be fast-tracked for release?] The closing date was 24 August 2008. GetUp is an independent, not-for-profit community campaigning group that uses new technology to empower Australians to have their say on important national issues. It receives no political party or government funding, and every campaign run is entirely supported by voluntary donations. If you'd like to contribute to help fund GetUp's work, you can donate online. You may have missed it, but the Tasmanian Government last week unbelievably signed an agreement handing over Tasmania's forests to the Gunns pulp mill for the next 20 years - in the very same week Professor Garnaut warned them of the dire climate change consequences facing us. If we don't act now, bulldozers will start clearing land for the mill, which will contribute 2% of Australia's greenhouse emissions - at a time when we're being told we need to drastically cut our emissions. But unfortunately Australia's forests were largely left out of Garnaut's recent interim report. We have only one opportunity to put them in the picture. A proper assessment in his impending Climate Change Report of our native forests' climate change value may just sink the mill project. Click here now to sign the petition asking Professor Garnaut to examine the full climate impact of this mill madness and the logging of Tasmania's native forests. The Sunshine Coast Environment Council has launched a campaign in response to the Premier (Anna Bligh) announcing that Greenfield sites will be fast-tracked for development within the next 12 months.
One of the many aspects of this campaign is to send bulk letters to the Council of Mayors urging them to 'band together' and say NO to the State Government's inept plan. It focuses on South East Queensland (not just the Sunshine Coast). Act now to protect Moreton Bay's endangered sea turtles and dugongs! Please help Moreton Bay's threatened marine wildlife today. Give 5 minutes of your time to sign this online letter to the Minister for Sustainability, Climate Change and Innovation asking for greater protection in the Marine Park, then forward it to your friends and family. Over the summer the Queensland Government released a draft zoning plan for Moreton Bay, earmarking a mere 15% protection in Marine National Park zones. These are areas we are free to enter - to swim, boat, dive and snorkel - but where all wildlife is safe from harm. While 15% is better than the current protection of less than 1%, it does not go far enough for our threatened wildlife. Marine scientists around the globe say that it is critical that at least 30% of all ocean habitats, such as seagrasses and corals, are given Marine National Park status. Our chance to support our turtles and dugongs is closing fast. Public comments were due by 5pm Friday 7 March 2008. Moreton Bay Marine Park is only reviewed every ten years, so this is a once-in-a-decade opportunity. Click here! Act now and sign the letter. With overwhelming public support we really do believe that the Government will provide more than a mere 15% protection for our precious marine wildlife. Don't let this chance go by. Our turtles, dugongs and other wildlife need your support today. Your grandchildren will thank you for it. More than one million people worldwide have signed the WSPA Animals Matter petition for a Universal Declaration on Animal Welfare at the United Nations. Every country in the world is now represented on the petition.
The Australian government has appointed a representative to liaise on this issue to help integrate animal welfare into the UN agenda. This is another great step forward towards a Declaration that will help protect all animals, everywhere. Please sign the Animals Matter petition to help make the Declaration a reality. Go to the link below. Karawatha Forest is a conservation area of unique value. It is listed in the globally respected PPBio alongside items such as the Amazon Rainforest. It is a sensitive environment for both local plants and animals. It is part of a State Government gazetted corridor stretching from Karawatha Forest to the QLD-NSW border. The Brisbane City Council has an excellent program to develop Oxley Creek as a unique flora- and fauna-watching experience for local and international visitors. A visit to Karawatha Forest can be a supplementary activity for these visitors to see unique wildlife in another Australian ecosystem. This helps bring tourist dollars and jobs to the area. The Forest has been acquired with funds from the Bushland Levy and with donations from the State Government “to preserve the natural environment by conserving the ecology of the Forest”. We petition the Brisbane City Council to abide by the original objective for the acquisition of all parts of Karawatha Forest and to:
- develop safe walking trails in Karawatha Forest
- have signage encouraging people to use ONLY walking trails
- have an advertising program encouraging ecotourism in Oxley Creek and Karawatha Forest
- exclude any wheeled vehicle, particularly mountain bikes and quad bikes (vehicles for the disabled and prams are allowed)
- exclude pets (assistance dogs allowed)
- resource effective monitoring of conditions of entry in order to safeguard the biodiversity and sensitive integrity of the Forest.
Karawatha Forest has two road frontages with Logan City and is a great asset to residents of and visitors to Logan.
LACA encourages all who appreciate and value the ecosystem services provided by a forested area to sign the petition.
The majority of funding for North Carolina’s public schools comes from the State. Each county, however, is required to annually fund most capital expenses and at least some operating expenses of its local school administrative unit(s). Questions often arise as to whether, and how, a county can direct its appropriations to specific operating expenditures. (I’ve blogged about a county board’s authority to direct specific school expenditures here.) This issue is further complicated by the fact that a local school unit must allocate a portion of its operating monies to charter schools attended by students located within the school district. There has been considerable conflict between local school units and charter schools over how to interpret this mandate. The North Carolina Court of Appeals has weighed in several times on this issue, most recently last week. The legislature also has made statutory changes over the past few years in an attempt to clarify the directive. This blog summarizes the current law governing how county appropriations and other revenues may be allocated among the various funds to local school units for operating expenses. It defines “fund” and describes the authorized funds for local school units. It then details how a local school unit may allocate monies among the various funds while complying with the statutory mandate to direct certain operating monies to charter schools. Local School Unit Funds What is a fund? A fund is a separate fiscal and accounting entity having its own assets, liabilities, equity or fund balance, revenues, and expenditures. Government activities are grouped into funds to isolate information for budgeting and accounting purposes. G.S. 115C-426 directs the State Board of Education to promulgate a uniform budget format for local school units. According to the statute, the uniform budget format must include at least three funds—the State Public School Fund, the local current expense fund, and the capital outlay fund. 
The statute specifies the types of revenues (and in some cases the types of expenditures) that must be accounted for in each of these funds. The statute allows for the creation of additional funds to account for specified revenues or expenditures. Pursuant to this authority, the North Carolina State Board of Education, through the North Carolina Department of Public Instruction, has established a uniform chart of accounts, which authorizes up to nine different funds. State Public School Fund The state public school fund, referred to as Fund 1 in the uniform chart of accounts, must include appropriations for the current operating expenses of the local school unit from monies made available by the State Board of Education. Capital Outlay Fund The capital outlay fund, known as Fund 4, must include any revenues allocated for the local school unit’s capital expenses, regardless of the source of the funds. Local Current Expense Fund The local current expense fund, or Fund 2, is the primary fund to which county appropriations and other local monies are budgeted. Specifically, G.S. 115C-426(e) states that the appropriations to the fund shall be funded by revenues accruing to the local school administrative unit by virtue of Article IX, Sec. 7 of the Constitution [penalties, fines, and forfeitures moneys], moneys made available to the local school administrative unit by the board of county commissioners [direct county appropriations], supplemental taxes levied by or on behalf of the local school administrative unit pursuant to a local act or G.S. 115C-501 to 115C-511, State money disbursed directly to the local school administrative unit, and other moneys made available or accruing to the local school administrative unit for the current operating expenses of the public school system.
The statute exempts “the appropriation or use of fund balance or interest income” from the local current expense fund, even if the fund balance or interest income derives from county appropriations or other monies in the local current expense fund. G.S. 115C-448(d) also exempts “special funds of individual schools.” The local current expense fund is not the only fund to which monies may be allocated for operating expenses. A local school unit is authorized, but not required, to establish other funds to account for certain revenues, specifically “reimbursements, including indirect costs, fees for actual costs, tuition, [sales tax revenues distributed directly to a school unit that has a voted supplemental tax], sales tax refunds, gifts and grants that are restricted as to use, trust funds, and federal appropriations made directly to local school administrative units. . . .” The statute also allows a local school unit to establish a separate fund to account for any “funds received for prekindergarten programs,” which may include county appropriations or other local funds. The uniform chart of accounts authorizes a local school unit to establish Fund 8 to account for most of these revenues and expenditures. Charter School Allocations What funds are charter schools entitled to? 1. Charter schools are entitled to a proportional share of state appropriations to a local school unit. The State Board of Education must allocate to each charter school an amount equal to the “average per pupil allocation for average daily membership from the local school administrative unit allotments in which the charter school is located for each child attending the charter school . . . .” G.S. 115C-238.29H. Excluded from this calculation are funds for children with disabilities or limited English proficiency. A charter receives separate allocations to serve these student populations. 2. Charter schools also are entitled to a proportional share of monies in the local current expense fund. 
Specifically, G.S. 115C-238.29H(b) requires a local school unit to “transfer to [each] charter school an amount equal to the per pupil share of the local current expense fund.” The North Carolina Court of Appeals has held that the statutory allocation applies regardless of the source of the revenues in the local current expense fund or of their intended expenditure. See Sugar Creek Charter School, Inc. v. Charlotte-Mecklenburg Bd. of Educ., 195 N.C. App. 348 (2009) (Sugar Creek I). Furthermore, if monies are allocated to the local current expense fund, they must be shared with charter schools even if those monies could have been allocated to another fund. See Thomas Jefferson Classical Academy v. Rutherford County Bd. of Educ., 215 N.C. App. 530 (2011) (Thomas Jefferson I). To illustrate, consider monies allocated to a local school unit for pre-K programs. According to G.S. 115C-426(c), these monies may be accounted for in another fund, such as Fund 8. If a school unit receives an appropriation from the county (or monies from another source) to support a pre-K program and allocates the monies to Fund 8, the school unit is not required (or authorized) to share the monies with a charter school. If, however, the local school unit allocates the monies to Fund 2 (local current expense fund), then it must share a proportional amount of the monies with an eligible charter school. This is true even if the charter school does not have a pre-K program. In calculating a charter school’s proportional share of the local current expense fund, a local school unit must include all monies in the fund, even those that are restricted as to use. See Thomas Jefferson I. The number of pupils included in the calculation, however, may only include those legally entitled to enroll (and actually enrolled) in the public school system or charter school. A local school unit may not, for example, include pre-K students in the calculation.
Although some of the funds in the local current expense fund may be expended on pre-K programs, only children who meet the enrollment criteria in G.S. 115C-364 may be counted. 3. Charter schools are not entitled to a share of monies allocated to any other fund, including the capital outlay fund. Only monies allocated to the local current expense fund must be shared with a charter school. Revenues properly allocated to any other fund authorized by the uniform chart of accounts are not shared with a charter school. That means that there is no statutory requirement (or authorization) for a local school unit to distribute monies allocated to the capital outlay fund to a charter school. Moreover, as discussed in this post, the court of appeals has held that a charter school does not have a constitutional right to receive funding for capital outlay expenses. See Sugar Creek Charter School, Inc. v. State of North Carolina, 214 N.C. App. 1 (2011) (Sugar Creek II). According to the court, there is “no basis for constitutional concern arising from the use of differing funding mechanisms to support different types of public schools that are subject to different statutory provisions.” What monies for operating expenses may be allocated to another fund (other than the local current expense fund)? Over the past few years, in light of the court of appeals decisions cited above, local school units have increasingly allocated monies for operating expenses to funds other than the local current expense fund. (In most cases, the monies have been allocated to Fund 8 instead of Fund 2.) However, the court of appeals recently held that allocating monies to the various funds is “not solely in the discretion of the local school board. . . .” Thomas Jefferson Classical Academy Charter School v. Cleveland County Bd. of Educ., No. COA13-893 (June 3, 2014). In fact, the default requirement of G.S. 
115C-426 is that all monies appropriated, made available, or accruing to the local school unit to fund its current operating expenses be allocated to the local current expense fund (and thus shared with a charter school). The court recognized that the legislature has carved out an exception to this default allocation, allowing a local unit to direct some operating monies (which the court refers to as “restricted” monies) to other funds. According to the court, though, the exception is limited to the categories specifically listed in G.S. 115C-426(c):
- Reimbursements, including indirect costs
- Fees for actual costs
- Sales tax revenues distributed using the ad valorem method pursuant to G.S. 105-472(b)(2)
- Sales tax refunds
- Gifts and grants restricted as to use
- Trust funds
- Federal appropriations made directly to local school administrative units
- Funds received for prekindergarten programs (regardless of their source)
Thus, these categories, most of which are identified by revenue source, are the only operating monies that may be allocated to Fund 8 (or any fund other than the local current expense fund). Can a local school unit change allocations among funds after their initial allocations/appropriations? If a local school unit allocates one or more of these categories of monies to the local current expense fund, may it subsequently amend its budget resolution to move the monies to another fund? The answer is “yes,” if the amendment is done in the current fiscal year and involves monies that have not yet been expended. See Thomas Jefferson I. A school unit may not reallocate funds from prior fiscal years, though.
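As a rough illustration of the proportional-share rule discussed above, the sketch below computes a charter school's share of a local current expense fund. The function name and dollar/enrollment figures are hypothetical, and the sketch assumes one common reading of G.S. 115C-238.29H(b) (per-pupil amount = fund divided by the combined eligible enrollment of the district and the charter school); the precise statutory calculation should be confirmed before relying on it.

```python
# Hypothetical sketch: a charter school receives the per-pupil share of
# the local current expense fund. Assumption (not from the statute's
# text verbatim): per-pupil amount = fund / combined eligible enrollment.

def charter_share(local_current_expense_fund: float,
                  district_pupils: int,
                  charter_pupils: int) -> float:
    """Amount the district transfers to one charter school.

    Only pupils legally entitled to enroll (and actually enrolled) in
    the public school system or the charter school are counted --
    pre-K students, for example, are excluded from both figures.
    """
    per_pupil = local_current_expense_fund / (district_pupils + charter_pupils)
    return per_pupil * charter_pupils

# Hypothetical figures: a $40,000,000 fund, 19,000 district pupils and
# 1,000 charter pupils give $2,000 per pupil, so $2,000,000 transfers.
print(charter_share(40_000_000, 19_000, 1_000))  # -> 2000000.0
```

Note that monies properly allocated to another fund (Fund 8 or the capital outlay fund) would be excluded from `local_current_expense_fund` before this calculation, since only the local current expense fund is shared.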
“Justice delayed is justice denied.” As propounded by William Ewart Gladstone, when proper justice is not delivered to the aggrieved party in a timely manner, not only does he/she suffer a violation of his/her rights, but society as a whole also suffers a loss of faith in the efficacy of its justice system. This justice system needs to be set in motion and apprised of any violation of the legal rights of citizens for it to work properly. The F.I.R (First Information Report) thus becomes an important tool, as it sets the machinery of the criminal justice system in motion. An F.I.R, filed under Section 154 of The Criminal Procedure Code, 1973, is written information about the alleged commission of a cognizable crime. It contains all the vital information about the date, time, occurrence of the crime, eyewitnesses, the complainant’s statement, etcetera. The F.I.R so filed is then registered and duly numbered, and the police then commence investigating the matter. What is a Zero F.I.R? An F.I.R is usually filed at the police station which has territorial jurisdiction over the alleged crime. However, in certain cases, it is not possible for the victim to file an F.I.R at the concerned police station. He/she can instead file a ‘Zero F.I.R’ at the nearest accessible police station. A Zero F.I.R has to be lodged by every police station irrespective of lack of territorial jurisdiction. The Zero F.I.R is registered and marked with serial number zero (thus the name ‘Zero’ F.I.R). Such a police station might even make primary inquiries and investigations if urgently required by the nature of the case. Then, the Zero F.I.R is duly transferred to the competent police station under whose territorial jurisdiction the crime took place, for it to undertake proper investigations in the matter.
In the case of Satvinder Kaur vs State[i], the court, referring to Sections 177 and 178 of The Criminal Procedure Code, 1973, said that even if the investigating officer of a police station was sure that the alleged crime had not been committed within the territorial jurisdiction of his police station, an F.I.R still had to be lodged, which would then be forwarded to the police station having jurisdiction over the area in which the crime was committed. The court also clarified that this did not mean that, where a case required investigation, the police officer could refuse to record the F.I.R and/or investigate it. The concept of the Zero F.I.R was first introduced by the Justice Verma Committee, constituted in the aftermath of the horrific Nirbhaya Delhi rape case to suggest amendments to India’s criminal justice system. These recommendations were later incorporated into the Criminal Procedure Code, 1973 by the Criminal Law (Amendment) Act, 2013 (also called the Nirbhaya Act). In the case of Lalita Kumari vs Govt. of U.P. & Ors.[ii], the five-judge bench of the Supreme Court restated that a police officer is bound to register a First Information Report (F.I.R) upon receiving any information relating to the commission of a cognizable offence. Further, the court said that no preliminary inquiry shall be made before filing an F.I.R, except to the extent of ascertaining whether the information reveals any cognizable offence. The Zero F.I.R was introduced so that the victim does not get stuck in the procedural technicalities of the law at a time of distress. It was also introduced to enable the concerned police station’s SHO to initiate investigation in a matter as soon as possible, so that justice is not denied to any of the parties involved.
In the event of a police officer refusing to register an F.I.R, he/she shall be liable for punishment under Section 166 of the Indian Penal Code, 1860 (relating to a public servant disobeying a direction of the law). A Zero F.I.R is not to be confused with continuing offences spanning multiple territories. These are cases where different parts of an offence take place in different territories. In such cases, all the police stations of those territories have jurisdiction, and the F.I.R can be filed anywhere without needing to be transferred to the place where the first part of the offence took place, as the offence will be deemed to have taken place in each of the territories. Importance of Zero F.I.R The Zero F.I.R is particularly important in sensitive cases, where the crime should be reported to the justice system as soon as possible so that appropriate steps can be taken to deliver justice to all. For example, in rape and sexual assault cases, evidence such as physical examinations, semen samples and fluid tests has to be collected as soon as possible, as its evidentiary value runs a high risk of deteriorating with time, which might lead to a lack of evidence. Further, homicide cases, especially heinous ones, require the alleged perpetrator to be caught before he/she causes more harm or absconds from the law. Appropriate evidence, eyewitness accounts and other circumstantial details in these cases can be collected efficiently once the F.I.R has been duly registered. Investigation and evidence collection can only begin once the F.I.R has been duly lodged and registered. The Zero F.I.R provides a shortcut in the whole procedure and allows the F.I.R to be filed anywhere, so that it can be forwarded to the competent authorities and investigation can begin immediately. It also becomes an important tool in cases of crimes perpetrated while travelling.
The victim in such cases can reach out to the nearest police station en route and lodge a Zero F.I.R, which shall be sent to the appropriate police station, which shall then immediately commence action, instead of the victim having to file an F.I.R at the place where the crime took place and suffer the inconvenience of reporting to that police station time and again. A Zero F.I.R also proves helpful in cases where the victim is unaware or unsure of the correct territorial jurisdiction his case falls under. Therefore, instead of being turned away from justice, he can simply lodge an F.I.R and due action can be taken. Lack of awareness of Zero F.I.R As good as the reform sounds on paper, its execution in the field remains dicey. Firstly, the police themselves are often not aware of the reform, despite the amendments to the Criminal Procedure Code and the multiple guidelines issued to them. There have been countless instances where police officers, out of ignorance, have turned the victim away, citing lack of territorial jurisdiction as the reason. The most recent example of police officers refusing to file an F.I.R is the Hyderabad rape case. The victim’s parents were turned away from their nearest police station when they went to file an F.I.R against the perpetrators, with lack of territorial jurisdiction cited as the reason. Secondly, citizens themselves are not aware of their legal right to lodge a Zero F.I.R. Even six years after the 2013 amendment made the lodging of a Zero F.I.R a statutory duty, police across the country are still wrangling over territorial jurisdiction issues. Moreover, despite several strict guidelines from the Ministry of Home Affairs mandating the concerned departments to compulsorily register Zero F.I.Rs, ignorance prevails at large at the grassroots level. Abuses of Zero F.I.R While this reform is hailed as revolutionary, there have been abuses of the power it confers.
There have been instances of parties colluding with the police to delay and hamper preliminary investigations by lodging a Zero F.I.R in territories where they exercise significant influence, leading to delayed and tainted investigations. The case of Bimla Rawal and Ors. v State (NCT of Delhi)[iii] illustrates this abuse. In that case, a crime was perpetrated in Mumbai, but the police, in collusion with the powerful perpetrator, lodged a Zero F.I.R in Delhi. Upon a writ petition regarding the mala fide intention behind filing such an F.I.R, the Supreme Court found it to be a case of misuse of police power at the perpetrator’s behest; consequently, the Supreme Court quashed the Zero F.I.R lodged in Delhi and ordered a new F.I.R to be lodged in Mumbai. The legal right to lodge a Zero F.I.R is a very noble one. It inspires confidence in the minds of the general public that, upon any wrongdoing, their voices will be heard rather than muffled under the procedural technicalities of the law. However, maladies such as ignorance of the law among police and citizens, and the intentional abuse of the right to file a Zero F.I.R, can destroy this confidence and trust in the law. For law and order to exist, it is essential that this confidence is maintained. The recent refusal to register an F.I.R in the Hyderabad rape case sparked dissent among the public towards the legal system. Making distressed victims run hither and thither to fulfil a mere time-consuming procedural technicality does not say much about the efficiency of the justice system. Where landmark judgments, amendment acts and even the strict guidelines of the Ministry of Home Affairs have remained ineffective, steps should be taken by the government to ensure that citizens, as well as the functionaries of the justice system, are aware of their right to the Zero F.I.R. Though there have been cases of abuse of the reform, this cannot be used as an excuse to dissolve the right to file a Zero F.I.R.
Instead, fixing responsibility right from the lower rungs of the criminal justice administration, without letting them claim ignorance of the law, would be beneficial. Moreover, the F.I.R is just the first step in the criminal justice system. Later steps, such as the free and fair trial of alleged criminals, are equally important. This was again compromised in the Hyderabad rape case, when all four alleged perpetrators were shot dead by the police without proper trials to confirm their guilt. The criminal justice system remains riddled with these problems, and as appreciable as these reforms are, without proper implementation measures in place they remain of no use to either the victim or the perpetrator. Endnotes: [i] 1999 Supp (3) SCR 348; [ii] (2014) 2 SCC 1; [iii] (2003) 2 CALLT 23 SC. Oshin is a fluent and eloquent writer. He is a student of the prestigious NALSAR University of Law, Hyderabad. His hobbies include writing, illustrating and listening to music. For any clarifications, feedback, and advice, you can reach him at [email protected]
<urn:uuid:0bd2969b-8c9c-4f3c-9d42-3e9071b29ed9>
CC-MAIN-2021-43
https://lawcirca.com/what-is-zero-f-i-r/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00591.warc.gz
en
0.953215
2,247
2.59375
3
They showed a piece of paper saying “eminent domain.” —Buckner and Garcia, Wreck It, Wreck-It Ralph (2012) Think about it: What the Court said was that the government can take your property from you and give it to someone else simply if it believes that someone else will make better use of it. —Malcolm Gladwell, “The Nets and NBA Economics” (2011) Disney’s latest hit, Wreck-It Ralph (2012), shows us the behind-the-scenes life of arcade game characters. The protagonist, not-yet-hero, is Wreck-It Ralph, “a giant of a man.” In the game he spends all his time trying to demolish an apartment building, while his adversary, Fix-It Felix, Jr., works diligently to repair the damage. Behind the scenes, Ralph is depressed at having to play the bad guy and at being treated like one by the other residents of his game-world. Eventually, Ralph sets out to win a hero’s medal, and we follow his adventures. Along the way, we learn to like, even respect, Ralph—or at least the person that Ralph is becoming. But how does Ralph’s growing heroism square with his role as the game’s bad guy? What really makes Ralph tick? Ralph’s motivations finally come out in the lyrics of the closing credits. In a Buckner and Garcia song, we’re told: He was minding his own business on the day they came They showed a piece of paper saying “eminent domain” They built an apartment building saying progress was to blame So he got mad And he turned bad Brick by brick he’s gonna take his land back “Eminent domain” is a legal term. It expresses the right or power of the State to take private property for public use, with or without compensation. The apartment building amounts to a city-sanctioned confiscation of Ralph’s property. So all along Ralph has been fighting to push off the city-sanctioned squatters and take back his land, “brick by brick.” From Ralph’s point of view, he’s the victim and the underdog. 
In Real Life Ralph’s situation is reminiscent of two prominent cases that turned on the issue of eminent domain. One is Kelo v. New London (2005). Suzette Kelo sued the city of New London, Connecticut for giving her home to a pharmaceutical company in the name of economic development. She argued that it was unconstitutional for the city to take private property from one individual or corporation and give it to another. The United States Supreme Court disagreed. In a five-to-four decision, the Court announced that “public purposes” qualifies as “public use” as it’s described in the Fifth Amendment. A second case involves the Brooklyn Nets. New York real-estate developer Bruce Ratner discovered a choice piece of partially undeveloped land in Brooklyn, perfect for luxury high-rises. But fourteen acres of older businesses and homes stood in the way of any sort of redevelopment project. Ratner needed those structures condemned, and so he fastened on a traditionally sanctioned “public use” for otherwise questionable property—a stadium. And so he bought the New Jersey Nets. The City of Brooklyn jumped onboard, exercised its power of eminent domain, and handed the land off to Ratner (2010). But along came the recession, and Ratner had to rearrange his capital. He sold the Nets and scaled back his building plans. He still ended up—by his own projections—with an annual return of 10 percent. The Theology of Eminent Domain Where do property rights end and the good of society begin? To answer the question biblically, we must consider the issue of original ownership. “The earth is the LORD’S, and the fullness thereof; the world, and they that dwell therein,” the psalmist tells us (Ps. 24:1). That is, God as Creator owns the whole earth. He owns all the real estate, and He owns the men, women, and children who walk its surface. He owns all the land and all of society. All property rights are, therefore, His. 
Through His law, God has delegated temporary and limited ownership of property to men. This ownership is defined and delineated by His law. It is summarized in the Eighth Commandment: “Thou shalt not steal” (Ex. 20:15). Jesus put the matter in more positive terms when He has a property owner in one of His parables say, “Is it not lawful for me to do what I will with mine own?” (Matt. 20:15). Note the word “lawful.” The issue is not property rights versus the needs of society; the issue is the law of God. Scripture nowhere gives the State the power to take property, landed or otherwise, from law-abiding citizens. Quite the contrary: at the beginning of the monarchy, God warned Israel that their kings would eventually assume eminent domain over their people’s lands: And he will take your fields, and your vineyards, and your oliveyards, even the best of them, and give them to his servants. (1 Sam. 8:14) In Ezekiel’s vision of the Restoration Covenant, God turned this warning into an explicit prohibition: Moreover the prince shall not take of the people’s inheritance by oppression, to thrust them out of their possession; but he shall give his sons inheritance out of his own possession: that my people be not scattered every man from his possession. (Ezek. 46:18) Scripture also touches on the matter of eminent domain and legalized theft in a story in 1 Kings. Naboth owned a vineyard just beyond the walls of king Ahab’s summer palace in Jezreel (1 Kings 21). The vineyard was beautiful and well situated. Ahab thought it would make a very pleasant and useful addition to the royal estate. He spoke of making it “a garden of herbs” or vegetables. So Ahab made Naboth an offer: He would give Naboth a better vineyard in trade or the equivalent in hard cash. So far, so good, at least legally. Ahab made Naboth a legitimate offer, and the Mosaic Law did allow for the short-term sale of land (what amounted to a lease) until the Jubilee, which came every fifty years.
But Naboth wasn’t interested in the offer. His motivations were religious and theological. He said, “Yahweh forbid it me that I should give the inheritance of my fathers unto thee” (1 Kings 21:3). Naboth’s land had been entrusted to his family by God Himself as a perpetual inheritance. And though the letter of the law allowed for a temporary sale or lease, Naboth believed that the land was his responsibility and stewardship under God. He believed that selling the land would be a betrayal of that stewardship. So he simply said, “No.” Ahab went home and pouted. “Heavy and displeased,” he lay down on his bed, turned his face to the wall, and refused to eat (v. 4). When his queen, Jezebel, asked for an explanation, he told her exactly what had happened. Jezebel arranged for the elders of the city to proclaim a public fast (v. 9). They were to set Naboth in a position of honor. But at the same time they were to arrange for false witnesses to accuse Naboth of blasphemy, a capital crime in Israel. Based on this false testimony, they were to execute Naboth (v. 10). The elders did as Jezebel required. Then they sent word that Naboth was dead. Jezebel told her husband and sent him off to take possession of his new vineyard (v. 15). But when he came to the vineyard, he found the prophet Elijah waiting for him. Elijah’s prophecy put the blame for Naboth’s death squarely on Ahab: Thus saith the LORD, Hast thou killed, and also taken possession? … In the place where dogs licked the blood of Naboth shall dogs lick thy blood, even thine. (v. 19) Moreover, Elijah prophesied the brutal death of Jezebel and the doom of all Ahab’s house. Only Ahab’s superficial humility and repentance stayed the sentence for a little while (vv. 27-29). Obviously, the concept of eminent domain was foreign to Israel, even in her apostasy. Ahab knew the only legitimate way he could acquire the land was by an un-coerced economic transaction. Naboth had to freely agree to the sale.
Jezebel, a Canaanite princess by birth, also understood the law, but saw no reason that a real king should be hampered by it. So she used conspiracy, perjury, and legal formalities to commit murder and theft. But when God reckoned up the final account, He laid the blame on her husband, on Ahab. No plausible deniability. No “I didn’t get the memo.” It was Ahab’s responsibility to know, and, in fact, he did know. God held him accountable, and when judgment came, it was terrible. A society that rejects the sovereignty of the Creator will not abandon the idea of sovereignty. It will simply relocate it to somewhere within the creation—usually in the State. Eminent domain as it exists in American law is merely a recognition of such sovereignty. There can be no appeal against this sovereignty from within the system. God, however, can bring judgment from outside the system. He can disinherit the oppressors and in fact, He has promised to do just that. (1 Sam. 2:5-10; Luke 1:51-54) For Further Reading: Rousas J. Rushdoony, “Eminent Domain” in The Politics of Guilt and Pity (Nutley, NJ: The Craig Press, 1970). Rousas J. Rushdoony, “Eminent Domain” in The Institutes of Biblical Law (N. p.: Craig Press, 1973). Alyssa Rosenberg, “Real-Estate Developers and Wreck-It Ralph,” ThinkProgress.org, Nov 8, 2012, <https://thinkprogress.org/alyssa/2012/11/08/1163081/guest-post-real-estate-developers-and-wreck-it-ralph/>. Malcolm Gladwell, “The Nets and NBA Economics,” Grantland, Sept 26, 2011,
<urn:uuid:d8677cd1-7ce4-4ff1-8394-bfc3fdcf8285>
CC-MAIN-2021-43
https://www.offthegridnews.com/religion/wreck-it-ralph-and-eminent-domain/print/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00029.warc.gz
en
0.961324
2,297
2.515625
3
Good nutrition isn't just important for the health of our bodies but essential for our mental health. With the news this week that Childline has been inundated with anxiety calls as children express fears over global events, it's more important than ever that we understand mental health and the role of nutrition, both for ourselves and for the next generation. Stress and anxiety play a crucial role in health, can have long-term effects, and can influence the course of a chronic illness. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2568977/) I work on a holistic basis, meaning I view the entire body as interconnected, something that can only be understood as a whole rather than as just one part. When I look at someone's health, I take into consideration mental and social factors, not just the symptoms a person may be expressing. Recent evidence shows that food plays an extremely important role in the development and prevention of specific mental health problems such as depression, ADHD, schizophrenia, and Alzheimer's disease. Nearly every chemical that controls the brain has been identified in the gastro-intestinal tract. Interesting? Scary? We really are what we eat... NUTRIENTS TO HELP IMPROVE MOOD: IRON: Lack of iron in the diet can leave us feeling tired and lethargic and increases the risk of anaemia. Include a good supply of iron-rich foods such as red meat, poultry, fish, tofu, lentils and pumpkin seeds. Avoid drinking tea with meals, and try to include a vitamin C-rich food (e.g. broccoli, oranges and strawberries) alongside meals to help increase the absorption of iron. OMEGA 3: Omega 3 from fish has been studied for its positive effects on mood and for lowering the risk of depression. The fish highest in omega 3 include salmon, sardines, mackerel and herring. SELENIUM: Too little selenium in the diet may leave us feeling depressed or low. Brazil nuts, legumes, lean meat, seafood, seeds and wholemeal bread are good sources of selenium.
VITAMIN D: More and more we are learning how crucial this vitamin is to our mental health and well-being. Our body is able to synthesise vitamin D from exposure to the sun, but for the majority of people living in Northern Europe this isn't always possible year-round. Only a few foods contain vitamin D, so it's good to include them in your regular diet: fatty fish such as salmon, tuna and mackerel, plus eggs and beef liver, are the richest sources. B VITAMINS: Lack of B vitamins can result in irritability, tiredness and feelings of low mood. The B vitamins are crucial to how energy is produced in the body and can be found in a wide variety of foods. Folic acid (folate) and vitamin B12 are particularly important for older adults in preventing mood disorders and dementias, and can be found in liver, green leafy vegetables, citrus fruits, broccoli and beans. TRYPTOPHAN: Although research is ongoing into the effects of this amino acid, it is known that tryptophan helps make serotonin (‘the happy hormone’), so including it in your diet is certainly a good idea. Foods rich in tryptophan include bananas, walnuts, brown rice, sunflower seeds and protein-rich animal foods such as turkey, eggs, chicken and fish. FOODS THAT CAN GIVE YOU A LOW: ALCOHOL: It might seem strange, but alcohol is a depressant and can lower your mood. SUGAR: Sugar and refined foods tend to cause an initial ‘high’ which we find pleasurable. However, that soon wears off as the body increases its insulin production, leaving you feeling tired and low. CAFFEINE: Although caffeine is known to give us energy bursts, it raises cortisol levels in the body (cortisol is known as the stress hormone). Best avoided if you are feeling under stress anyway. Don't underestimate the power of a few lifestyle changes, which can make all the difference to mood and anxiety. Exercise is well known for its stress-relieving abilities. It doesn't have to be high-intensity running or exercise classes if that isn't your thing.
Yoga, pilates, or even just taking a walk in the park can do immense good. Find an activity which you find relaxing - gardening, cooking, reading the paper - whatever it is, find your 'thing' and enjoy it! Lastly, for those of you who want to try something new: a technique known as 'earthing' or 'grounding', where quite literally a person takes time to reconnect with the Earth's surface electrons by walking barefoot outside. Advocates report a general feeling of well-being and even physiological changes, including reduced pain and stress and improved sleep. Sound a bit woo-woo? What have you got to lose - plus there is actual scientific research behind this: Diet and Mental Health (2015) Available at: http://www.mentalhealth.org.uk/help-information/mental-health-a-z/D/diet/ Food and Mood (2014) Available at: https://www.bda.uk.com/foodfacts/foodmood.pdf This is a beautiful recipe (adapted from the Ocado website) which I love for three reasons. 1. It is super quick, easy and tasty. 2. From a nutritional point of view, it's a winner. 3. People don't eat enough artichokes, which are, in my opinion, a 'super' food. Artichoke is a great source of vitamin K, vitamin C and folate, as well as the minerals calcium, magnesium and potassium. Plus it's full of fibre. Which we love. It's also full of antioxidants; in fact a study published by the American Society for Clinical Nutrition (http://ajcn.nutrition.org/content/84/1/95.abstract) concluded it has a higher antioxidant status than blueberries and dark chocolate! Artichoke also contains constituents which have liver-protective qualities. Which, let's be honest, at this time of year, when our bodies, immune systems and livers get a battering, can only be a good thing! It can increase the production of bile (okay, sounds gross but totally necessary), which helps speed up the transit of food through your digestive system, reducing bloating. In fact artichokes were used as a digestive aid in Egyptian times - and of course now we know exactly why.
It's also a prebiotic, which feeds the probiotics (or 'good' bacteria) that reside in your gut. Artichoke is also reported to be beneficial for a number of more serious conditions. Whilst those may warrant taking artichoke leaf extract (under the guidance of a doctor or qualified nutritional therapist), eating more artichokes is almost certainly going to be beneficial to health. The heart of the artichoke is eaten because it is softer and the most edible part of the plant. Whilst it would be best to buy and prepare your own artichokes, this is, well...hard work to be honest! Buying a jar of artichoke hearts is very acceptable. Make sure it is preserved in olive oil with no or little added salt. So hopefully I have persuaded you about why the artichoke is so great. Here is a wonderful way to serve it at a dinner party, or just smother it on toast or crackers for an incredible, health-boosting, taste-sensational snack. This recipe also contains raw garlic (with potential cholesterol-lowering, antibacterial, anti-fungal and blood pressure activity), virgin olive oil (with antioxidant, vasodilating and antiplatelet properties, and potentially cholesterol-lowering) and fresh basil (with antibacterial, anti-inflammatory and cardiovascular health benefits). So really this is a superfood dip! Serves 2-4. Takes 5 minutes maximum. In a food processor, blend together all the ingredients. Whizz until smooth. Add a squeeze of lemon juice and top with a little zest before serving with crudités or wholewheat breadsticks. ENJOY! Yes, autumn (aka snot season!) is most definitely here. And whilst it's lovely to be having wood fires again and wrapping up in hats and scarves, it does bring the inevitable sniffly noses, feeling run down and days off school or work.....big sighs..... After completing my studies and learning a little more about the importance of supplementing in certain situations, I regularly make sure I supplement my children's already pretty healthy diet.
I see this as strengthening their own natural immune defences and keeping them in optimal health throughout the year. Nothing in this area is totally black and white, and it really is up to the informed choice of the parent, but when asked why I choose to supplement, I would always say: which is better - acting preventatively, or waiting till the child is unwell and then dosing up on Calpol or antibiotics? Most people now know the risks of antibiotics, which should really only be given in an emergency situation and not for many seasonal illnesses (remember, antibiotics kill bacterial infections, NOT viral ones!). But recently there has been much in the mainstream media about the overuse of Calpol with children. It should always be remembered that Calpol, whilst suitable for children, is still paracetamol, a pharmaceutical drug which a small body has to process and detoxify. Paediatricians are now warning that overdoing paracetamol or giving high doses increases the risk of developing asthma, as well as kidney, heart and liver damage. It should only be given to a child in pain and discomfort or with a very high fever, and not for general malaise or a slight temperature. Whilst given with the best intentions, parents can often do more harm than good. So, to avoid these situations, I try to make sure that a. my children's diet is good (sometimes easier said than done!) and b. I give them supplements for areas where I feel they may be deficient. There is much evidence and advice now given by the NHS about the importance of supplementing children under 5 living in the UK with vitamins D, A and C, all of which play a crucial role in our immune defence. I choose to give these vitamins to my children with an omega 3 fish oil high in DHA (Lamberts) to support cognitive function plus brain and eye development - again, very well researched and documented health benefits. My children don't eat a lot of oily fish, as much as I try (!)
so I feel happier knowing they are getting a source from elsewhere to support their continuing development: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3738999/ I also feel a probiotic is justified in the winter months, and there is some research on the reduction of common infectious diseases in children who are given probiotics: https://www.ncbi.nlm.nih.gov/pubmed/20729255 And if you have had to give your child antibiotics, it is a good idea to look into building up their 'good' bacteria reserves again with a good-quality probiotic supplement if possible. I like the brand Optibac, as all their probiotics are fully backed up by current research. Lastly, I use Sambucol, an elderberry extract, during winter also. Elderberry is a traditional method of warding off colds and flu. https://www.ncbi.nlm.nih.gov/pubmed/27023596 And recent studies have also found it effective in the treatment of flu, cutting the severity and duration of the illness if taken soon after symptoms appear: http://www.webmd.com/cold-and-flu/news/20031222/elderberry-fights-flu-symptoms And most excitingly, no side effects. Always great to have in your bathroom cabinet, I say! For years we have been told that skipping breakfast is unacceptable. Now intermittent fasting is all the rage, so, confusingly, it seems en vogue to skip breakfast again. So what's the answer? Truth be told, like everything in nutrition, it depends on the person. I personally believe, backed up by scientific studies, that for most people skipping breakfast leads to poor food choices and increased calorie intake throughout the day. (Read here: http://www.ncbi.nlm.nih.gov/pubmed/15699226 & http://www.ncbi.nlm.nih.gov/pubmed/23672851 & http://www.ncbi.nlm.nih.gov/pubmed/11836452?dopt=Abstract) Another common mistake is choosing overly sweet, sugary breakfasts over healthy, filling ones. A lot of common breakfast choices are actually akin to eating a dessert first thing in the morning.
Packed full of sugar, they are going to drive up insulin and lead to cravings mid-morning. If you are trying to lose or even just maintain weight, this isn't going to help. The first meal you eat in a day is most likely going to set your metabolic intentions for the day. If you start off eating rubbish, most likely you will continue this throughout the morning and afternoon. Breakfast is also the meal you are least likely to want to change. If you are someone who DOES eat breakfast regularly and within the first hour of waking, you will most likely reach for something convenient and habitual - meaning that humans, as creatures of habit, find it difficult to think about eating something new first thing. Last week on my Instagram feed (instagram.com/gingerandpicklesnutrition) I recorded seven days of my breakfasts to help people with variety and to encourage them to try something new, quick and easy. They weren't all perfect - there were a couple of cheat days - but I had something different every day and tried to make each breakfast as protein-rich as possible. Protein-rich breakfasts have been shown to reduce the hunger hormone ghrelin and so keep you fuller for longer, which means less snacking between meals. Winner. Here are some of the ideas you could try at home. Eggs are my go-to breakfast most days, or else a smoothie, but I wanted to try something different every day to show that variety is possible, easy and delicious.
<urn:uuid:6536813c-3cb8-475f-b091-39464c687bee>
CC-MAIN-2021-43
https://www.gingerandpicklesnutrition.co.uk/blog/category/nutritional-therapist
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00510.warc.gz
en
0.951619
2,974
3.09375
3
Outlining the key players and their stakes in the region, Clare Lemlich takes an anti-imperialist look at what’s behind Russia’s integration push in neighboring Belarus. During the last few months of 2019, Russian leader Vladimir Putin and his Belarusian counterpart Alexander Lukashenko met several times to discuss closer integration between the two countries. No agreement has yet been reached, but the negotiations sparked long-standing fears that Belarus will be subsumed as a province of greater Russia. Since independence from the Soviet Union in 1991, Belarus has traditionally been an ally of Russia and heavily dependent on its powerful neighbor for most of its trade. As geopolitical alliances and imperial interests have shifted over the last decade, Belarus has sought trade partnerships beyond Russia, especially with China. Chinese investment in Belarus has grown 200-fold in the last decade, according to Belarusian state media. Although Belarus has a strained relationship with the West and has faced sanctions over its human rights abuses, the country has recently oriented more toward the European Union. Last year Belarus and the US reopened diplomatic relations after a long period of hostility between the two. With Belarus eyeing China and the West, these latest integration talks are Russia’s attempt to pressure Belarus back into its fold. “The last dictatorship” Nearly any English-language article about Belarus will refer to the country as “Europe’s last dictatorship.” Lukashenko has ruled the country since 1994, elections are not democratic, and he oversees an extremely authoritarian state. Media and labor unions are state-controlled, opposition movements are repressed, and the right to protest is seriously curbed. Surveillance is rampant, and the country has one of the highest police-to-citizen ratios in the world. Most former Soviet countries went through complete economic restructuring at the end of the Cold War.
In Russia and Ukraine, which border Belarus to the east and south, post-communist “shock therapy” led to massive economic crisis and the rise of an oligarchy. In places like Lithuania, Latvia, and Poland, which border Belarus to the west and north, the economies have recovered and grown with Western investment and EU membership. But Belarus did not pursue the same neoliberal path as its neighbors and has retained many Soviet economic and social vestiges. In fact, Lukashenko was the only politician in Belarus to vote against the dissolution of the Soviet Union in 1991. Soviet symbolism is everywhere. There is still a Lenin statue in nearly every town square across Belarus and streets are still named after Marx and Engels, revolution, and internationalism. This is not to say Belarus observes any kind of socialism. For instance, Lukashenko recently introduced a regressive “social parasite” tax that fines people for being unemployed. Popular hatred of the tax forced Lukashenko to backtrack, but this gives a sense of the kind of anti-worker policies pushed from the top in Belarus. Much like the Soviet Union before it, Belarus is a repressive and corrupt state-managed capitalist country, whose ruling class uses a veneer of communist rhetoric. In order to prop up this kind of economy, Lukashenko has relied on a combination of loans and cheap energy from Russia. But things have shifted in Belarus over the last few years. In limited ways, the country seems to have softened some of its domestic authoritarianism. The social atmosphere in Belarus, especially in metropolitan areas, is freer and more open than it was even two years ago. Lukashenko has historically clamped down on Belarusian cultural activism and enforced a strong Russian language and culture agenda. But in an apparent attempt to contain opposition and as part of his moves away from Russia, Lukashenko has made some concessions on this front.
Crucially, Belarus has relaxed visa and travel restrictions, and is pursuing new international trade partnerships. This troubles Russia. Energy prices, term limits, and China The integration negotiations have included joint tax, customs, and trade plans, and the adoption of a single currency across the two countries. One of the major sticking points is energy. Discounted Russian crude oil props up domestic Belarusian markets and the economy relies on export revenue from processing and transporting the oil to the rest of Europe. Russia has gradually rolled back energy discounts, but Belarus still pays as low as half of what Western European countries do for Russian gas. Putin is now threatening Belarus with full-price oil by 2025 and says that in order to keep getting subsidized energy, Belarus needs to agree to integration with Russia. Lukashenko says Belarus should be able to retain access to cheap energy in exchange for existing military and strategic collaboration with Russia. Putin is also pushing integration for domestic reasons. His term as Russian president expires in 2024, but if Belarus and Russia integrate further or even form one nation, it would trigger constitutional changes that could both allow Putin to stay in power longer and run for president again in the future. Putin has maneuvered around term limits before. He once switched jobs with Prime Minister Dmitry Medvedev for several years so that he could run for president again. This was met with protests in Russia and Putin is looking for a way to keep the presidency with less backlash this time. It’s a difficult position for Belarus, a country of just under 10 million people with a per capita GDP half the size of Russia and China’s, a seventh of the EU’s, and a tenth of the US’s. Although Lukashenko has pushed for alternative geopolitical partnerships the last few years, the country is still heavily reliant on Russia. 
Recently one Belarusian military official claimed the country would consider participating in NATO’s 2020 “Defender Europe” project — a suite of US-led military exercises that will take place across the continent, which will be the third largest of its kind since the Cold War. Belarus also announced plans for a $500 million Chinese debt relief loan that was originally slated to come from Russia. This is in addition to the $15 billion line of credit China extended to Belarus and the potential $5.5 billion China says it plans to invest in a new industrial park near the capital Minsk called Great Stone. Even with these international overtures, without cheap Russian energy and subsidies, Belarus would likely face an economic crisis. This is why, at the time of writing, the integration negotiations stand at an impasse. Despite Belarus’ recent history of violently repressing protests, several thousand people have demonstrated against integration plans over the last month. Many Belarusians consider integration to be a “soft annexation” akin to Russia’s 2014 takeover of Crimea (formerly part of Ukraine). Their fears are legitimate. Many of Russia’s former provinces and satellite states have joined NATO and the EU over the years and since taking power, Putin has fought to keep a ring of buffer states and allies around Russia as a bulwark against the West. He has clear aspirations to restore Russia as an empire and needs consistent subservience from countries like Belarus to do that. Russia would rather win loyalty through geopolitical negotiations and economic pressure, but Putin has been prepared before to invade (as in the south-western border states of Chechnya in 1998 and Georgia in 2008) or annex (as in Ukraine in 2014). While Belarus matters a great deal to Putin, the Belarusian integration talks haven’t yet grabbed US headlines in the same way that comparable imperial tensions in Ukraine did in the past. 
This is partly because of the US’s generally shambolic foreign policy since Trump was elected. But it is also because Belarus plays a less important (although not insignificant) role in US imperialism compared with Ukraine. All eyes were on Ukraine in 2014 because Crimea provides naval access to Black Sea ports. In 2008 Russia won the war in Georgia against pro-West forces because it had access to these ports, so regaining Crimea in 2014 was a huge boon to Russia’s regional power. Also, much of Europe’s energy flows from Russia through Ukraine. At the time of the Crimean annexation and ensuing civil war on the Russia-Ukraine border, disrupting Russian energy exports through Ukraine would have thrown Europe back into recession, which would have had repercussions for the US. The US is still keeping close tabs on these negotiations. Secretary of State Mike Pompeo was set to visit Minsk this month during a trip to Europe and the Middle East, but this was postponed due to the recent attack on the US embassy in Iraq. The EU is starting to panic because around 10% of Europe’s oil and 6% of its gas comes from Russia via Belarus. As the integration talks stall, Russian energy companies are now diverting crude oil exports away from Belarus. The US is, of course, vexed that Russia is trying to reassert full dominion over Belarus. But both Russia and the West are especially concerned that China is encroaching, particularly since Belarus shares a border with the EU. Whether integration happens, and exactly what that will look like, remains to be seen. Whatever the result, ordinary Belarusians are caught between the deepening imperial rivalries of Russia, the West, and China. Global competition has intensified in the last decade and this polarization means people feel like there are no political alternatives to the major imperial blocs. 
The opposition groups in Belarus are mostly united against Russian integration, but the scale of state repression makes it extremely difficult for them to develop a viable political alternative to Lukashenko. In general, opposition to Russian influence in Belarus also goes hand-in-hand with a set of pro-EU politics and an uncritical stance on Western imperialism. This is understandable. Belarusians have lived under a Russian-backed dictatorship for decades, which makes the EU look like an appealing alternative. Popular support among Belarusians is always couched in terms of the EU’s original rhetoric: human rights oversight, economic opportunities, and freedom of movement — not the neoliberal loan packages, vicious austerity, and racist immigration controls that the EU is better known for now. No ordinary Belarusian stands to gain much from the geostrategic jockeying of the international ruling classes over oil prices and spheres of influence. A local Belarusian campaign rejecting all imperialisms, supported by an international solidarity movement also united against all imperialisms, could change the situation in Belarus and beyond. But until that kind of movement exists, integration with one imperial bloc or another seems inevitable in Belarus for the time being.
The Very Low Frequency (VLF) band is located in the frequency range between 3KHz and 30KHz. This band has the unique characteristic of having a portion fall within the audio frequency range of our ears. What’s more, the bulk of a lightning strike’s energy is deposited between 2KHz and 10KHz. These two points form the basis for everything which follows.

So how does one go about building a VLF receiver anyway? You might be surprised to find out that it is not very difficult to build one. In its simplest form (and not so simple forms) a VLF receiver is nothing more than an audio amplifier attached to an antenna. One of the most popular uses for a VLF receiver is for listening to lightning strikes from around the world, and the interesting effects that this activity has on our atmosphere. The snap, crackle, and pop sounds from lightning activity which one can hear on a VLF receiver, or AM and short wave radio for that matter, are called atmospherics, or spherics for short. These are the most common sounds heard on the VLF band and can be heard 24 hours a day. Spherics show up as wide band bursts when plotted on a spectrogram. This is because a lightning strike is not a narrow band event as is, say, a military VLF transmitter like NAA in Cutler, Maine at 24KHz. These types of wide band signals are characteristic of natural EMF activity from Earth as well as our solar system and beyond. So it shouldn’t surprise anyone that another name for this hobby is “natural radio”.

I have been interested in natural radio for a good ten years now. Over the years I have built a number of VLF receivers of the E-field and loop antenna type. As a matter of fact the Schumann resonance receiver which I use, found here, can easily be modified to receive the entire VLF band. I’m currently using a homemade receiver for this project which I detail below. Another option is the VLF-3 from the Inspire Project.
The Inspire Project is a NASA-sponsored (among others) project involved with interactive NASA space physics ionosphere radio experiments. They are doing very interesting stuff. The VLF-3 kit which they provide is of very good quality and easy enough for just about anyone to build in a few quiet nights in the radio shack. I recommend this receiver for anyone interested in getting into this hobby. That is if you don’t want to build your own receiver.

Ok, you might be asking yourself what’s the big deal about popping sounds from lightning in the VLF band anyway. Yes I agree, after a short while spherics can get very boring. Thankfully, there is more to natural radio than just spherics. During dusk, dawn and the intervening night hours the ionosphere goes through a transformation that has a profound effect on spheric activity. At dusk the D layer (lowest layer) of the ionosphere fades away, leaving the higher E and F layers only. This is due to the lack of ionizing radiation from the Sun during the night hours. This phenomenon can easily be seen on a SID receiver plot as a sharp rise in the received signal strength of a monitored VLF transmitter at dusk and the subsequent signal drop at dawn. During the daylight hours spherics can travel upwards of 2000 to 3000 kilometers from the source of the lightning strike. These spherics reach the receiver via the wave guide created by the D layer and the Earth’s surface. At night, though, it’s presumably the much higher E and F layers that are responsible for the sky wave component of a VLF signal. This allows VLF signals to travel considerably further at night. During the night hours, spherics from much further than 2000 or 3000 kilometers can be heard! As a matter of fact during night hours it is possible to hear spherics from clear across the planet. These types of spherics are called tweeks. Tweeks get their name from the distinctive “tweek” sound they produce.
They can easily be distinguished from normal spherics just by their ringing sound alone. On a spectrogram tweeks generate a small tail at around 2KHz and its harmonics. It is this “tail” that gives tweeks their unique sound. Here’s an example of a tweek with a very long tail captured with my setup. Tweeks are spherics that have traveled for many thousands of kilometers through the Earth-ionosphere wave guide, and because of this they undergo dispersion. Dispersion is the process of higher frequencies arriving at the receiver slightly sooner than lower frequencies. The lower frequencies only lag by a few hundredths of a second, but this is enough to change how a spheric sounds at great distances. Dispersion is associated with the cut-off frequency of a wave guide. All wave guides have a cut-off frequency, and the Earth-ionosphere wave guide’s cut-off frequency is around 1.7KHz. This is why we see tweek tails at this base frequency range. I say “range” because the Earth’s wave guide varies with the reflecting height of the ionosphere. There is a nice paper that goes into more detail about tweeks, dispersion and cut-off frequencies in the Earth-ionosphere wave guide with formulas and all, and it can be found here. Dispersion plays a very important role in natural radio and there is another phenomenon that takes dispersion to the extreme: whistlers. Whistlers are less common than spherics and tweeks, but are truly worth the effort of detecting them. Whistlers are associated with intense lightning strikes, and research has shown that they might be linked to upward electrical discharges from thunderstorm tops. These types of lightning are a hot subject in the scientific community today, so VLF receivers are on the cutting edge of science! The theory goes (highly simplified) that energy from intense lightning strikes can get coupled into the magnetosphere through the ionosphere, and this energy then becomes trapped inside magnetic field lines within the magnetosphere.
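The waveguide cut-off mentioned above (around 1.7KHz) can be checked with the standard parallel-plate approximation f_c = c / (2h), where h is the ionospheric reflection height. A minimal sketch (the reflection heights below are assumed typical values, not measurements from this receiver):

```python
# Estimate the Earth-ionosphere waveguide cut-off frequency using the
# parallel-plate relation f_c = c / (2 * h). The reflection heights are
# assumed typical values, not measurements from this receiver.
C = 299_792_458.0  # speed of light, m/s

def cutoff_frequency(height_m: float) -> float:
    """Cut-off frequency in Hz for a given ionospheric reflection height."""
    return C / (2.0 * height_m)

for h_km in (70, 85, 90):  # rough daytime vs. nighttime reflection heights
    f_c = cutoff_frequency(h_km * 1e3)
    print(f"h = {h_km} km -> f_c = {f_c / 1e3:.2f} kHz")
```

A nighttime reflection height near 90 km gives roughly the 1.7KHz tail frequency seen on tweek spectrograms, which is also why the tail frequency drifts as the reflecting height changes.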
As the trapped energy/electrons travel up the magnetic field line they are guided back down to the magnetic conjugate point in the opposite hemisphere. A VLF receiver at the opposite conjugate point will hear a steady decreasing tone that lasts for about one second. The whistler energy has literally traveled thousands of kilometers into space, many times further than the circumference of the Earth, to generate such an extreme dispersion in the signal. It’s also possible for the whistler to be reflected again back up the magnetic field line by the ionosphere and be heard at the originating conjugate point as a whistler echo. These whistler echoes normally last two to three seconds! This is the link between natural radio and space physics and the reason why I feel that natural radio has a place in this astronomy web site. If you would like to calculate the magnetic conjugate point for your location go here: IGRF/DGRF Model Parameters and Corrected Geomagnetic Coordinates at SPDF.

Now to the project itself! After initial tests with the homemade receiver in the radio shack I realized that for serious natural radio monitoring the receiver must be placed outside and away from a home’s electrical wiring. The power grid 60Hz hum and its harmonics are just horrendous inside a home. Therefore, I used a military ammo box for the outdoor enclosure to house the receiver and a 12 volt rechargeable battery for powering the receiver. The ammo box has a rubber seal around the cover that does a great job at keeping humidity out. I used a long run of CAT5 network cable between the house and the receiver. The receiver is located about 500 feet away from the house. Here’s a Google Earth view of where I installed my receiver. The yellow dot is where the receiver is located and the red line is the feed line. On either end of the cable run I installed a 1:1 isolation transformer to isolate the receiver from the house and to balance the feed line.
This is an important step and shouldn’t be omitted! After all, the whole point of putting the receiver outside is to isolate it from the house’s electrical systems. I used an 8 foot CB steel whip antenna bolted to the ammo box for the antenna and two 8 foot copper ground rods for the ground. An E-field receiver needs a good ground for proper operation so don’t cut corners here. A good ground system can cut down on the hum level, so double your efforts here. I added desiccant inside the ammo box to keep the setup as dry as possible. Installing a receiver outside adds to the complexity of the installation, but for natural radio there isn’t much of an option. And remember, when it comes to natural radio it’s all about location, location, location. So search for the location that has the lowest hum level possible. Here is the schematic for the outdoor receiver and the indoor gain and filter stage. All resistors are metal-film. I had to add a 1.8Kohm resistor in series with the antenna as well as looping the antenna cable around a ferrite bar to correct an “intermodulation”-like issue I was having with a nearby broadcast FM station. In the house I have the second stage of gain and filtering built around a TL071. The gain of this stage is between one and seventeen, which is set to give a total gain of x1000 whenever possible. Low pass and high pass filtering is important when designing a natural radio receiver because the goal is to have a full sounding receiver. Therefore the frequency response has to be tailored via filtering to achieve this. I would like to thank Paul Nicholson for his assistance in adjusting the parameters of these circuits. So what does the VLF band look like during a quiet day? Below is a spectrogram of what one can expect from a typical VLF receiver. In this spectrogram all the vertical lines are spherics and tweeks. The waterfall scroll interval was set a little too slowly for the tweeks to be visually detectable in this plot.
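The waterfall display described above is, at its core, a sequence of short spectra computed over consecutive windows of audio samples. A minimal pure-Python sketch of the idea (real software such as Spectrum Lab uses an efficient FFT; the naive DFT, sample rate, and window size here are illustrative only):

```python
# Minimal waterfall/spectrogram sketch: split audio into windows and take
# a magnitude spectrum of each window. A naive DFT is used for clarity;
# real software uses an FFT. Sample rate, window size, and the 5 kHz test
# tone are illustrative values only.
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum (first half of the bins) of one window."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def spectrogram(samples, window=64):
    """One magnitude spectrum per consecutive window: the rows of a waterfall."""
    return [dft_magnitudes(samples[i:i + window])
            for i in range(0, len(samples) - window + 1, window)]

# Synthetic narrowband signal, like a VLF transmitter line on the waterfall.
rate, tone = 32_000, 5_000  # Hz
samples = [math.sin(2 * math.pi * tone * t / rate) for t in range(256)]
rows = spectrogram(samples)
peak_bin = max(range(len(rows[0])), key=lambda k: rows[0][k])
print("peak at", peak_bin * rate / 64, "Hz")
```

A wideband spheric would light up every bin of one row at once, while a narrowband transmitter like the tone above stays in a single bin row after row.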
At 11.9KHz, 12.6KHz, and 14.8KHz there is a faint train of pulses from the Russian Alpha navigation system. The audio software I use is Spectrum Lab by DL4YHF and is a must-have for natural radio! Don’t worry, it’s free. Even with the receiver outside you need additional filtering of the 60Hz signal and its harmonics or you won’t see or hear much of anything except the power grid! This is of course unless you are fortunate enough to live in the middle of nowhere away from all civilization.

One and Two Hop Whistlers

Here are three spectrograms of one hop whistlers captured with my setup on the night of March 7th 2009. This is what I was after when I set out to build a natural radio receiver, so I’m very excited that I finally have a receiver that is sensitive and quiet enough to capture whistlers in Florida! In many cases the source spheric or tweek which generates the whistler can also be seen. For example the next spectrogram shows a tweek about half a second before the whistler. The dispersion time of the whistler categorizes it as a one hop whistler, meaning that you would expect to possibly see a tweek preceding it if propagation conditions are favorable. Therefore, the tweek marked “source” originated in the South Pacific off the coast of Chile near the Antarctic Circle, and it shows the dispersion one would expect to see for a spheric which has propagated via the Earth-ionosphere wave guide for thousands of kilometers. Some of that same energy was ducted into the magnetosphere and arrived at my location via a magnetic field line(s), and this signal shows dispersion consistent with this mode of propagation (whistler mode). There’s a bit of ambiguity as to which of the two tweeks in the above spectrogram caused the whistler (if either…). In chapter 4 of Robert A. Helliwell’s book “Whistlers and Related Ionospheric Phenomena“, he details the steps used for identifying the sources of whistlers.
The first method, and the most used when available, is to compare several whistlers in the same run. If there are at least three whistlers to compare from the same run, high reliability can be achieved just by using this method. Fortunately I had a number of other whistlers from this same run which I superimposed on top of each other in Photoshop. I then aligned the whistlers and looked for causative impulses which aligned to within 1mm of each other. You’re looking for impulses within a second of the whistlers here (one hop whistler). In the case of the above whistler, the strong tweek to the left of the marked “source” tweek was just outside of the threshold while the weaker marked tweek fell within the 1mm range called for in the book. Therefore there is a good chance that the above marked tweek was the source of the whistler. And here are three recordings of whistlers along with their spectrograms taken with my setup on March 18th 2009 (you might need to right click and then click on Save Target As): When hunting for whistlers one should look for strong lightning activity some hundreds of miles away from your location as well as near your magnetic conjugate point. In the case of Florida, its conjugate point is off the coast of Chile. The orange circle on this map roughly shows Florida’s conjugate point. Lightning activity near the receiver is important because these nearby storms can at times generate whistler echoes. Here are a few examples of two hop echoes taken on the morning of May 23rd 2009. On the second example there are actually four different echoes which can be heard (you might need to right click and then click on Save Target As): With two hop echoes the source spheric which causes the echo is almost always identifiable. In the above case the source spheric is one of the strong strokes just before the echo. With some practice one can predict which spherics will generate echoes!
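The falling-tone behavior described in these recordings can be approximated with the classic Eckersley law, t = D / sqrt(f): higher frequencies arrive first, and the dispersion constant D grows with path length. A sketch with an assumed, typical-order value of D (not one measured from these recordings):

```python
# Idealized whistler dispersion sketch using the Eckersley law t = D / sqrt(f).
# Higher frequencies arrive first, producing the falling tone heard on the
# recordings. The dispersion constant D is an assumed illustrative value.
import math

def arrival_time(freq_hz: float, dispersion: float) -> float:
    """Seconds after the causative stroke at which frequency f arrives."""
    return dispersion / math.sqrt(freq_hz)

D = 40.0  # s * sqrt(Hz), illustrative one-hop value
for f in (10_000, 5_000, 2_000):
    print(f"{f / 1000:.0f} kHz arrives {arrival_time(f, D):.2f} s after the stroke")
```

With this D the tone sweeps downward over roughly half a second to a second, consistent with the one-hop whistlers above; a two-hop echo, having traveled the path twice, shows roughly double the dispersion.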
The dispersion times for one, two and subsequent whistler echoes depend on latitude. For example, whistlers in Florida will always sound shorter than, say, whistlers in the UK. Regardless, single and echo whistlers can always be differentiated easily. Here is a comparison of a short whistler and a two hop echo. The dispersion time difference between the two is clearly evident. Both of these examples were from the same run on the 23rd of May 2009. Here are a few examples of rare whistler trains from Florida. This activity was recorded during a strong geomagnetic storm on March 3rd 2012. About twelve echoes can be counted on the spectrograms. Natural radio activity usually increases during geomagnetic activity. During solar storms whistler as well as chorus activity tends to increase greatly, particularly in the higher latitudes. Therefore, one should also keep track of solar weather when hunting for whistlers and the like.

Spherics and the Earth-ionosphere Cavity Resonance

I also operate an ELF receiver which I use to detect the earth-ionosphere cavity resonance or, as it’s better known, the Schumann resonance. Spherics and the Schumann resonance are intimately related. It is the hundred or so lightning strikes per second occurring around the globe which energize the earth-ionosphere cavity, which in turn causes it to resonate like a tuning fork at its base resonant frequency. One can calculate the earth-ionosphere cavity resonance (highly simplified) by taking the speed of light (300,000 km/s) and dividing it by the circumference of the planet (40,000 km). The result of the division is 7.5Hz. The actual base mode of the Schumann resonance is at around 7.8Hz, give or take 0.3Hz. Below is a spectrogram of the VLF range with a spectrogram slice of the ELF range superimposed showing the first three modes of the Schumann resonances. They are the three ripples below the 20Hz marker.
The red lines point to where the Schumann resonance is located in relation to the towering wide band spheric activity. I find it fascinating that something like spherics, which sound like little inconspicuous pops and crackles on a VLF receiver, are actually pumping tremendous amounts of energy into our atmosphere and causing our very biosphere to ring at its resonant frequency. Just think about this for a minute. The fact that technology has reached a point where one can detect these phenomena with a simple electronic receiver, a computer and a soundcard is truly amazing. It is possible to use Spectrum Lab by DL4YHF to stream a live VLF feed so one can see the VLF signals as well as hear them. Instructions on how to accomplish this using Spectrum Lab can be found here. My receiver along with a number of other natural radio receivers can be accessed here: http://abelian.org/vlf/. Have fun, and please email me if you have any questions or comments.
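For reference, the simplified cavity-resonance arithmetic given earlier reduces to a couple of lines:

```python
# Reproduce the article's simplified Schumann-resonance estimate:
# fundamental ~= speed of light / circumference of the Earth,
# using the same rounded values as the text.
C_KM_S = 300_000.0           # speed of light in km/s (rounded)
CIRCUMFERENCE_KM = 40_000.0  # Earth's circumference in km (rounded)

fundamental = C_KM_S / CIRCUMFERENCE_KM
print(f"idealized fundamental: {fundamental:.1f} Hz")  # 7.5 Hz
```

The observed base mode sits closer to 7.8Hz, as noted above, since the real cavity is not the ideal resonator assumed by this back-of-the-envelope division.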
[sc_embed_player fileurl=”http://learn-biblical-hebrew.com/wp-content/uploads/2016/02/blessing-b4-torah-study.mp3″]Blessing to God before study (IMPORTANT: if you are not yet familiar with how we transcribe Hebrew sounds and words, you can download (or read online) the reference guide.) The most common type of Hebrew prepositions are those that stand alone, i.e., are not attached (e.g., as a prefix). These are called independent prepositions. All prepositions in English, for example, are independent – on, with, below, above, over, between, and so forth. As you have already learned, Hebrew also has three inseparable prepositions which are written as prefixes. The prototypes for these prepositions are: בְּ ,לְ, and כְּ. In this lesson we will learn five common, independent prepositions. As always, we begin by learning to identify the prepositions by how they sound. The five prepositions are vocalized as follows:
- /ahl/ – rhymes with tall, fall.
- /beyn/, /veyn/ – rhymes with pain, rain.
- /min/ – rhymes with sin, tin. More rarely rhymes with /een/ as in seen, preen.
- /el/ – rhymes with ‘bell’, ‘tell’ and more rarely with /ale/, /pail/.
- /eem/ – rhymes with seem, team.
[sc_embed_player fileurl=”http://learn-biblical-hebrew.com/wp-content/uploads/2016/03/2.3-independent-prepositions-1.mp3″](see above, 1-5) Now, listen for these prepositions in the following recordings: [sc_embed_player fileurl=”http://learn-biblical-hebrew.com/wp-content/uploads/2016/02/gen1v4.mp3″]/beyn/, /veyn/ [sc_embed_player fileurl=”http://learn-biblical-hebrew.com/wp-content/uploads/2016/02/gen1v9.mp3″]/el/ (HINT: Listen for /el·maqom/) Next, try as best you can the following exercise: Listen carefully to the next recording for each of these independent prepositions. Which of the 5 prepositions were you able to pick out (HINT: only two are present in the recording)?
answer((/ahl/ 2, /beyn/ 2, /el/ 0, /min/ 0, /eem/ 0)) The spelling of these new prepositions introduces 3 new Hebrew consonants, but no new vowel pointings. The consonants are:
- Nun נ (and final Nun, ן) – makes the /n/ sound.
- Mem מ (and final Mem, ם) – makes the /m/ sound.((remember: when a Hebrew consonant has a final form, its pronunciation is identical to its normal form.))
- Ayin ע – silent. Like the Aleph consonant, it serves only to vocalize the vowels.
Now that you’ve learned these new letters, here is how the prepositions are spelled and pronounced (read from right to left): עַל בֵּין בֵין מִן אֶל עִם Search the following verse (Exodus 9:22) for the עַל preposition. How many can you find?((4)) וַיֹּאמֶר יְהוָה אֶל־מֹשֶׁה נְטֵה אֶת־יָדְךָ עַל־הַשָּׁמַיִם וִיהִי בָרָד בְּכָל־אֶרֶץ מִצְרָ֑יִם עַל־הָאָדָם וְעַל־הַבְּהֵמָה וְעַ֛ל כָּל־עֵשֶׂב הַשָּׂדֶה בְּאֶרֶץ מִצְרָיִם Find both spellings of the בֵּין preposition. וַיַּ֧רְא אֱלֹהִ֛ים אֶת־הָאוֹר כִּי־ט֑וֹב וַיַּבְדֵּל אֱלֹהִים בֵּין הָאוֹר וּבֵין הַחֹשֶׁךְ
Now, find all of the instances of מּן־ in this verse, Genesis 2:9: וַיַּצְמַח יְהוָה אֱלֹהִים מִן־הָאֲדָמָה כָּל־עֵ֛ץ נֶחְמָד לְמַרְאֶה וְטוֹב לְמַאֲכָ֑ל וְעֵץ הַחַיִּים בְּתוֹךְ הַגָּן וְעֵץ הַדַּעַת טוֹב וָרָע In the verses below, what independent prepositions can you find (HINT: there are three of the five): וְהָאָרֶץ הָיְתָה תֹהוּ וָבֹהוּ וְחֹשֶׁךְ עַל־פְּנֵי תְה֑וֹם וְרוּחַ אֱלֹהִים מְרַחֶפֶת עַל־פְּנֵי הַמָּיִם׃ ♦וַיֹּאמֶר אֱלֹהִים יְהִי א֑וֹר וַיְהִי־אוֹר♦ וַיַּ֧רְא אֱלֹהִ֛ים אֶת־הָאוֹר כִּי־ט֑וֹב וַיַּבְדֵּל אֱלֹהִים בֵּין הָאוֹר וּבֵין הַחֹשֶׁךְ
עַל – This preposition is also very common, there being almost 900 occurrences in the Hebrew Bible. It is most often translated as ‘over’, ‘above’, ‘against’, and ‘on’. In many cases, it is joined to its object by a maqqef; for example, in the two-word phrase below, the preposition is connected to its object (the Hebrew word פְּנֵי) using the maqqef (colored red). Remember, there is no semantic difference between the Hebrew maqqef and the English hyphen.
בֵּין , בֵין – Usually translated as between or among, it expresses the notion of an “interval,” or “space,” between two objects and occurs 172 times in 165 verses. For example, in Genesis 15:17 it is used to express how the fire walks between the pieces of the dismembered animals of Abraham’s sacrifice. In Exodus 13:9, the preposition describes the space between one’s eyes, or between two walls (Isaiah 22:11), and so forth. When used to indicate a space separating two objects, the preposition is repeated, e.g. the space or distance between you and between your God (Isaiah 59:2). For example, Genesis 1:4 illustrates how this preposition is used.
And God saw that the light was good; and God separated the light from the darkness((NRS, NIV, RSV translations)) and-saw God the-light that-good and-separated God between the-light and-between the-darkness((Mechanical, word-for-word translation))
מּן־ – This preposition is the ninth most frequently used word in the Hebrew Bible and is most frequently translated as “from”. Note that the other meanings are all related to (or are synonymous with) ‘from’. Here they are:
- “from” – With verbs of motion or separation; to go from, or to be away from, i.e. without; or away from in relation to some other spot or direction. This is often translated as “out of” when referring to a land or nation. For example, “out of Egypt” is the usual translation of “min-mitz·ra·yeem”, literally “from Egypt”.
- “on account of” – as in “on account of our transgressions,” i.e., arising from our transgressions.
- “time from when” – In which the preposition expresses the time from when something occurred.
- Comparison – usually “more than”, “greater than”, “less than”, “bigger than”, etc., and sometimes “too much for”, “too great for”. Translated literally into English, a comparison of Bob’s house with Judy’s larger house would read: Judy’s house is bigger from Bob’s.
אֶל – This preposition expresses the idea of motion toward someone or something((Also, see the discussion of the Lamed (לְ) preposition in the previous lesson.)), i.e., directional motion. As such, it occurs in a wide variety of contexts expressing motion, attitude, direction, or location. However, אֶל is also often translated as “into”, for example as in “into the ark” (Gen 6:18) or “into His heart” (Genesis 6:6). The preposition has also been translated as “against.” For example, although motion toward him is explicit, the typical translation of Genesis 4:8 is that Cain “rose up against Abel”((RSV, NRS, NAS, NAU, etc.)). Here אֶל no doubt retains something of the original sense of directional motion.
עִם – With, beside, by, among, accompanying, from among. The preposition expresses the concept of inclusiveness, togetherness, company.

If you feel comfortable with this lesson, move on to the lesson summarizing the contents of Lessons 1 & 2. Now, go and study!
Start with Part I to learn about the basic concepts of vasopressors. Let’s discuss two of the most common inopressors in the ICU: norepinephrine and epinephrine.

Widely referred to in the US by the trade name Levophed, and in British-descent nations as “noradrenaline,” norepinephrine has become our first-line pressor for most routine use. The history of norepinephrine has been a tumultuous one. Several decades ago, it was notorious for its poor outcomes—particularly the prevalence of distal ischemia, such as renal failure and toes falling off—which eventually earned it the moniker, “Leave ’em Dead Levophed.” Its use waned. However, it gradually made a come-back, with the understanding that the old approach of using it (in very high doses without adequate fluid resuscitation) was more to blame than the drug itself.

Norepinephrine is an inopressor. Its primary effect is vasoconstriction via alpha-1 agonism. This effect predominates clinically, to the extent that some providers believe it to be a “pure” vasoconstrictor—but not so. It has a small but important degree of beta-1 activity. How much? This is an apples versus oranges comparison; we might call it “70/30” or “80/20” or some other balance, but those numbers would be arbitrarily invented and clinically meaningless. Think of it this way instead: norepinephrine induces vasoconstriction along with approximately enough inotropy to balance the increase in afterload.

Afterload, you should remember, is the resistance against which the heart must beat in order to push blood forward. Afterload is mainly determined by vascular tone, so vasoconstricting agents—like Levophed—increase afterload. This means that in order to preserve forward flow, the heart will have to work harder. It’s like downshifting the gear on a car: the load on the engine (the heart) is increased, so output will only be improved if the engine is able to meet that load. Otherwise, cardiac output can actually drop.
A young, healthy pump may be able to meet an increase in afterload without any help. However, in many older, critically ill patients, the heart is already at its limits and won’t have the reserve. It will need some extrinsic support—a positive inotrope—on top of the pressor. This is essentially what you get from norepinephrine: a primary vasopressor with just enough activity on the pump to help it push through the added afterload. The net effect is therefore an elevated blood pressure with a fairly neutral effect on the heart. For most hypotensive ICU patients, this is what we want.

Thus, norepinephrine is rarely the “wrong” choice in a patient needing pressors, and is generally a reasonable place to start. However, it may not be optimal in every case. For a patient needing a pure inotrope or inodilator, it would not be ideal. Additionally, it might not be the first choice in a patient whose cardiac activity is already hyperactive. For instance, in a patient exhibiting significant tachycardia (a 20-year-old trauma patient with sinus tach in the 140s), one could argue that the chronotropic effects of norepinephrine—relatively weak as they may be—are not needed and may even be deleterious. An even better example might be the patient who is experiencing tachyarrhythmias, such as episodes of rapid atrial fibrillation or ventricular tachycardia; in that case it may be a good idea to completely discontinue all drugs with potential proarrhythmic effects, and select a pure vasoconstrictor instead.

Or not. There isn’t much evidence supporting these arguments. This gets to a deeper problem, which is that there is relatively little data for most pressor-versus-pressor debates. Studies do exist, but generally have failed to show meaningful differences, suggesting that the subtle physiological reasons we believe one drug may be superior to another in individual patients may not be easily demonstrated in large-scale trials.
The best evidence for norepinephrine is in sepsis, particularly compared against dopamine, and the current guidelines (from the 2016 Surviving Sepsis campaign) do recommend norepinephrine as the first-line pressor in that situation. Dose range is generally from 0.01 mcg/kg/min to a maximum that depends on unit policy, usually somewhere between 1.0 mcg/kg/min and 3.0 mcg/kg/min. (Weight-based dosing is a good practice, but some units still use straight doses, for which a norepinephrine dose is around 1–300 mcg/min.)

It is a potent vesicant, meaning that tissue ischemia and infarction can readily occur if it extravasates from a peripheral IV site. Peripheral norepinephrine is therefore a bit sketchy; in most centers it is acceptable in low concentrations through a reliable IV as a temporary measure, but should be switched to a central line as soon as possible. Don’t be afraid to run it peripherally in a sick patient while you place a line—it’s better than leaving them hypotensive—but do place the line ASAP, then switch it over.

Epinephrine, aka “adrenaline” across the pond, is the sassy little sister to norepinephrine. Like norepi, it is a catecholamine. And like norepi, it avidly binds at alpha-1 adrenergic receptors. Unlike norepi, it is also a potent beta-1 agonist. (It also binds beta-2, causing bronchodilation, which explains its role in anaphylaxis and asthma but is generally irrelevant when we use it as a pressor.) Clinically speaking, it can be thought of as providing equal parts vasoconstriction and inotropy. It is still a strong vasoconstrictor, although perhaps slightly less so than norepi (for various reasons, including the presence of beta-2 receptors in the peripheral vessels). However, it is far more cardioactive. This is both its greatest strength and its greatest weakness. Epi is probably too cardioactive to use as a routine first-line pressor.
At least, that is how most of us feel, although some do practice this way, and—remember that dearth of clear evidence?—it’s hard to prove them wrong. However, you will find that it tends to provoke significant tachycardia in many patients at therapeutic doses, or even subtherapeutic doses. When the heart rate is 150 and your MAP is still 50, do you keep turning up the epi?

In fairness, although sinus tachycardia is common, epinephrine does seem to cause fewer actual arrhythmias than dopamine (the other popular pressor with strong inotropic and chronotropic effects). Thus, despite the sun setting upon dopamine, epinephrine still enjoys popularity as a second- or third-line pressor for diseases like sepsis. If you have “maxed out” your Levophed, maybe this is what you add next. (A caveat and a question to ponder: if you have indeed reached the maximum dose of norepinephrine your unit believes to be safe and effective, you might legitimately wonder whether stacking on a second catecholamine—although common practice—is likely to be either safe or effective. After all, despite its different binding affinities, epinephrine is in the same class as norepinephrine, and in fact is synthesized from it in vivo. Perhaps a better choice would be an agent that works upon different receptors; see more on vasopressin later.)

At low doses, epinephrine can also be used as a fairly pure inotrope, with little pressor effect. Running epinephrine to help support the heart while using a separate norepinephrine or dopamine drip as your pressor is one approach to the patient with both distributive and cardiogenic shock, and allows separate titration of each drug to treat the separate problems. Cardiac surgeons are sometimes fond of this.

An idiosyncrasy of epinephrine is its propensity to elevate the serum lactate.
Lactate is traditionally viewed as a marker of anaerobic metabolism (Type A lactic acidosis—although this idea has come under attack in recent years), and hence is often followed as an endpoint of resuscitation; it can therefore be vexing when you start an epinephrine drip and the lactate jumps from 3.0 to 5.0. But don’t fret; this is not a sign of worsening perfusion. It is a direct medication effect, or Type B lactic acidosis, related to glucose metabolism.

Epi is probably not much safer than norepinephrine when given peripherally. The exception is when given as a bolus during cardiac arrest, which is a universal practice; as an extension of this concept, it is probably acceptable to push smaller, non-code doses through a good peripheral IV when needed as well. (Norepinephrine, in contrast, is almost never pushed.) Dosing is around 0.01–1 mcg/kg/min, or around 2–20 mcg/min; as with norepi, the maximum dose varies widely.

In summary, epinephrine is usually used either when a greater degree of inotropy is needed than norepinephrine can provide, or it is added to norepinephrine as an additional drug. Proceed to Part III, where we’ll discuss the pure vasoconstrictors.
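As a footnote to the dose ranges above: both drugs are ordered in mcg/kg/min, but infusion pumps run in mL/hr, so the conversion arithmetic is worth making explicit. A minimal sketch in Python (the bag concentrations in the examples are illustrative assumptions; actual concentrations vary by unit and pharmacy):

```python
def infusion_rate_ml_hr(dose_mcg_kg_min, weight_kg, concentration_mcg_ml):
    """Convert a weight-based dose (mcg/kg/min) into a pump rate (mL/hr)."""
    mcg_per_min = dose_mcg_kg_min * weight_kg   # total drug delivered per minute
    mcg_per_hr = mcg_per_min * 60               # scale to one hour
    return mcg_per_hr / concentration_mcg_ml    # divide by drug per mL of fluid

# Example: norepinephrine at 0.1 mcg/kg/min for an 80 kg patient,
# assuming a 4 mg / 250 mL bag (16 mcg/mL):
print(infusion_rate_ml_hr(0.1, 80, 16))    # 30.0 (mL/hr)

# Example: epinephrine at 0.05 mcg/kg/min for the same patient,
# assuming a 1 mg / 250 mL bag (4 mcg/mL):
print(infusion_rate_ml_hr(0.05, 80, 4))    # 60.0 (mL/hr)
```

Note that 0.1 mcg/kg/min in an 80 kg patient works out to 8 mcg/min, which sits comfortably within the straight-dose range quoted above.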
CityDoc-UK’s Premier Vaccine Specialists

CityDoc is a trusted provider of expert medical services at our clinics throughout the UK. With nearly 50 travel clinics in London and 100+ nationwide, you can be sure to find a clinic nearby. Established in 2006, we have over 250,000 registered clients, and over 100 clinics across the UK provide chickenpox vaccinations.

Chickenpox is a highly infectious disease caused by the virus Varicella Zoster. It mainly affects children under 10 years of age, in whom serious complications are less common than in adults. The Chickenpox virus is spread through personal contact with an infected individual or through coughing and sneezing. It is possible to develop Chickenpox from contact with someone who has shingles; however, you cannot contract shingles directly from someone who is infected with Chickenpox. The virus is most infectious 1-2 days before the rash occurs and for around 5 days after (or until the rash crusts over). 90% of household contacts of someone infected with Chickenpox will catch the infection if they have never had it before.

Chickenpox has an incubation period of 3 weeks. This is the time taken from contracting the infection to developing the symptoms. The classic symptom is a rash, which is very itchy and can be widespread, affecting the face, chest, arms and legs. Sometimes, the blisters can occur inside the mouth. There is often fever and cold symptoms also. The symptoms tend to improve after 1 week. The illness can vary from mild symptoms with a few spots to an itchy rash covering the whole body, which can be very distressing, affecting sleep, school and work and causing scarring to the skin.

In children, complications of Chickenpox are rare, but include:
- Superimposed bacterial infection of the skin, which can be widespread
- Neurological complications such as encephalitis (inflammation of the brain) or meningitis (inflammation of the lining of the brain)
- Very rarely, inflammation of the kidney and arthritis

Adults who catch Chickenpox are more likely to have severe illness with complications. Chickenpox in pregnancy is a serious disease for the mother and especially the baby. Therefore, it is important to know before trying for pregnancy whether you have immunity to this illness; if not, vaccination may be appropriate to protect you.

There is no specific treatment for Chickenpox, as most children will recover spontaneously. The mainstay of managing the infection includes pain medications, antihistamines and soothing skin lotions such as calamine. In severe infections, antiviral drugs can be used to modify the illness.

It is important, if you are working closely with children or in health care, to check whether you have already had Chickenpox, as vaccination is available to protect you. As the disease is very infectious, if you are vulnerable to severe infection or have never had Chickenpox, then vaccination should be considered.

The Chickenpox vaccine has been used routinely in the childhood immunisation programme in the United States since 1995 and is safe and effective prevention against Chickenpox infection. Many other countries also routinely provide the vaccination in their immunisation schedules. The vaccine is live, containing weakened virus. Two doses of the vaccine provide 98% protection in children and 75% protection in adults against Chickenpox infection. In both groups, if breakthrough infection does occur, it is much milder and of a shorter duration than in those who have never been vaccinated.

The vaccine can be given to anyone over 12 months of age:
- to prevent development of Chickenpox infection in those who have never had it.
- to protect occupational groups, such as those working with children and health care workers who have never had Chickenpox infection.
- to prevent healthy susceptible contacts of immunocompromised patients from transmitting natural infection to them.
For example, siblings of a leukaemic child, or a child whose parent is undergoing chemotherapy.
- to prevent development of Chickenpox infection in those who have never had the illness and have been in close contact with a person with Chickenpox. The vaccine must be given within 3 days to prevent infection from occurring.

| Age can be given | Method of Administration | Dosing Schedule | Interval between doses | Booster dose requirement |
|---|---|---|---|---|
| 12 months onwards | Intramuscular injection to the thigh or deltoid muscle, depending on age | 2 doses | 4-8 weeks | None |

Post Exposure Prevention

To prevent infection from occurring in those who have never had chickenpox and have been exposed to infection (post-exposure prevention), 2 doses of the Chickenpox vaccine are required. The first dose must be given within 3 days of the exposure to prevent the disease from developing. The first dose can be given 3-5 days from exposure to modify the severity of the disease. After 5 days from exposure, there is no evidence that the vaccine will change the course of the infection, and it is therefore not beneficial. The second dose should be given after 4 to 8 weeks.

The vaccine cannot be given to the following groups:
- Anyone with a suppressed or weakened immune system caused by diseases such as leukaemia, lymphoma or severe HIV infection, or due to drugs such as oral steroids and cancer therapies.
- In the presence of an illness with a high temperature (above 38.5 degrees Celsius)
- If there is a previous history of severe allergic reaction to the Chickenpox vaccine or to any of the ingredients in the vaccine (see FAQ section)
- Anyone with active Tuberculosis
- Anyone with uncontrolled neurological disorders, such as epilepsy not responding to medications.
- Pregnant women

The Chickenpox vaccine cannot be given to pregnant women under any circumstances.
If a pregnant woman is not immune to Chickenpox and encounters the disease, she must see her NHS healthcare provider as soon as possible to start immunoglobulin treatment (passive antibodies against Chickenpox given via injection). Pregnancy must be avoided during the vaccination course and for a further 1 month after the second dose has been received.

The Chickenpox vaccine can be given to breastfeeding mothers. Studies have shown that the virus is not transmitted in breast milk to the infant.

Common Side Effects
- Local reactions at the injection site, including pain, redness and swelling
- A chickenpox-like rash, which occurs in 10% of adults and 5% of children who receive the vaccine. The rash is either localised around the injection site or generalised across the body. On average, there are usually around 5 spots.
- The vaccine virus can stay in the body for life and reactivate as shingles, but the risk of this occurring is substantially lower than with naturally occurring infection.

Risk of transmitting infection

There have been isolated cases where the vaccine virus has been transmitted from the vaccinated individual to non-immune contacts. As a general rule, contact with any individual with a normal immune system is not a concern, as the vaccine virus is weakened and will easily be dealt with by the immune system. However, because of the potentially serious complications of Chickenpox infection in certain groups, we advise that close contact with the following individuals is avoided for a period of 6 weeks after the administration of the first dose:
- Pregnant women who have never had Chickenpox infection.
- Newborn babies (those within 28 days of birth) of mothers who have never had Chickenpox infection
- Anyone with a poor or suppressed immune system, such as those receiving cancer treatments.
However, it is important to bear in mind that the risk of transmission has only occurred from those individuals who have developed the rash following vaccination, and is extremely rare. This is as opposed to the highly infectious nature of Chickenpox itself.

Interactions with Other Vaccines

The Chickenpox vaccine can be safely given at the same time as:
- Diphtheria, tetanus, polio and pertussis vaccines
- Meningitis B vaccine
- All travel vaccines, including yellow fever

Chickenpox and MMR Vaccine Interactions

The Chickenpox vaccine must be given either on the same day as the MMR vaccine or separated by an interval of 4 weeks. This is because the MMR vaccine causes a reduced response to the Chickenpox vaccine, which means that breakthrough infection with Chickenpox is more likely if this interval is not respected. However, the data available show that breakthrough infection with Chickenpox in these cases tends to be mild and not full-blown severe Chickenpox infection. Where both vaccines have been given within 4 weeks of each other, it is advisable to consider a further dose of the vaccine given second.

So whether you would like to protect your child or yourself against chickenpox, visit your local CityDoc clinic today.

Frequently Asked Questions

1) How safe is the Chickenpox vaccine?

There have been extensive clinical studies and also post-marketing experience with the Chickenpox vaccine, which have not demonstrated any serious adverse effects with this vaccine. Additionally, it has been used routinely in the US since 1995 without any safety issues being identified. The decision to vaccinate would be based on the assessment by the clinician during the consultation.

3) Why is the Chickenpox vaccine not part of the NHS childhood schedule?
The Joint Committee on Vaccination and Immunisation (JCVI), which advises the UK Government, has so far recommended that it would not be cost-effective to introduce the Chickenpox vaccine into the routine UK schedule, for the following reasons:
- it is not cost-effective in the short term;
- an increase in the incidence of herpes zoster (shingles) cases as a result of childhood varicella vaccination is likely to occur;
- a potential increase in varicella (Chickenpox infection) among adults is also likely if there is low vaccine coverage;
- it is not guaranteed that varicella vaccination will protect against herpes zoster (shingles) in later life, due to re-infection. With poor uptake levels, re-infection would be common. The protection against herpes zoster is a key factor in making varicella vaccination cost-effective, and therefore re-infection would have an effect on the cost-effectiveness of vaccination (JCVI minutes 2009).

The concern with regard to an increase in shingles incidence has not currently been demonstrated in countries where the varicella vaccine is routinely given. However, it is too early to comment on epidemiological trends in these countries, and further data would need to be obtained to confirm or refute this point. In the meantime, the vaccine is available privately for those who feel it would be beneficial to vaccinate against this illness and are not entitled to it under the NHS.

4) Is my child infectious after receiving the Chickenpox vaccine?

As stated earlier, there have been isolated cases of transmission of the vaccine virus to people who do not have immunity to Chickenpox. If the person is already immune (has had Chickenpox infection), then there is no risk of transmission. This is very rare, and Chickenpox infection itself is very contagious. The clinician will discuss this with you in the consultation.

5) Does my child need to stay away from nursery/school following vaccination?
Generally, they do not need to be kept away, as the risk of them transmitting the infection is very low. However, if they develop a Chickenpox rash, they may need to be kept away. The doctor will discuss this in detail with you during the consultation.

6) Can my child have the vaccine if they are unwell?

If your child has a fever, then we would not recommend the vaccine is administered until they are well. With regards to minor illnesses without a fever, such as a cold or cough, vaccination can proceed. However, the decision to vaccinate would be made by the doctor following consultation and assessment.

7) If I am pregnant and have had Chickenpox infection already, is it OK for my child to be vaccinated?

The vaccine virus cannot be transmitted to, and cause infection or complications in, people who already have immunity. Therefore, if you know that you have definitely had the Chickenpox infection, then both yourself and your unborn baby are safe and your child can be administered the vaccine.

8) Is there a risk to very young siblings from older children being given the vaccine?

If you have had Chickenpox infection yourself, you will pass this immunity on to your baby and there would be no concerns about transmission of the vaccine virus. If you have never had Chickenpox, then your baby will not have any immunity. However, the risk of transmission is very rare and has only occurred from those vaccinated individuals that have developed a Chickenpox-like rash. The vaccine virus is weakened and is unlikely to cause any noticeable infection in babies. The exception to this is newborn babies, that is, babies under 28 days of age, born to mothers who have never had Chickenpox, as they may be vulnerable to severe illness from the vaccinated individual.

9) Are two doses of the vaccine required?

Two doses of the vaccine are needed to provide full protection against the illness.

10) What is a significant exposure to Chickenpox and when can the vaccine be used to prevent infection?
A significant exposure to Chickenpox would be considered as the following:
- Chickenpox infection in a household contact (sibling or child)
- Chickenpox infection confirmed in the nursery or school, particularly if your child has been in contact with the infected person.

The disease is infectious from 1-2 days before the rash comes out and for 5 days after. Therefore, if the exposure has been within 3 days (ideally day 1 or 2 of the rash), then the vaccine can be given to prevent the disease from occurring. If the exposure was more than 3 days ago, but less than 5 days, then the vaccine can still be used to reduce the severity of the illness. After 5 days, the vaccine has no effect on the disease.

12) I am not sure if my child has been exposed to Chickenpox; can they still be vaccinated?

The incubation period for Chickenpox is 3 weeks. The disease is also highly infectious. Therefore, it is possible that your child may already be exposed and be incubating the virus. The decision to vaccinate would be made following a risk assessment during the consultation with the clinician. If it is decided to vaccinate, then you must report any rash so that it can be swabbed to see if the vaccine virus is responsible or if it is due to natural infection. If the latter, then a further dose of the vaccine is not required.

13) If I have received one dose of the vaccine and then become exposed to Chickenpox, do I still need the second dose?

In clinical trials, individuals between 12 months and 12 years of age who had received one dose of the Chickenpox vaccine and were then exposed to Chickenpox infection were either completely protected from Chickenpox or developed a milder form of the disease. Therefore, it is likely that one dose of the Chickenpox vaccine will provide significant protection against the disease and, if exposure occurs, any breakthrough illness would be mild.
It is reasonable to wait until the incubation period has elapsed to see if Chickenpox infection has occurred and, if not, there is no harm in administering the second dose for full protection.

Visit One Of Our Vaccine Specialists

CityDoc is a trusted provider of expert medical services at our clinics throughout the UK, with nearly 50 travel clinics in London.
by Babu G. Ranganathan

Thanks to Charles Darwin, many have confused natural selection with evolution itself. Yes, Charles Darwin did show that natural selection occurs in nature, but what many don't understand is that natural selection itself does not produce biological traits or variations. Natural selection can only "select" from biological variations that are produced. Natural selection only operates once there is life and reproduction, and not before. Therefore, natural selection could not have had any influence in life's origin.

Natural selection is an entirely passive process in nature, not an active one. The term "natural selection" is simply a figure of speech. Nature, of course, does not do any active or conscious selecting. When a biological change or variation occurs within a species, and this change or variation (such as a change in skin color, etc.) helps that species to survive in its environment, then that change or variation also will survive and be preserved ("selected") and be passed on to offspring. That is called "natural selection" or "survival of the fittest." But neither "natural selection" nor "survival of the fittest" produces any change. Natural selection works with evolution, but it is not evolution itself.

Since natural selection can only "select" from biological variations that are produced and which have survival value, the real question to ask is what kind of biological variations are naturally possible. How much biological variation (or how much evolution) is naturally possible in nature?

Darwin did not realize what produced biological variations and traits. Darwin simply assumed that any kind of biological change or variation was possible in life. However, we now know that biological traits and variations are determined and produced by genes or the genetic code. The evidence from science shows that only micro-evolution (variations within a biological "kind" such as the varieties of dogs, cats, horses, cows, etc.)
is possible but not macro-evolution (variations across biological "kinds", especially from simpler kinds to more complex ones). The only evolution that is observable and possible in nature is micro-evolution (or horizontal evolution) but not macro-evolution (or vertical evolution). The genes (chemical and genetic instructions or programs) for micro-evolution exist in every species, but not the genes for macro-evolution. Unless Nature has the intelligence and ability to perform genetic engineering (to construct entirely new genes, and not just to produce variations and new combinations from already existing genes), then macro-evolution will never be possible.

We have varieties of dogs today that we didn't have a couple of hundred years ago. The genes for these varieties had always existed in the population of the dog species, but they simply never had an opportunity for expression until the right conditions came along. The genes themselves didn't evolve! What we call "evolution" is really nothing more than the expression of already existing genes that didn't have opportunity for expression before. No matter how many varieties of dogs come into being, they will always remain dogs and not change or evolve into some other kind of animal. Even the formation of an entirely new species of plant or animal from hybridization will not support Darwinian evolution, since such hybridization does not involve any production of new genetic information but merely the recombination of already existing genes.

Modifications and new combinations of already existing genes for already existing traits have been shown to occur in nature, but never the production of entirely new genes for entirely new traits. This is true even with genetic mutations. For example, mutations in the genes for human hair may change the genes so that another type of human hair develops, but the mutations won't change the genes for human hair so that feathers, wings, or entirely new traits develop.
Mutations may even cause duplication of already existing traits (i.e. an extra finger, toe, etc., even in another part of the body!), but none of these things qualify as new traits.

Evolutionists believe that, if given enough time, random or chance mutations in the genetic code caused by random environmental forces such as radiation will produce entirely new genes for entirely new traits which natural selection can act upon or preserve. However, there is no scientific evidence whatsoever that random mutations have the ability to generate entirely new genes which would program for the development of entirely new traits in species. It would require genetic engineering to accomplish such a feat. Random genetic mutations caused by the environment can never qualify as genetic engineering! Most biological variations within a biological kind (i.e. varieties of humans, dogs, cats, horses, mice, etc.) are the result of new combinations of already existing genes and not because of mutations.

For those who are not read up on their biology, a little information on genes would be helpful here. What we call "genes" are actually segments of the DNA molecule. DNA, or the genetic code, is composed of a molecular string of various nucleic acids (chemical letters) which are arranged in a sequence, just like the letters found in the words and sentences of a book. It is this sequence of nucleic acids in DNA that tells the cells of our body how to construct (or build) various proteins, tissues, and organs such as nose, eyes, brain, etc. If the nucleic acids in the genetic code are not in the correct sequence, then malfunctioning, or even worse, harmful proteins may form, causing serious health problems and even death.

There is no law in science that nucleic acids have to come together in a particular sequence. Any nucleic acid can just as easily bond with any other.
The only reason why nucleic acids are found in a particular sequence in the DNA of the cells of our bodies is because they are directed to do so by previously existing DNA. When new cells form in our bodies, the DNA of the old cells directs the formation of the DNA in the new cells.

The common belief among evolutionists is that, if given millions of years, radiation and other environmental forces will cause enough random changes (mutations) to occur in the sequential structure of the genetic code of a species so that entirely new sequences for entirely new genes will develop, which in turn will program for the formation of entirely new biological traits, organs, and structures that natural selection can then act upon.

Would it be rational to believe that by randomly changing the sequence of letters in a cookbook you will eventually get a book on astronomy? Of course not! And if the book were a living being, it would have died in the process of such random changes. Such changes, as transforming one book into another or the DNA of one species into the DNA of another, especially one more complex, simply cannot occur by random or chance alterations. It would require intelligent planning and design to change one book into another or to change the DNA of a simpler species into the DNA of a more complex one.

Yes, it is true that the raw biological materials and chemicals to make entirely new genes exist in every species, but the problem is that the random forces of nature (i.e. radiation, etc.) simply have no ability to rearrange those chemicals and biological materials into entirely new genes programming for entirely new traits. Again, mutations only have the ability to produce variations of already existing traits. It would require intelligent manipulation of genetic material (genetic engineering) to turn a fish into a human being. The random forces of the environment cannot perform such genetic engineering!
What about all the non-coding segments of DNA commonly known as "Junk DNA"? Evolutionists believe that the presently "non-coding" segments of DNA were at one time useful (that they actually coded for something) in an evolutionary past but became broken down and, therefore, now don't code for anything. Evolutionists believe that these "broken-down" genes will someday, by chance mutations, evolve into entirely new genes. How wrong they are. The latest science shows that "Junk DNA" isn't junk after all! It's we who were ignorant of how useful these segments of DNA really are. Recent scientific research published in scientific journals such as Nature has revealed that the "non-coding" segments of DNA are very useful after all, and even essential in regulating gene expression and intracellular activities. Even if these sections of DNA were truly useless, random mutations could never make them into useful newer genes any more than the random energy from earthquakes can make more advanced homes and buildings by randomly rearranging the structures of already existing homes and buildings. Furthermore, a half-evolved and useless organ waiting millions of years to be completed by random mutations would be a liability and hindrance to a species - not exactly a prime candidate for natural selection. In fact, how could species have survived over, supposedly, millions of years while their vital (or necessary) organs were still in the process of evolving? How, for example, were animals breathing, eating, and reproducing if their respiratory, digestive, and reproductive organs were still incomplete and evolving? How were species fighting off possibly life-threatening germs if their immune system hadn't fully evolved yet? Scientist and creationist Dr. Walt Brown, in his fantastic book "In The Beginning", makes this point by saying, "All species appear fully developed, not partially developed. They show design. 
There are no examples of half-developed feathers, eyes, skin, tubes (arteries, veins, intestines, etc.), or any of thousands of other vital organs. Tubes that are not 100% complete are a liability; so are partially developed organs and some body parts. For example, if a leg of a reptile were to evolve into a wing of a bird, it would become a bad leg long before it became a good wing." Usually what is meant by the term "biological kind" is a natural species, but this may not always be the case. The key to keep in mind here is that in order for evolution in nature to occur from one biological "kind" to another biological "kind", entirely new genes would have to be generated, not merely modifications and/or recombinations of already existing genes. If, for example, offspring are produced which cannot be crossed back with the original stock, then there is, indeed, a new species, but if no new genes or traits developed, then there is no macro-evolution (variation across biological kinds) and the two distinct species would continue to belong to the same "kind". If the environment doesn't possess the ability to perform genetic engineering and if macro-evolution really did not occur, then how else can one explain the genetic and biological similarities which exist between various species and, indeed, all of life? Although it cannot be scientifically proven, creationists believe that the only rational explanation for the genetic and biological similarities between all forms of life is a common Designer who designed and created similar functions for similar purposes and different functions for different purposes in all of the various forms of life, from the simplest to the most complex. Even humans employ this principle of common design in planning the varied architecture of buildings! If humans must use intelligence to perform genetic engineering, to meaningfully manipulate the genetic code, then what does that say about the origin of the genetic code itself! 
Young people, and even adults, often wonder how all the varieties or "races" of people could come from the same human ancestors. Well, in principle, that's no different than asking how children with different color hair (i.e., blond, brunette, brown, red) can come from the same parents who both have black hair. Just as some individuals today carry genes to produce descendants with different color hair and eyes, humanity's first parents possessed genes to produce all the variety and races of men. You and I today may not carry the genes to produce every variety or race of humans, but humanity's first parents did possess such genes. All varieties of humans carry genes for the same basic traits, but not all humans carry every possible variation of those genes. For example, one person may be carrying several variations of the gene for eye color (i.e., brown, green, blue), but someone else may be carrying only one variation of the gene for eye color (i.e., brown). Thus, both will have different abilities to affect the eye color of their offspring. Some parents with black hair, for example, are capable of producing children with blond hair, but their blond children (because they inherit only recessive genes) will not have the ability to produce children with black hair unless they mate with someone else who has black hair. If the blond descendants only mate with other blondes then the entire line and population will only be blond even though the original ancestor was black-haired. Science cannot prove we're here by creation, but neither can science prove we're here by chance or macro-evolution. No one has observed either. They are both accepted on faith. The issue is which faith, Darwinian macro-evolutionary theory or creation, has better scientific support. If some astronauts from Earth discovered figures of persons similar to Mt. 
Rushmore on an uninhabited planet there would be no way to scientifically prove the carved figures originated by design or by chance processes of erosion. Neither position is science, but scientific arguments may be made to support one or the other. What we believe about life's origins does influence our philosophy and value of life as well as our view of ourselves and others. This is no small issue! Just because the laws of science can explain how life and the universe operate and work doesn't mean there is no Maker. Would it be rational to believe that there's no designer behind airplanes because the laws of science can explain how airplanes operate and work? Natural laws are adequate to explain how the order in life, the universe, and even a microwave oven operates, but mere undirected natural laws can never fully explain the origin of such order. Of course, once there is a complete and living cell then the genetic program and biological mechanisms exist to direct the formation of more cells. The question is how life came into being when there was no directing mechanism in nature. An excellent article to read by scientist and biochemist Dr. Duane T. Gish is "A Few Reasons An Evolutionary Origin of Life Is Impossible" (http://icr.org/article/3140/). There is, of course, much more to be said on this subject. Scientist, creationist, debater, writer, and lecturer, Dr. Walt Brown covers various scientific issues (i.e. fossils, so-called "transitional" links, biological variation and diversity, the origin of life, comparative anatomy and embryology, the issue of vestigial organs, the age of the earth, etc.) at greater depth on his website at www.creationscience.com. On his website, Dr. Brown even discusses the possibility of any remains of life on Mars as having originated from the Earth due to great geological disturbances in the Earth's past which easily could have spewed thousands of tons of rock and dirt containing microbes into space. 
In fact, a Newsweek article of September 21, 1998 (p. 12) mentions exactly this possibility. An excellent source of information from highly qualified scientists who are creationists is the Institute for Creation Research (www.icr.org) in San Diego, California. Also, the reader may find answers to many difficult questions concerning the Bible (including questions on creation and evolution, Noah's Ark, how dinosaurs fit into the Bible, etc.) at www.ChristianAnswers.net. It is only fair that evidence supporting intelligent design or creation be presented to students alongside evolutionary theory, especially in public schools which receive funding from taxpayers who are on both sides of the issue. Also, no one is being forced to believe in God or adopt a particular religion so there is no true violation of separation of church and state. As a religion and science writer, I encourage all to read my Internet article "The Natural Limits of Evolution" at my website http://www.religionscience.com for more in-depth study of the issue. The author, Babu G. Ranganathan, has his bachelor's degree with concentrations in theology and biology and has been recognized for his writings on religion and science in the 24th edition of Marquis "Who's Who In The East". The author's articles may be accessed at www.religionscience.com. How many angels are there on the tip of the needle? This question is just as pointless as an attempt to find an answer to the question of how many NATO missiles there are in Europe.
<urn:uuid:7a67e939-e145-4d15-b6b0-0361abc6a6a6>
CC-MAIN-2021-43
https://english.pravda.ru/science/114722-half_truths_evolution/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00590.warc.gz
en
0.952858
3,402
3.25
3
Composition is a key element in photography and can make or break an image. Rules have been derived to help us create pleasing and well-balanced images that are easy to read. Let's see how composition works, particularly for astrophotography. Definition Of Composition In Photography (What it means) Composition: the subtle art of arranging different visual elements in a frame, mastered by painters since the year 42000 B.C. Painters soon realized that some particular arrangements of visual elements in the frame, and their relative size, produced scenes that were aesthetically pleasing and easier to "read" than others. Composition rules were born. Strictly speaking, in photography, with the notable exception of staged photo shoots, we cannot arrange the elements of a scene in the field of view of our camera. Instead, we work with and alter the camera's field of view. In short, we frame the scene rather than compose it. Either way, in photography we use the same composition rules developed by painters when we choose how to frame the scene. Why Is Composition Important In Photography? Let's illustrate why composition is important with an example, and for this let's consider the comparison in the image below of a plane passing in front of the Moon. It is clear that the subject of the photo is the Moon-plane ensemble: the Moon shaped as an arc and the plane as the arrow. How do we use the composition rules to create an interesting image out of what nature has offered us? First, we can ask ourselves: where is the best place in the frame for our subjects? In the left image above, the plane has "space" to fly into right in front of it. This is a far better framing (composition, if you will) than the one shown on the right, where the plane is right next to the edge of the frame. Whenever you have something moving in the frame, don't put the target next to the edge of the frame, but leave some space to "move into". 
The same applies if you have a person or an animal looking in one direction: compose so that the subject has some space to "look into". We now have a criterion for placing the frame around our subject for a pleasant effect. But what type of frame is best to use? In photography, it is easy to choose the shape (format) of the frame: 2:3, 3:4, 1:1, and 16:9 are the most common frame formats. In the case of the image above, the square format feels a bit tight, and the original 4:3 format has a lot of solid, boring, blue color. I felt the image would benefit more from a cinematic 16:9 crop. The final composition above is much more interesting than the squared versions: the Moon and the plane on the top left are well balanced by the puffy clouds emerging from the lower right corner. The plane has enough room to fly into, and there is not much empty sky. Here is a practical tip: if you are not crafting the whole scene yourself and/or you are in a hurry to catch the moment, frame larger so that you can adjust composition and frame format later on by cropping the image as you see fit. Does composition matter in astrophotography? One of the most common things I see when beginners post images of their brand-new astrophotography setup using a star tracker is the use of a ball head. The camera is almost never mounted directly on the declination bracket, and the reason is "easy framing" (wrong) and composition. More experienced astrophotographers often try to talk them out of using the ball head by pointing to flexure issues in the setup and by claiming composition is not relevant in astrophotography. Or is it? Starry landscapes and star trails benefit from the same composition rules used in daytime landscape photography. Things like an interesting foreground, a proper balance between the landscape and the sky (horizon positioning), leading lines, and so on still apply. 
In starry landscapes, even the most magnificent sky cannot save your image from a dark, boring foreground and bad composition. When photographing the planets and the full lunar disc, the composition is, indeed, quite irrelevant. Since some telescopes flip the image (either vertically or both vertically and horizontally), you may want to rotate the image so that your target looks as you see it in the sky. I find that a square crop with the target centered in the image is a good choice for this kind of astrophotography. For deep-sky objects, the composition is less stringent than for starry landscapes, but still important, particularly for wide starfields. Below is a photo of the "Summer Triangle", with Deneb, Altair, and Vega forming the vertices of the triangle. In this case, it is nice to frame so that the Milky Way band runs diagonally through the frame for a more dynamic image. Avoid placing a target right next to the edge of the frame: in the image below, I should have framed so as to leave more space on the left, not to pin the Flame Nebula to the edge. If you close in on a target, so as to fill the frame with it, then the composition is not really important, but other things can be considered. Pareidolia is the effect of seeing faces, animals, and objects in particular patterns. Many nebulae take their name from their shape, reminding us of an animal (the Pelican and Shark Nebulae), an object (the North America, California, and Flame Nebulae, to name a few), or a face (the Witch Head Nebula). Here is an interesting thought for judging the quality of many DSO images: a DSO image is only as good as the pareidolia effect it triggers in the viewer's brain. Pareidolia, though, works only if the pattern is oriented the same way we usually see the object the pattern reminds us of. This means that you may want to flip or rotate the image until you can trigger pareidolia in the viewer. At other times it's not pareidolia but an optical illusion that matters the most. 
Take the Andromeda Galaxy, for example. As we see it in the sky from the Northern Hemisphere, the galactic core seems to bulge out of the plane of the galaxy. Flip the image, and now the core is "sinking" into the galactic plane, creating a much more pleasant 3D illusion. What Are The Different Types Of Composition In Photography? General rules of composition Before concluding this article on composition for astrophotography, let's recap some of the most common composition rules. And many more can be used. The Rule Of Thirds This is probably the most classic and well-known composition rule. Let's consider again the image of the Moon and the plane we saw at the beginning of the article to see the rule of thirds in action. This time, we will focus on the position of the different visual elements in the frame. Let's divide both the width and the height of the frame into thirds: this creates the grid you see in the photo below. The rule says that you should put the subject at the crossings of the lines, rather than dead center in the frame. The rule is also useful when it comes to placing the horizon in a landscape. If you give more weight to the foreground, place the horizon two-thirds of the way up the frame; if you value the sky more, place it at the lower third. Leading lines guide the viewer into the image. A classic example is a road leading the viewer to the horizon. A big no-no is leading lines that bring the viewer outside the frame, as the dock does in the example below. Composition rules can be combined. In the image below, the foreground is obviously the most interesting part of the image, so I placed the horizon at ⅔ of the image. But the bridge on the left creates a path to the small town at the foot of the hills in the background that lets you explore past the small waterfall in the foreground. Balance The Image A good image is a balanced image. 
You do not want all the action crammed into a small part of the image, or huge visual elements distracting from the real subject. Visual elements should be framed so as to be in harmony. In the image below, I could have framed the little chapel with the empty road in the foreground, but that would be a lot of empty space. Instead, I placed the camera on top of my car to fill the foreground and better balance the image. In the comparison below, the chapel is too imposing in the top image: the image is not balanced. The idea of this selfie is to show the "backstage" of a typical astrophotography session, but the imposing chapel steals the viewer's attention. By reframing the scene to include only a part of the chapel wall, the image is more balanced and the viewer can now "see" me, rather than focusing on the building. Finally, if you are after a wide starfield, try to compose so that more interesting targets are visible in the image; otherwise, consider cropping the image tight on the main target. In the wide Orion starfield below, the Great Orion Nebula, the Flame and Horsehead Nebulae, and M78 all line up diagonally in the frame, for a nice composition. Yet, the lower right part of the image feels a tad empty, with nothing to balance the two bright stars in the upper part of the frame. You don't need much to balance a deep-sky image: a bright star is often enough, as demonstrated in the comparison below. In the top image, the star Gienah in the Cygnus constellation is bright enough to attract attention and "fill" the right part of the frame. But if you clone the star out (bottom), the image feels much less balanced, as the right part feels way too empty, since nothing is left to balance the large Veil Nebula on the left. Composition rules are there to help us create better images, and astrophotography is no exception. 
While for planetary and lunar photography you can get away with a square crop and the target dead center in the frame, starry landscapes and star trails benefit from the same composition rules used for daytime landscape photography. Things such as image balancing also apply to wide starfields, while taking advantage of pareidolia and optical illusions can improve images of deep-sky objects.
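The rule of thirds and the crop-to-format advice above can be sketched in a few lines of code. The snippet below is a minimal illustration in plain Python (the function names and the 4000×3000 frame size are my own hypothetical choices, not part of any photo-editing tool): it computes the four "power point" intersections of the thirds grid, and a centered crop box for trimming a frame to a target aspect ratio, such as the 4:3-to-16:9 crop used in the Moon-and-plane example.

```python
# Rule-of-thirds helper: the four "power points" where the
# thirds gridlines cross, for a frame of the given size.
def thirds_points(width, height):
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

# Centered crop box (x, y, w, h) that trims a frame down to a
# target aspect ratio, e.g. a cinematic 16:9 crop of a 4:3 frame.
def center_crop(width, height, target_w, target_h):
    target = target_w / target_h
    if width / height > target:            # frame too wide: trim the sides
        new_w, new_h = round(height * target), height
    else:                                  # frame too tall: trim top and bottom
        new_w, new_h = width, round(width / target)
    return (width - new_w) // 2, (height - new_h) // 2, new_w, new_h

# A hypothetical 12 MP, 4:3 frame cropped to 16:9 keeps the full
# width and trims 375 px from both the top and the bottom.
print(center_crop(4000, 3000, 16, 9))  # (0, 375, 4000, 2250)
print(thirds_points(4000, 2250))       # power points of the cropped frame
```

Framing wider in camera and cropping afterwards, as the practical tip above suggests, keeps both the format choice and the placement of the subject on the thirds grid open in post-processing.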
<urn:uuid:9c944ab0-26ed-4455-b301-c7edbb56e045>
CC-MAIN-2021-43
https://nightskypix.com/what-is-composition-in-photography/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00711.warc.gz
en
0.924095
2,261
3.5
4
The children enjoyed a fantastic four-day stay in Hathersage. We played games, went on a night walk, visited the grave of Little John, walked up Stanage Edge, did orienteering and a stream study. We also visited a dairy farm on the final day and enjoyed an ice cream made using the milk from the dairy. We hope you enjoy the photos below from our amazing residential. This week in English we have been planning, drafting, editing and producing the final draft for our informal letters. We have written in first person from the viewpoint of our class book's main character, Arthur. We have included details from the story and described parts of his adventures. This week in Maths, we have been multiplying fractions. We looked at multiplying a fraction by an integer and then looked at multiplying two mixed number fractions together. We then moved on to finding the fraction of a given amount. This week in R.E. we learnt about the key Islamic figure Malala Yousafzai. We learnt about her story and discussed the commitment she showed to her religion and education. We found out what impact she has had on education for girls and then ordered the main events of her life on a timeline. This week in Computing, we have learnt about networks and the basics of how a network functions. We then used drama to recreate what happens in a network when retrieving a file. This week in Geography, we have been learning how to accurately use a compass. We learnt about the points on a compass and how a compass works, and then we had to direct our partners around the classroom using compass directions. This week in English, we have been learning about the features of an informal letter. We identified the features, thought of examples for each feature and linked these features to our class book by imagining what Arthur would write in an informal letter to Atrix. This will help us next week when we write a letter to Atrix. In Maths this week, we have been subtracting fractions. 
We first looked at subtracting fractions with the same denominator, moving on to converting the denominators to ensure they were the same before subtracting. We ended the week with subtracting two mixed number fractions. This week in Geography, we learnt about the symbols that you would find on an Ordnance Survey map. We identified the symbols, learnt about their meaning and then completed a sorting activity, matching each symbol to its correct meaning. In Science this week, we learnt about the changes that occur during old age. We played a true or false game to learn what the facts about growing old are and what the myths about old age are. We then created an informative poster about the changes that occur as we grow older. In R.E. this week, we learnt about the meaning of commitment. We thought about our own commitments and linked this to the commitments that religious people might have. We then created a mind map outlining all the things we are committed to. This week in Maths we have been learning how to add fractions. We have focused on adding fractions that have the same denominator and then moved on to fractions that have common denominators. We also converted improper fractions to mixed number fractions if our answer was greater than one whole. This week in English we have been learning about our new class book 'Arthur and the Golden Rope'. We made a prediction based on the front cover, then read and recapped the story in a storyboard and used a dictionary to check the meaning of any words we found hard to understand. We also used drama to help us become more familiar with the characters in the book. This week in Geography, we looked at using an atlas and learnt about all the different symbols and their meaning on an Ordnance Survey map. We located Europe, the U.K. and Worksop on the map and compared Worksop to other areas, looking at rivers, canals, hospitals and how hilly the area was. 
This week in Science, we have looked at the different stages of human development. We learnt about the prenatal, infant, childhood, adolescent, middle age and old age stages of life. We learnt about how we change physically at each stage and what our capabilities are at each stage of human development. We were very happy to receive the Attendance Trophy for our 100% attendance last week! Well done Macaw Class WC 27th January This week in English, we have been using features of a narrative to build up to writing our own story based on our class focus 'The Saga of Biorn'. We have been making predictions, using dialogue, applying descriptive techniques and creating characters and settings. In Maths, we have been continuing our learning on multiplication and short division. We are recapping our learning and using short division to solve problems. We then completed our end-of-unit test. This week in Science, we learnt about metamorphosis in animals. We learnt about the life cycles of insects and amphibians and discussed the changes they go through in their lifetime. In History, we looked at daily life as a Viking. We discussed their home life, how they built their houses and what materials they used, what animals and crops they farmed, what clothes they wore and life as a Viking child. We created a settlement image outlining all the main aspects of Viking life. WC 20th January This week in English we have looked at story features and looked at retelling a known tale. We used a 'Story Mountain' to outline the main areas of a story and then looked at our new class focus 'The Saga of Biorn'. We looked at the introduction and tried to recall as much information as we could from the excerpt. We then wrote a setting and character description based on a Viking and a Viking settlement. This week in Maths we have focused on a recap of long and short multiplication and then moved on to short division. 
We used our times table knowledge to help us use the formal written method correctly. In Computing this week we were looking at coding and its meaning. We learnt about different codes and shortcuts and then used these instructions to guide our partner around the classroom. WC 13th January This week in English we have been writing a recount based on our school trip to Perlethorpe. We learnt a lot of facts about Vikings and took part in many fun activities such as making a settlement, dressing as a Viking, making jewellery, making flour and playing Viking games. We recalled our activities, writing in paragraphs including an introduction and conclusion. This week in Maths we have been looking at multiplying 4-digit numbers by 1-digit numbers using short multiplication. We focused on using and applying our times table knowledge to work out each stage of the multiplication. This week in History we have been comparing two Anglo-Saxon kings: King Alfred the Great and King Athelstan. We learnt about both kings and then discussed which one we thought was the best king and why. W/C 6th January This week in English, we have been learning about the features of a recount. We have learnt about technical language, time conjunctions, fronted adverbials and the features of an introduction and conclusion in a recount. This will help us to write a recount about our trip to Perlethorpe next Monday. This week in Maths, we have been looking at multiplication. We recapped the grid method and learnt new methods, such as short multiplication and area models. We enjoyed using short multiplication and focused on this to solve a range of multiplication problems. We also enjoyed using area models to solve problems and felt that the visuals helped us to understand the problems. This week in History, we learnt about the first Viking invasion. 
We created a timeline to outline the key dates in Viking history and started to learn some of the key names, such as King Ethelred and King Alfred. We loved learning about the battles and gruesome history and we are looking forward to learning more about the Vicious Vikings. This week in Science we recapped the parts of a flower and how these parts aid the reproduction of plants. We learnt about the scientific name and function of each part and used this information to create an information card in our books. Our Spring Term topic will be 'Vicious Vikings'. Where did the Vikings come from? Where and why did they invade and settle? Why did they leave Scandinavia? How do we know about them? We will also be developing our geographical skills through our residential trip to Hathersage. Read below to find out what we will be learning about in the different subject areas: Geography and History: In our Geography lessons, we will be securing our map-reading skills and participating in stream studies. We will explore the outdoors through orienteering courses and participate in fieldwork studies. Our History lessons will focus on the Viking era. Art and Design Technology: As part of our creative learning this term, we will be creating woodland pictures and replicating the sounds of the stream. We will demonstrate Viking battle re-enactments and create shields and Viking longboats. R.E.: For our R.E. lessons, we will be learning about the Islamic faith and the meaning behind their stories. Science: 'Living Things and their Habitats' and 'Animals (Including Humans)' are the two themes for our science lessons. We will describe the differences in life cycles and recognise the changes as humans develop to old age. Computing: During the Spring Term, we will develop our coding skills and practise these skills using new coding applications and websites. 
Music: During the Spring Term, we will be continuing to learn how to play a range of instruments as part of a class band with Mr Bellingham. P.E: We will continue to have our weekly PE sessions with Mr Scott where we will be developing the skills needed to play lacrosse and tennis. We will also be continuing with our weekly swimming session, as well as a weekly Karate session with Sensei Slaney. French: Madame Walsh will continue to teach our weekly French lesson. We will be developing new vocabulary and skills during this time. Venture Experts: This term, we will continue with our Venture Experts sessions on a Tuesday afternoon. During this time, we participate in a range of activities with different teachers. We will also be showcasing our learning to the school community.
<urn:uuid:2504728b-df57-4ded-9dc8-1df8b6cb0610>
CC-MAIN-2021-43
https://www.norbridge.org/spring-term-8/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00670.warc.gz
en
0.956263
2,132
3.015625
3
If your sexual history and current signs and symptoms suggest that you have a sexually transmitted disease (STD) or a sexually transmitted infection (STI), your doctor will do a physical or pelvic exam to look for signs of infection, such as a rash, warts or discharge. Laboratory tests can identify the cause and detect coinfections you might also have. - Blood tests. Blood tests can confirm the diagnosis of HIV or later stages of syphilis. - Urine samples. Some STIs can be confirmed with a urine sample. - Fluid samples. If you have open genital sores, your doctor may test fluid and samples from the sores to diagnose the type of infection. Testing for a disease in someone who doesn't have symptoms is called screening. Most of the time, STI screening is not a routine part of health care. Screening is recommended for: - Everyone. The one STI screening test suggested for everyone ages 13 to 64 is a blood or saliva test for human immunodeficiency virus (HIV), the virus that causes AIDS. Experts recommend that people at high risk have an HIV test every year. - Everyone born between 1945 and 1965. There's a high incidence of hepatitis C in people born between 1945 and 1965. Since the disease often causes no symptoms until it's advanced, experts recommend that everyone in that age group be screened for hepatitis C. - Pregnant women. All pregnant women will generally be screened for HIV, hepatitis B, chlamydia and syphilis at their first prenatal visit. Gonorrhea and hepatitis C screening tests are recommended at least once during pregnancy for women at high risk of these infections. Women age 21 and older. The Pap test screens for changes in the cells of the cervix, including inflammation, precancerous changes and cancer. Cervical cancer is often caused by certain strains of HPV. Experts recommend that women have a Pap test every three years starting at age 21. After age 30, experts recommend women have an HPV test and a Pap test every five years. 
Or, women over 30 could have a Pap test alone every three years or an HPV test alone every three years.
- Women under age 25 who are sexually active. Experts recommend that all sexually active women under age 25 be tested for chlamydia infection. The chlamydia test uses a sample of urine or vaginal fluid you can collect yourself. Reinfection by an untreated or undertreated partner is common, so a second test is needed to confirm that the infection is cured. You can catch chlamydia multiple times, so get retested if you have a new partner. Screening for gonorrhea is also recommended in sexually active women under age 25.
- Men who have sex with men. Compared with other groups, men who have sex with men run a higher risk of acquiring STIs. Many public health groups recommend annual or more-frequent STI screening for these men. Regular tests for HIV, syphilis, chlamydia and gonorrhea are particularly important. Evaluation for hepatitis B also may be recommended.
- People with HIV. If you have HIV, it dramatically raises your risk of catching other STIs. Experts recommend immediate testing for syphilis, gonorrhea, chlamydia and herpes after being diagnosed with HIV. They also recommend that people with HIV be screened for hepatitis C. Women with HIV may develop aggressive cervical cancer, so experts recommend they have a Pap test at the time of the HIV diagnosis or within a year of becoming sexually active if they are under 21 and have HIV. Then, experts recommend repeating the Pap test every year for three years. After three negative tests, women with HIV can get a Pap test every three years.
- People who have a new partner. Before having vaginal or anal intercourse with new partners, be sure you've both been tested for STIs. However, routine testing for genital herpes isn't recommended unless you have symptoms. It's also possible to be infected with an STI yet still test negative, particularly if you've recently been infected.
STDs or STIs caused by bacteria are generally easier to treat. Viral infections can be managed but not always cured. If you are pregnant and have an STI, getting treatment right away can prevent or reduce the risk of your baby becoming infected. Treatment for STIs usually consists of one of the following, depending on the infection:
- Antibiotics. Antibiotics, often in a single dose, can cure many sexually transmitted bacterial and parasitic infections, including gonorrhea, syphilis, chlamydia and trichomoniasis. Typically, you'll be treated for gonorrhea and chlamydia at the same time because the two infections often appear together. Once you start antibiotic treatment, it's necessary to finish the prescription. If you don't think you'll be able to take medication as prescribed, tell your doctor. A shorter, simpler course of treatment may be available. In addition, it's important to abstain from sex until seven days after you've completed antibiotic treatment and any sores have healed. Experts also suggest women be retested in about three months because there's a high chance of reinfection.
- Antiviral drugs. If you have herpes or HIV, you'll be prescribed an antiviral drug. You'll have fewer herpes recurrences if you take daily suppressive therapy with a prescription antiviral drug. However, it's still possible to give your partner herpes. Antiviral drugs can keep HIV infection in check for many years. But you will still carry the virus and can still transmit it, though the risk is lower. The sooner you start HIV treatment, the more effective it is. If you take your medications exactly as directed, it's possible to reduce the viral load in the blood so that it can hardly be detected.
If you've had an STI, ask your doctor how long after treatment you need to be retested. Getting retested will ensure that the treatment worked and that you haven't been reinfected.
Partner notification and preventive treatment
If tests show that you have an STI, your sex partners — including your current partners and any other partners you've had over the last three months to one year — need to be informed so that they can get tested. If they're infected, they can then be treated.
Each state has different requirements, but most states require that certain STIs be reported to the local or state health department. Public health departments often employ trained disease intervention specialists who can help notify partners and refer people for treatment. Official, confidential partner notification can help limit the spread of STIs, particularly for syphilis and HIV. The practice also steers those at risk toward counseling and the right treatment. And since you can contract some STIs more than once, partner notification reduces your risk of getting reinfected.
Explore Mayo Clinic studies testing new treatments, interventions and tests as a means to prevent, detect, treat or manage this condition.
Coping and support
It can be traumatic to find out you have an STD or STI. You might be angry if you feel you've been betrayed, or ashamed if you might have infected others. At worst, an STI can cause chronic illness and death, even with the best care that's available. These suggestions may help you cope:
- Hold off placing blame. Don't assume that your partner has been unfaithful to you. One (or both) of you may have been infected by a past partner.
- Be honest with health care workers. Their job is not to judge you, but to provide treatment and stop STIs from spreading. Anything you tell them remains confidential.
- Contact your health department. Although they may not have the staff and funds to offer every service, local health departments have STI programs that provide confidential testing, treatment and partner services.
Preparing for your appointment
Most people don't feel comfortable sharing the details of their sexual experiences, but the doctor's office is one place where you have to provide this information so that you can get the right care.
What you can do
- Be aware of any pre-appointment restrictions. At the time you make the appointment, ask if there's anything you need to do in advance.
- Write down any symptoms you're experiencing, including any that may seem unrelated to the reason for which you scheduled the appointment.
- Make a list of all medications, vitamins or supplements you're taking.
- Write down questions to ask your doctor.
Some basic questions to ask your doctor include:
- What's the medical name of the infection or infections I have?
- How is the infection transmitted?
- Will it keep me from having children?
- If I get pregnant, could I give it to my baby?
- Is it possible to catch this again?
- Could I have caught this from someone I had sex with only once?
- Could I give this to someone by having sex with that person just once?
- How long have I had it?
- I have other health conditions. How can I best manage them together?
- Should I avoid being sexually active while I'm being treated?
- Does my partner have to go to a doctor to be treated?
What to expect from your doctor
Giving your doctor a complete report of your symptoms and sexual history will help your doctor determine how to best care for you. Here are some of the things your doctor may ask:
- What symptoms made you decide to come in? How long have you had these symptoms?
- Are you sexually active with men, women or both?
- Do you currently have one sex partner or more than one?
- How long have you been with your current partner or partners?
- Have you ever injected yourself with drugs?
- Have you ever had sex with someone who has injected drugs?
- What do you do to protect yourself from STIs?
- What do you do to prevent pregnancy?
- Has a doctor or nurse ever told you that you have chlamydia, herpes, gonorrhea, syphilis or HIV?
- Have you ever been treated for a genital discharge, genital sores, painful urination or an infection of your sex organs?
- How many sex partners have you had in the past year? In the past two months?
- When was your most recent sexual encounter?
What you can do in the meantime
If you think you might have an STI, it's best to abstain from sexual activity until you've talked with your doctor. If you do engage in sexual activity before seeing your doctor, be sure to follow safe sex practices, such as using a condom.
Sept. 21, 2021
Last Updated February 2nd, 2020
What is Salmon?
Salmon are ray-finned fishes belonging to the family Salmonidae, a group that also includes trout, char, grayling and whitefish. Salmon are native to the North Atlantic and North Pacific, including the saltwater of Alaska. There are six main types of salmon:
Chinook Salmon or King Salmon: This type of salmon is very tasty. It is considered healthy since it has a high fat content, and its flesh is rich in nutrients. The colour of the flesh varies from white to red.
Coho Salmon or Silver Salmon: The Coho Salmon is also called Silver Salmon because of its silver-coloured skin. Its bright red flesh is extremely nutritious and rich in Omega-3. Silver Salmon has an even more delicate texture than Chinook Salmon but almost the same flavour.
Pink Salmon: The most common Pacific salmon, usually canned before selling. Pink Salmon has light-coloured, mildly flavoured flesh and a very low fat content.
Red Salmon/Sockeye Salmon: Red Salmon has bright reddish-orange flesh. It is nutritious and very rich in flavour.
Salmo salar or Atlantic Salmon: Salmo salar is the only kind of salmon found in the Atlantic. Atlantic Salmon is usually farmed, as wild Atlantic stocks are limited.
Silverbrite Salmon or Chum Salmon: Chum Salmon is also called Dog Salmon because of its dog-like teeth. It has pale to medium-coloured flesh and a very low fat content.
What are the health benefits of Salmon?
There are several health benefits of salmon, since it is rich in so many minerals and nutrients. Some of the most important health benefits of salmon are as follows.
Improves the arterial function
Salmon is one of the best sources of the long-chain Omega-3 fatty acids EPA and DHA, which are essential for the body and have several health benefits. Some of the most common benefits of Omega-3 fatty acids are decreasing inflammation, reducing the risk of cancer, lowering blood pressure and improving the functioning of the cells that line the arteries. A 3.5-ounce (100-gram) serving of farmed salmon has 2.3 grams of long-chain Omega-3 fatty acids, whereas the same quantity of wild salmon has 2.6 grams. Adults are generally advised to take in 250-500 milligrams of combined DHA and EPA per day. It is very important to get Omega-3 fatty acids from food, since our bodies cannot make them.
Reduces the risks of heart diseases
Salmon is very helpful for keeping your cholesterol level in check. It helps maintain a healthy cholesterol level and hence reduces the risk of heart disease. Lower cholesterol levels are less likely to lead to artery blockage, strokes or heart attacks.
Rich in antioxidants
Astaxanthin, which belongs to the carotenoid family, is the predominant antioxidant in salmon and gives the fish its reddish tinge. Astaxanthin helps prevent the oxidation of LDL (Low-Density Lipoprotein), or "bad cholesterol", and hence helps reduce the risk of various heart diseases. The high antioxidant content also promotes immune health, removes free radicals and helps slow down the process of ageing.
Selenium-loaded salmon
Selenium, a trace element present in salmon, helps protect bone health by keeping bones strong. In people with autoimmune thyroid disease, selenium is also very helpful in decreasing thyroid antibodies.
Selenium in salmon also has anti-cancer properties, which are very helpful in preventing cancer.
Maintaining muscle mass
When you are losing weight, salmon helps you maintain muscle mass due to its high protein content. The protein in salmon also helps slow down the ageing process and speeds the healing of various injuries. Since a high protein intake promotes better functioning of the immune system, salmon can be included in the right amount in an everyday diet.
Salmon is a great source of Vitamin B
Salmon is a rich source of B vitamins. The most common types of Vitamin B present in 100 grams of salmon are listed below, along with the percentage of the RDI (Regular Dietary Intake) that they satisfy.
- Vitamin B1 (thiamin): 18% of the RDI
- Vitamin B2 (riboflavin): 29% of the RDI
- Vitamin B3 (niacin): 50% of the RDI
- Vitamin B5 (pantothenic acid): 19% of the RDI
- Vitamin B6: 47% of the RDI
- Vitamin B9 (folic acid): 7% of the RDI
- Vitamin B12: 51% of the RDI
Together, these vitamins help maintain a healthy nervous system and support optimal brain function. This array of vitamins also helps metabolise food into energy, produce and repair DNA, and reduce the inflammation that can lead to heart disease.
Rich in Potassium
Salmon is highly rich in potassium, which is very beneficial for maintaining low blood pressure, since potassium helps prevent water retention. A 3.5-ounce serving of salmon provides 18% of the RDI of potassium.
It helps in weight loss
The protein in salmon is very important for the regulation of various hormones responsible for appetite control. Consuming salmon keeps you feeling full and staves off hunger for a long time.
This helps cut down cravings for high-calorie foods and hence helps in losing weight.
Helps prevent cancer
Inflammation is a root cause of heart disease, lung and kidney disorders, diabetes and cancer. Salmon, with its huge array of minerals and nutrients, reduces the development of markers of tumours and inflammation, and hence helps in preventing cancer.
Salmon protects the brain health
Consuming salmon helps reduce the symptoms of depression, protects the health of the fetal brain, slows down the process of memory loss and reduces the risk of dementia.
Some healthy recipes of Salmon
Grilled Salmon with Avocado Sauce
Ingredients required for the preparation of the salmon are the following.
- 2 pounds of salmon fillet, nicely cut into 4 pieces
- 1 teaspoon of properly ground cumin
- 1 teaspoon of paprika powder
- 1 teaspoon of onion powder
- 1 teaspoon of chilli powder
- ½ teaspoon of garlic powder
- Sea salt as well as freshly ground black pepper for taste
Ingredients required for the preparation of the avocado sauce are:
- 2 avocados, roughly chopped
- 1 small red onion, diced
- 1 clove of garlic, minced
- Juice of 1 lemon
- 1 tablespoon of olive oil
- 1 tablespoon of minced cilantro
- Sea salt and freshly ground black pepper
– Take a bowl and make a mixture of the cumin, paprika, onion powder, chilli powder and garlic powder, and season the mixture with salt and pepper for taste.
– Rub the whole mixture on the salmon and refrigerate the marinated salmon for 20 minutes.
– Take another bowl, smash the avocado until it is smooth, then add all the remaining ingredients for the avocado sauce and stir the mixture until it is well blended.
– Take the refrigerated salmon out of the refrigerator.
– Preheat the grill over medium to high heat. Grill the salmon for 6 to 10 minutes, flipping once partway through. Serve the salmon with the avocado sauce.
Simple Herb Crusted Salmon
The ingredients required for the salmon are as follows:
- 2 fillets of salmon
- 1 heaped tablespoon of coconut flour
- 2 tablespoons of dried or fresh parsley
- 1 tablespoon of olive oil
- 1 tablespoon of dijon mustard
- Salt and pepper to taste
The ingredients required for the salad:
- 2 cups of arugula
- 1/4 onion, thinly sliced
- Juice of 1 lemon
- 1 tablespoon of white wine vinegar
- 1 tablespoon of olive oil
- Salt and pepper to taste
- Preheat the oven to 450 degrees.
- Place the salmon fillets on a foil-lined baking sheet. You can also use a parchment-lined baking sheet.
- Make a mixture of the olive oil and dijon mustard and rub it on the salmon.
- In another bowl, make a mixture of the coconut flour, parsley, salt and pepper.
- Sprinkle the topping on the salmon using a spoon, then use your hand to pat it into the salmon.
- Put it in the oven for 10-15 minutes, until the salmon is cooked. You can cook it longer to suit your preference.
Maple Bacon Salmon
Ingredients required for the salmon:
- 1 lemon, sliced properly
- ¼ pound of salmon fillets with skin
- ½ tablespoon of pink salt, black pepper and garlic paste
- 1 tablespoon of dijon mustard
- ⅓ cup of extra virgin olive oil
- 2 tablespoons of fresh lime juice
- 2 tablespoons of maple syrup
- Finely grated chives for seasoning
Ingredients required for the candied bacon:
- 3 tablespoons of maple syrup
- 1 tablespoon of packed brown sugar
- ¼ teaspoon of pink salt and black pepper. You can use garlic paste in the mixture too
- 6 slices of bacon
- Preheat the oven to 400 degrees.
- Take a baking dish, put the lemon slices in it and place the salmon on top.
- Toss 2 teaspoons of pink salt, black pepper and garlic paste or garlic powder over the salmon for seasoning.
- Take a medium-sized bowl and make a mixture of the mustard, oil, lemon juice, maple syrup and the remaining ½ teaspoon of pink salt, black pepper and grated or powdered garlic.
- Pour this mixture over the salmon and rub it in properly all over.
- Roast the salmon until it is cooked properly, about 20 to 35 minutes. The best way to check is to try flaking the salmon with a fork; if it flakes easily, it is perfectly cooked.
- Broil it for 3 minutes until it turns golden.
- Take a small bowl to make the mixture for the candied bacon.
- Mix the maple syrup with the brown sugar and ¼ teaspoon of pink salt, black pepper and garlic powder.
- Cook the bacon in a skillet over medium heat until it turns golden on both sides, about 4 minutes per side.
- Return the skillet to medium heat, add the bacon to the maple syrup mixture and cook until the liquid is absorbed to a good extent.
- Cook it for 3 to 4 minutes, until the bacon is glazed.
- You can now serve your relishing Maple Bacon Salmon.
No food better represents southern cooking than dark green, delicious collard greens. With their nutritious, tender leaves, it is no wonder gardeners love growing these leafy greens in their veggie beds. Learning how to grow collard greens is something you should consider if you want to add nutrition to your diet and expand your vegetable garden. Planting and caring for an entire plant takes a commitment. How long do collard greens take to grow? Growing collard greens from seed isn't something that is going to demand all of your attention. This plant is a hardy grower and isn't overly fussy about its environment.
The green leaves on this plant are celebrated in festivals in Georgia and are the state vegetable of South Carolina. Collard leaves are a must-have if you love southern cooking as much as we do, and you won't regret taking the time to learn how to plant collard greens seeds.
- What are Collard Greens?
- Everything to Know about Growing Collard Greens
- Collard Green Cultivars
- Managing Pests and Diseases on Collard Greens
- Preserving Collard Greens
What are Collard Greens?
Collard greens are also known as Brassica oleracea var. acephala. These plants are a part of the cabbage family and are closely related to Brussels sprouts, kale, broccoli, cauliflower, and more. Although they are mostly grown as annuals, collards are a biennial plant and produce loose leaves that make them less susceptible to many of the diseases that these types of plants have to deal with.
Brassica plants are appreciated because of their mellow flavors and high nutritional value. Collard greens, specifically, are high in fiber, calcium, manganese, folic acid, and vitamins A, C, and K. It is possible to lose small amounts of this nutrition during the cooking process, but the wonderful flavors make up for it. Growing collard greens at home is ideal because they tolerate hot and cool weather, although they prefer a cooler growing season to produce sweeter leaves.
These plants reach up to 36 inches in height and width. They are hardy in USDA hardiness zones six through 11. Collard greens likely descended from wild cabbages from Asia. Collard plants were eaten by Greek and Roman civilizations until they spread through the Middle East and Africa. Over time, collard greens made their way to the southern United States and became a staple food for African slaves. This crop has an important role in American history and has been passed down through generations to give us the collard green recipes we know and love today.
How long do collard greens take to grow? Let's get you started on this adventure and find out how easy growing collard greens can be.
Everything to Know about Growing Collard Greens
It's hard to figure out how to grow collard greens when you've never done it before. We have gathered all the gardening information you need to know and put it in one place for a guide that tells you what to do from start to finish.
How to Grow Collard Greens
It is your choice to start growing collard greens from seed either indoors or outdoors. If you prefer, purchase seedlings from a local garden center and start transplanting them in your garden when the time is ready. The ideal time to start collard green seeds indoors is four to six weeks before the last frost or when you want to transplant them outside. For an early spring planting, wait until the soil temperature is at least 45°F for germination to take place. For a late summer planting, find a time between the first frost and the first hard freeze of autumn for a winter harvest.
To plant collard greens, start with a couple of small pots filled with potting soil. Plant your seeds an eighth of an inch deep in the dirt and water them. The seedlings start to sprout after four to seven days. If sowing directly outdoors while it is still too frigid outside, you can use a cold frame or row covers to insulate the plants from light frost.
If you sow the seeds directly outside, keep spacing between each row at least 30 inches apart. Thin seedlings to 12 to 18 inches apart as they grow. Once your indoor seedlings are ready to put outside, or if you already have some transplants you picked up from a nursery, plant them in the prepared beds at the same depth you planted them in their containers. Space each transplant 12 to 18 inches apart from one another and water them thoroughly.
Caring for Collard Greens
Collard greens thrive in full sun, but a few hours of partial shade won't hurt them. However, they do have a list of soil requirements to stay happy. Collard greens enjoy well-draining and fertile soil that is rich in organic material. Some good organic matter to add to your soil includes compost, mulch, or other natural materials. If you plan to grow them in a raised bed, keep in mind that collard greens have deep roots that grow up to two feet.
Aside from that, collard greens grow fairly easily. They require about two inches of water every week, and you have to supplement it if it doesn't rain. As the growing season progresses, consider scattering fertilizer next to the plants to give them an additional boost. Grow collard greens in containers that are at least one foot deep.
If you leave your greens in the ground too long, they start bolting and producing flowers. The leaves turn bitter after they bolt, and you will have wasted this year's harvest. If you missed your harvest date, stop watering the plants after they bolt and let the seed pods turn brown and dry out. If you crack one open and the seeds are black, they are ready for harvest, and you have a bunch of collard green seeds prepared for the next growing season.
How Long Do Collard Greens Take to Grow?
Technically, it is okay to harvest collard leaves whenever they reach the size that you like them. This process usually takes about 40 days but could be as early as 30 days.
If you pick them while growing, harvest only the outer leaves and let the inner, new leaves continue to grow. At the end of the season, either harvest the entire plant or continue to cut off the outer leaves as necessary.
Collard Green Cultivars
There are many different varieties of collards for you to choose from. Here is a list of a few of our favorite cultivars to grow in a vegetable garden.
Champion Collards
As an heirloom variety, Champion collards do well in pretty much any location. They produce high yields and are resistant to a multitude of diseases. They take up a lot of space and grow up to 34 inches tall.
Georgia Southern Collards
Another heirloom collard green is the Georgia Southern. It is ideal for busy homeowners because it is slow to bolt and ready for harvest in roughly 80 days. It also has an 80 percent germination rate.
Vates Collards
Vates are a long-time favorite in the United States. These plants were developed in Virginia during the Great Depression and produce long greenish-blue leaves that are ready in 65 to 75 days. They grow about 32 inches tall.
These collard greens are a hybrid of the Vates and Georgia Southern varieties. They even have a little bit of kale DNA thrown into the mix. Be mindful if you pick this one, though, because it grows four to six feet tall.
Tiger Collards
Hybrid plants are great because they take the best qualities from one plant and mix them with the best qualities from another. Tiger collard greens have hardy, upright leaves that are ready to harvest in 60 days. Their leaves regrow easily, and they are known for their exceptional taste.
Managing Pests and Diseases on Collard Greens
If collards are appealing to you, imagine how appealing they are to a considerable number of pests and fungal diseases. There are ways to keep these issues from happening, but expect to see a few animals and insects hanging around these plants. If you live in an area close to the country, you might have a deer wander into your yard.
Deer love to snack on collard greens and are smart enough to wait until after a couple of light frosts to get the sweeter leaves. You might have to create a barrier or protect your veggies with row covers to deter them, or you could make a homemade deer repellent recipe to spritz in the area.
Aphids
Aphids are a common problem that gardeners face when they grow collard greens. The bugs have pear-shaped bodies and suck the fluids out of the leaves of many plant species. The smartest way to get rid of woolly aphids or regular ones is to blast them off with a steady stream of water or spray insecticidal soap.
Cabbage Worms and Cabbage Loopers
Cabbage worms and loopers are types of moth caterpillars that snack on collard leaves and other brassica plants. Pesticides aren't the only solution to kill them; some of their natural predators are parasitic wasps. You could also companion plant the greens with zinnias and alliums.
Black Rot
Black rot is a common disease found on collards, cabbage, and kale. The disease shows its first signs as dull, yellow areas on the leaves, which become brown and dry over time. When the disease advances, it looks like the entire plant is scorched, and the veins and stems are infected with a black pathogen. Black rot is caused by high temperatures and thrives in rainy, humid conditions. Once your crops have it, it is nearly impossible to get rid of. Rotate your crops every three or four years and make sure you only purchase certified pathogen-free seeds and transplants.
Clubroot
Clubroot is another disease you hopefully won't find on your collard greens. It results in stunted plant growth and leaves plants yellow and wilted during the day.
Downy Mildew
Downy mildew probably isn't a foreign disease if you have been gardening a long time. It shows up as tan and yellow spots on the upper surfaces of your collard green leaves. It also produces a fluffy grayish-white mold on the underside of the leaves and causes them to fall off.
Treat this disease with a commercial fungicide found at your local hardware store, garden center, or online.
Preserving Collard Greens
If you harvest only the outer leaves during the growing season, store them in a plastic or airtight bag in the crisper drawer of your fridge to keep them fresh for about a week. Only wash your collard greens right before you plan to eat them. Vegetables don't usually keep in the fridge for long, so the freezer is a better option for long-term storage.
A better way to preserve your collard greens is to freeze them. After you wash and dry the leaves, remove the fibrous center stem with a sharp knife or kitchen scissors. Boil the greens for three minutes and then immediately plunge them into an ice-water bath. This quick process is called blanching; it helps preserve the flavor of the leaves and destroys enzymes that make them lose their color. Dry the greens thoroughly before packing them into a freezer bag, removing the air, and storing them in your freezer for up to a year.
Cooking with Collard Greens
The best part of growing any fruit or vegetable is developing your own recipes or cooking those you already know. Smaller leaves are best eaten raw, while the larger greens benefit from a little bit of heat. What better way to embrace your harvest than to cook some traditional southern collard greens?
Tear the greens away from their central stem, roll them up, and cut them horizontally into smaller pieces. Wash the bunches of greens to remove any sand or grit, and rinse the ham hock. Add the ham to a large pot and cover it with water. Cook the ham hock over medium-high heat for 45 minutes, or until the meat is tender. Stir in the greens and an additional four cups of water. Add the remainder of the ingredients and let the collards simmer over medium-low heat for two hours, or until the water has evaporated enough to barely cover the greens.
Collard greens have one of the most important backgrounds in American history. There is no better way to honor this ingredient than to grow them in our personal gardens and cook with them regularly. These delicious, tender greens don't require a lot of work but deliver a huge burst of flavor in the end. If learning how to grow collard greens has given you a deeper appreciation for this veggie, share this guide for growing collard greens on Facebook and Pinterest.
The course consists of lectures, readings, discussions, panels of guest speakers, and group and individual projects. The purpose of the lectures, readings, discussions, and panels of guest speakers is to explore a variety of aspects of adolescence and adolescent health. The group and individual projects are meant to help students develop skills to work in multi-disciplinary teams, analyze adolescent health concerns through conceptual frameworks, and recommend effective solutions through interventions.

This course introduces students to the principles, laws, and policies that influence the use of animal and alternative, non-animal-based (humane sciences) research techniques in biomedical research.

Healthcare professionals around the world are experiencing increasing pressures from patients, communities, governments, and payers to demonstrate value. Controlling costs, providing high-quality outcomes, assuring access, and enhancing patient satisfaction have become leading issues. In addition, services increasingly are provided within the context of multi-disciplinary teams and complex organizational and financial arrangements. Fiscal and other resource constraints abound. Meeting these challenges within healthcare settings requires leadership and managerial skills in addition to clinical expertise.

This seminar-style course challenges students to look closely at the environment of Baltimore City's complex food systems and to consider what it would take to improve these systems to assure access for all to nutritious, adequate, affordable, and sustainably produced food. Students "go backstage" with tour guides at sites including a supermarket, a corner store, an emergency food distribution center, and a farm connected to the city school system.
Students learn about the types of food available at these sites, who uses them, relevant aspects of their operations, and site-specific barriers to and opportunities for providing access to healthier food, ideally with reduced environmental harm. They also conduct oral history interviews about food with elderly city residents to understand how food access has changed over the years. Class discussions, lectures, readings, and guest speakers support critical thinking and provide background and frameworks for understanding the experiential sessions. Lectures and discussions consider the applicability of lessons gained from the study of Baltimore to other area food systems. Throughout, students consider the relative impacts of access, demand, and stakeholder interests, and the relative strengths of voluntary, governmental, legal, and other strategies. For their final papers, students apply the Intervention Decision Matrix to selected aspects of the city's food systems and food environments, identifying challenges and opportunities for change, incorporating lessons learned from other food systems and programs, and discussing implications beyond Baltimore.

Covers the basics of R software and the key capabilities of the Bioconductor project (a widely used open-source and open-development software project for the analysis and comprehension of data arising from high-throughput experimentation in genomics and molecular biology, rooted in the open-source statistical computing environment R), including importation and preprocessing of high-throughput data from microarrays and other platforms. Also introduces statistical concepts and tools necessary to interpret and critically evaluate the bioinformatics and computational biology literature. Includes an overview of preprocessing and normalization, statistical inference, multiple comparison corrections, Bayesian inference in the context of multiple comparisons, clustering, and classification/machine learning.
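The Bioconductor course above lists multiple comparison corrections among its topics. As a purely illustrative aside (in Python rather than the R environment the course itself uses, and with made-up p-values), the widely taught Benjamini-Hochberg false discovery rate procedure can be sketched as:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans marking which hypotheses are rejected
    under the Benjamini-Hochberg false discovery rate procedure."""
    m = len(p_values)
    # Sort the p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            max_k = rank
    # Reject the hypotheses with the max_k smallest p-values.
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            rejected[idx] = True
    return rejected

# Invented example p-values, not data from the course.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))
# → [True, True, False, False, False, False, False, False]
```

In practice one would use a tested implementation such as R's `p.adjust` with `method = "BH"`; the sketch only shows the ranking logic.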
This course provides a broad understanding of the application of biostatistics in a regulatory context. Reviews the relevant regulations and guidance documents. Includes topics such as basic study design, target population, comparison groups, and endpoints. Addresses analysis issues with emphasis on the regulatory aspects, including issues of missing data and informative censoring. Discusses safety monitoring, interim analysis, and early termination of trials with a focus on regulatory implications.

This course introduces students to the origins, concepts, and development of community-based primary health care through case studies from both developing and developed countries. As in clinical bedside teaching, we use real cases to help students develop problem-solving skills in practical situations. We also discuss participatory approaches in the organization and management of health services and other factors such as equity, socio-cultural change, environmental protection, and the process of community empowerment. Included among this course's lecture materials are several recorded presentations by Carl Taylor, a giant in the field of international health. Dr. Taylor recorded the presentations for this course in January of 2008, just two years before he passed away in February of 2010.

This course focuses on the core processes of growth and development in early to middle childhood. Considers developmental theories, issues, and research findings related to physical growth and cognitive, emotional, and social development. Considers appropriate instruments to assess growth and development. Evaluates the efficacy of popular early intervention programs designed to enhance development in at-risk populations of children.

Describes how economic theory is linked to economic evaluation techniques like cost-benefit and cost-effectiveness analysis and introduces students to many concepts that are specific to economic evaluation.
Introduces students to the many varieties of economic evaluation to establish a common terminology. Discusses cost-benefit analysis with a demonstration of how this type of evaluation is most clearly linked to economic theory. Explores other theories and concepts, including cost measurement, benefit valuation, and incremental decision-making. Finally, explores recommendations on performing economic evaluations that are made in the United States, with a focus on how these are related to underlying economic theory and other concepts.

Confronting the Burden of Injuries – A Global Perspective is a course offered by the Department of International Health and the Department of Health Policy and Management at the Bloomberg School of Public Health, Johns Hopkins University. This course is intended to guide students interested in working on injury control in areas with little to no tradition of injury prevention from a public health perspective. Students will learn to define the injury problem and assess its magnitude; identify data sources and assess the quality of the data; identify which agencies or institutions should be involved in the solution of the problem; identify which interventions are in place and which need to be implemented and evaluated; produce a strategic plan for the establishment and/or improvement of injury prevention programs in such areas; and present such a plan to authorities in a compelling manner.

There is much controversy and anecdotal information about popular diets and dietary supplements, but all too often little scientific or controlled clinical data. We examine the science behind normal mechanisms of weight control, and how weight-loss diets are constructed and work. The aim of the course is to acquire the knowledge to critically appraise a weight-control diet or dietary supplement and choose the best plan for success, both in the short term and the long run.
Students taking the actual class will, in addition to learning the lecture material presented here, complete in-class assignments where they choose a popular diet or supplement, research the scientific literature on this diet or supplement, and present a critical appraisal of its validity and efficacy.

The workshop is intended for doctoral students in the health and social sciences who are at the stage of developing a research proposal. Participants will gain skills in the design of conceptually cogent and methodologically rigorous dissertation proposals. The workshop has an emphasis on topics that relate to Africa, but its lessons can be applied to a broad range of research issues.

This course provides a broad overview of diverse topics in the practice of and approaches to humane animal experimentation. It addresses such issues as experimental design (including statistics and sample-size determination), humane endpoints, environmental enrichment, post-surgical care, pain management, and the impact of stress on the quality of data. It was developed by CAAT director Alan Goldberg and James Owiny, the training and compliance administrator of the Johns Hopkins University animal care and use committee, along with Christian Newcomer, associate provost for animal research and resources at Hopkins. The self-paced course consists of 12 audio lectures with accompanying slides, resource lists, and study questions.

Examines health issues, scientific understanding of causes, and possible future approaches to control of the major environmental health problems in industrialized and developing countries. Topics include how the body reacts to environmental pollutants; physical, chemical, and biological agents of environmental contamination; vectors for dissemination (air, water, soil); solid and hazardous waste; susceptible populations; biomarkers and risk analysis; the scientific basis for policy decisions; and emerging global environmental health problems.
Introduces the basic methods for infectious disease epidemiology and case studies of important disease syndromes and entities. Methods include definitions and nomenclature, outbreak investigations, disease surveillance, case-control studies, cohort studies, laboratory diagnosis, molecular epidemiology, dynamics of transmission, and assessment of vaccine field effectiveness. Case studies focus on acute respiratory infections, diarrheal diseases, hepatitis, HIV, tuberculosis, sexually transmitted diseases, malaria, and other vector-borne diseases.

Introduces the theory and application of modern, computationally based methods for exploring and drawing inferences from data. Covers re-sampling methods, non-parametric regression, prediction, and dimension reduction and clustering. Specific topics include Monte Carlo simulation, the bootstrap, cross-validation, splines, local weighted regression, CART, random forests, neural networks, support vector machines, and hierarchical clustering. De-emphasizes proofs, replacing them with extended discussion of the interpretation of results, using simulation and data analysis for illustration.

Lectures and small group discussions focus on ethical theory and current ethical issues in public health and health policy, including resource allocation, the use of summary measures of health, the right to health care, and conflicts between autonomy and health promotion efforts. Student evaluation is based on class participation, a group project, and a paper evaluating ethical issues in the student's area of public health specialization.

Ethics of Human Subject Research (2 credits) is offered by the Department of Health Policy and Management and the Distance Education Division, Johns Hopkins Bloomberg School of Public Health, and the Phoebe R. Berman Bioethics Institute, Johns Hopkins University. The course introduces students to the ethics of human subject research.
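The computational methods course described above lists re-sampling methods such as the bootstrap among its topics. As an illustration only (in Python, with invented sample data, not material from the course), a percentile bootstrap confidence interval for a mean can be sketched as:

```python
import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic
    (the mean by default): resample with replacement, recompute the
    statistic each time, and take the empirical percentiles."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Invented sample data for demonstration.
sample = [4.9, 5.1, 5.3, 4.7, 5.0, 5.4, 4.8, 5.2]
low, high = bootstrap_ci(sample)
print(f"95% CI for the mean: ({low:.2f}, {high:.2f})")
```

The percentile method shown here is the simplest bootstrap interval; the course also covers refinements and other re-sampling schemes such as cross-validation.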
Ethical theory and principles are introduced, followed by a brief history of research ethics. Topics covered in lectures and moderated discussions include informed consent for research participation, the role and function of institutional review boards, just selection of research subjects, ethical aspects of study design, and privacy and confidentiality. Student evaluation will be based on participation in moderated discussions, an informed consent exercise, and a written case analysis.

Introduces issues and programmatic strategies related to the development, organization, and management of family planning programs, especially those in developing countries. Topics include the social, economic, health, and human rights rationale for family planning; identifying and measuring populations in need of family planning services; social, cultural, political, and ethical barriers; contraceptive methods and their programmatic requirements; strategic alternatives, including integrated and vertical programs and public and private sector services; information, education, and communication strategies; management information systems; and the use of computer models for program design.

This course provides an understanding of the complex and challenging public health issue of food security in a world where one billion people are under-nourished while another billion are overweight. Explores the connections among diet, the current food and food animal production systems, the environment, and public health, considering factors such as economics, population, and equity. Case studies are used to examine these complex relationships as well as alternative approaches to achieving both local and global food security and the important role public health can play. Guest lecturers include experts from a variety of disciplines and experiences.
Invasive adenocarcinoma is the most common type of colon cancer. The tumour arises from the glands normally found on the inside surface of the colon. Any part of the colon, from the cecum to the rectum, can be involved. In many cases, this cancer starts in a pre-cancerous condition called an adenoma. Common types of adenomas in the colon are tubular, tubulovillous, villous, and sessile serrated.

The colon is part of the gastrointestinal tract, which also includes the mouth, esophagus, stomach, small bowel, and anus. The colon is a long hollow tube that starts at the small bowel and ends at the anal canal. It is divided into sections: the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. The functions of the colon are to absorb water from the food that we eat and to move waste out of the body. The wall of the colon is made up of six layers of tissue.

The diagnosis of invasive adenocarcinoma is usually made after a small sample of the tumour is removed in a procedure called a biopsy. A test called immunohistochemistry may be performed to confirm the diagnosis. After the tumour has been removed completely, it will be sent to a pathologist who will prepare another pathology report. This report will confirm or revise the original diagnosis and provide additional important information such as tumour size, extension, and spread of tumour cells to lymph nodes. A test to look for mismatch repair proteins may also be performed (see the mismatch repair proteins section below). This information is used to determine the cancer stage and to decide if additional treatment is required.

Grade is a term pathologists use to describe how different the cancer looks compared to the normal tissue in the colon.
Because the normal epithelial cells in the colon connect together to make glands, invasive adenocarcinoma is usually divided into four grades based on how much of the tumour is made of glands.

Mismatch repair (MMR) is a system inside all normal, healthy cells for fixing mistakes in our genetic material (DNA). The system is made up of different proteins, and the four most common are called MSH2, MSH6, MLH1, and PMS2. A loss of one of these proteins increases the risk of developing cancer. Pathologists order mismatch repair testing to see if any of these proteins are lost in a tumour. If mismatch repair testing has been ordered on your tissue sample, the results will be described in your pathology report.

Each cell in your body contains a set of instructions that tell the cell how to behave. These instructions are written in a language called DNA, and the instructions are stored on 46 chromosomes in each cell. Because the instructions are very long, they are broken up into sections called genes, and each gene tells the cell how to produce a piece of molecular machinery called a protein. If the DNA becomes damaged or if it cannot be read accurately, the cell will be unable to produce the proteins it requires to function normally. An area of damaged DNA is called a mutation, and mutations are one of the most common causes of cancer in humans. Mismatch repair proteins keep cells healthy and functioning normally by fixing these mutations when they happen. The four mismatch repair proteins MSH2, MSH6, MLH1, and PMS2 work in pairs to fix damaged DNA. Specifically, MSH2 works with MSH6, and MLH1 works with PMS2. If one protein is lost, the pair cannot function normally.

For most people, cancer develops as a result of both environmental factors (for example, smoking) and genetic factors. These tumours are called 'sporadic' because we cannot predict exactly which people will develop them and when.
Some people, however, inherit genetic changes that put them at a much higher risk of developing cancer. These people are said to have a hereditary cancer syndrome. The most common syndrome associated with invasive adenocarcinoma of the colon is Lynch syndrome, which is caused by a genetic change that results in the loss of one of the mismatch repair proteins. Another name for this syndrome is hereditary nonpolyposis colorectal cancer (HNPCC). The most common genetic changes associated with Lynch syndrome involve the genes that produce MLH1 and MSH2; a small number of people with Lynch syndrome will show genetic changes involving MSH6 and PMS2. People with Lynch syndrome are at high risk for developing adenocarcinoma of the colon at an early age. Women with Lynch syndrome are also at risk for developing ovarian and endometrial cancer at an early age. Other types of cancers associated with Lynch syndrome include stomach, liver, bladder, skin, and brain.

Muir-Torre is a syndrome closely related to Lynch syndrome. People with Muir-Torre syndrome are at high risk for developing a type of skin cancer called sebaceous carcinoma. They are also at risk for developing multiple non-cancerous skin tumours called sebaceous adenomas.

The most common way to test for mismatch repair proteins is to perform a test called immunohistochemistry. This test allows pathologists to see if the tumour cells are producing all four mismatch repair proteins. If the tumour cells are not producing one of the proteins, your report will describe this protein as "lost" or "deficient". Because the mismatch repair proteins work in pairs (MSH2 + MSH6 and MLH1 + PMS2), two proteins are often lost at the same time. If the tumour cells in your tissue sample show a loss of one or more mismatch repair proteins, you may have inherited Lynch syndrome and should be referred to a genetic specialist for additional tests and advice.
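The pairing logic described above (MSH2 with MSH6, MLH1 with PMS2) lends itself to a small illustration. The following sketch is purely educational and hypothetical, not a clinical tool; the function name and the output wording are invented here, and only the pair relationships and the referral advice come from the text above:

```python
# Each mismatch repair protein and its partner: loss of one protein
# usually takes its partner down with it.
PARTNERS = {"MSH2": "MSH6", "MSH6": "MSH2", "MLH1": "PMS2", "PMS2": "MLH1"}

def interpret_mmr_ihc(lost_proteins):
    """Summarize an immunohistochemistry result for the four MMR proteins.

    `lost_proteins` is the set of proteins the tumour cells are NOT
    producing. Returns a (status, note) pair."""
    lost = set(lost_proteins)
    if not lost:
        return ("MMR proficient (intact)",
                "All four proteins present; no referral triggered by IHC.")
    partners = {PARTNERS[p] for p in lost}
    note = ("Loss detected; referral to a genetic specialist is "
            "recommended to assess for Lynch syndrome.")
    if not partners <= lost:
        # A lost protein whose partner is retained is an unusual pattern.
        note += (" Note: partner protein(s) "
                 + ", ".join(sorted(partners - lost))
                 + " retained, a pattern worth re-review.")
    return ("MMR deficient", note)

status, note = interpret_mmr_ihc({"MLH1", "PMS2"})
print(status)  # → MMR deficient
```

Real reports are interpreted by a pathologist in context; the sketch only shows why two proteins are often reported as lost together.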
After the tumour has been removed completely, your pathologist will measure it in three dimensions, although only the largest dimension is typically included in your report. For example, if the tumour measures 5.0 cm by 3.2 cm by 1.1 cm, the report may describe the tumour size as 5.0 cm in the greatest dimension.

All invasive adenocarcinomas start in the mucosa on the inside surface of the colon. The layers of tissue below the mucosa include the submucosa, muscularis propria, subserosal adipose tissue, and serosa. The movement of cancer cells from the mucosa into the tissue below is called invasion. Tumour extension is a way of describing how far the cancer cells have travelled from the mucosa into the tissue below. Your pathologist will carefully examine your tissue to find the cancer cells that have travelled the furthest from the mucosa. Cancer cells that travel deeper into the wall are more likely to come back in the area of the original tumour (local recurrence) after treatment or to spread to a lymph node or a distant site such as the lungs. Tumour extension is also used to determine the tumour stage (see pathologic stage below).

Nerves are like long wires made up of groups of cells called neurons. Nerves send information (such as temperature, pressure, and pain) between your brain and your body. Perineural invasion is a term pathologists use to describe cancer cells attached to a nerve. Perineural invasion is important because cancer cells that have attached to a nerve can use the nerve to travel into tissue outside of the original tumour. Perineural invasion is also associated with a higher risk that the tumour will come back in the same area of the body (local recurrence) after treatment.

Blood moves around the body through long thin tubes called blood vessels. Another type of fluid called lymph, which contains waste and immune cells, moves around the body through lymphatic channels.
Cancer cells can use blood vessels and lymphatics to travel away from the tumour to other parts of the body. The movement of cancer cells from the tumour to another part of the body is called metastasis. Before cancer cells can metastasize, they need to enter a blood vessel or lymphatic channel. This is called lymphovascular invasion. Seeing lymphovascular invasion increases the risk that cancer cells will be found in a lymph node or a distant part of the body such as the lungs. The presence of cancer cells inside a large vein beyond the wall of the colon (outside of the thick bundle of muscle) is associated with a high risk that the cancer cells will eventually be found in the liver.

In the colon, a margin is any tissue that was cut by the surgeon in order to remove the tumour from your body. The colon is a long tube, and your surgeon needs to cut out a portion of the tube in order to remove the tumour. The two cut ends of the tube are called the proximal and distal margins. The radial margin is any tissue around the tube that needs to be cut. A margin is considered positive when there are cancer cells at the very edge of the cut tissue. A positive margin is associated with a higher risk that the tumour will recur in the same site after treatment.

A tumour deposit is a group of cancer cells that is separate from the main tumour but not in a lymph node. Tumour deposits are associated with a higher risk that the tumour cells will spread to another part of the body, such as the lungs, after treatment.

Tumour budding is a term pathologists use to describe either single cancer cells or small groups of cancer cells seen at the edge of the tumour. A score is assigned, either low, intermediate, or high, based on the number of buds seen under the microscope. A high score is associated with an increased risk that cancer cells will spread to another part of the body.
Occasionally the cancer cells are still contained in the adenoma that gave rise to the tumour. If the cancer cells are limited to the inner surface of the adenoma and the adenoma is removed completely, there is very little chance that the cancer will come back. The risk that the cancer will come back in the future is increased if your pathologist sees certain high-risk features under the microscope.

If you received treatment (chemotherapy, radiation therapy, or both) for your cancer before the tumour was removed, your pathologist will carefully examine the area of tissue where the tumour was previously identified to see if any cancer cells are still alive (viable). The most commonly used system describes the treatment effect on a scale of 0 to 3, with 0 meaning no viable cancer cells (all the cancer cells are dead) and 3 meaning extensive residual cancer with no apparent regression of the tumour (all or most of the cancer cells are alive).

Lymph nodes are small immune organs located throughout the body. Cancer cells can travel from the tumour to a lymph node through lymphatic channels located in and around the tumour (see lymphovascular invasion above). The movement of cancer cells from the tumour to a lymph node is called metastasis. Most reports include the total number of lymph nodes examined and the number, if any, that contain cancer cells. Your pathologist will carefully examine all lymph nodes for cancer cells. Lymph nodes that contain cancer cells are often called positive, while those that do not contain any cancer cells are called negative. Finding cancer cells in a lymph node is important because it is associated with a higher risk that cancer cells will be found in other lymph nodes or in a distant organ such as the lungs. The examination of lymph nodes is also used to determine the nodal stage (see pathologic stage below).
The pathologic stage for invasive adenocarcinoma is based on the TNM staging system, an internationally recognized system originally created by the American Joint Committee on Cancer. This system uses information about the primary tumour (T), lymph nodes (N), and distant metastatic disease (M) to determine the complete pathologic stage (pTNM). Your pathologist will examine the tissue submitted and give each part a number. In general, a higher number means more advanced disease and a worse prognosis.

Invasive adenocarcinoma is given a tumour stage between 1 and 4 based on the distance the cancer cells have travelled from the mucosa into the wall of the colon or surrounding tissues (tumour extension).

Invasive adenocarcinoma is given a nodal stage between 0 and 2 based on whether any cancer cells were found in the lymph nodes examined or on the finding of tumour deposits. If no cancer cells were found in any of the lymph nodes examined, the nodal stage is N0. If no lymph nodes were sent for pathologic examination, the nodal stage cannot be determined and is listed as NX.

Invasive adenocarcinoma is given a metastatic stage of 0 or 1 based on the presence of cancer cells at a distant site in the body (for example, the liver). The M stage can only be assigned if tissue from a distant site is submitted for pathological examination. Because this tissue is rarely submitted, the M stage usually cannot be determined and is listed as MX.
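As an illustration of how the nodal stage described above is assigned, here is a simplified sketch in Python. It is hypothetical and educational only: the report text defines N0 and NX, while the cutoff between N1 (1-3 positive nodes) and N2 (4 or more) follows the AJCC system in simplified form, omitting the subcategories and tumour deposit rules a pathologist actually applies:

```python
def nodal_stage(nodes_examined, nodes_positive):
    """Simplified, illustrative N-stage assignment for colon cancer.

    Real AJCC staging has subcategories (N1a/N1b/N1c, N2a/N2b) and
    also counts tumour deposits; this sketch only shows the overall
    logic described in the report text."""
    if nodes_examined == 0:
        return "NX"          # no nodes submitted: stage cannot be determined
    if nodes_positive == 0:
        return "N0"
    if nodes_positive <= 3:  # 1-3 positive regional lymph nodes
        return "N1"
    return "N2"              # 4 or more positive regional lymph nodes

print(nodal_stage(nodes_examined=14, nodes_positive=2))  # → N1
```

Actual staging is always performed by the pathologist from the full microscopic findings, not from node counts alone.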
Few of us ever come as deeply under the influence of another person as Charlotte Salomon came under that of Alfred Wolfsohn. To his charismatic teachings we owe the existence of one of the great works of art of the twentieth century. One tie that bound them to each other was the movies. A new exhibition shows how.

Photo from Sheila Braggins, Alfred Wolfsohn: the man and his ideas, privately published in September 2003

The German Jewish voice teacher Alfred Wolfsohn (1896-1962) was a world-class influencer who exercised his influence only in person. He did not have many followers, but those who fell under his spell were never the same again. They called him by his initials AW, pronounced Ah-Vay, suspiciously close to the German pronunciation of Jahweh. What he had to offer was not a message or a product or a feel-good technique to get you into your comfort zone. Wolfsohn was discomfort personified. He put you on a hard climb toward your own utmost potential. He urged and spurred and shamed his followers to aim beyond themselves, at prodigious creations beyond anything anyone else had ever dreamed of.

As he told it, his approach came into being as a self-discovered means of overcoming a life-changing trauma of his own. Serving in the German army in the First World War at the age of seventeen, he underwent an experience of guilt-ridden, unimaginable horror following an attack on the French front. These were not empty phrases for Wolfsohn. It took long, painful years, but the struggle brought him to the saving realization that the sounds he had heard dying soldiers make were like the crying of babies – raw, unmediated expression. Between these extremes at the outer edges of life we use our voices differently, in socially dictated modes. This awareness allowed Wolfsohn, without insisting on a theory, to work with singers on far broader terms than conventional voice training or performance technique.
Our voices can do much more than produce the approved speech or singing we like to hear; their unused capacities are linked to personality traits that get suppressed along with our unlearned sounds. Wolfsohn pushed his pupils to explore and exploit these resources. One result of his training was that singers could extend their vocal capabilities beyond beauty, to ranges of five octaves, for some even eight. (A 30-minute opera for the Roy Hart Theatre, which puts Wolfsohn's ideas into performative and pedagogical practice, is Eight Songs for a Mad King, by Peter Maxwell Davies.)

Training singers was his profession, but Wolfsohn worked with – worked on is probably more like it – anyone at all with whom he had rapport. He had a guiding motto that he took away from his trauma and its aftermath: "One must go into oneself first, to be able to go outside oneself." He could teach this, and press toward its implications, with anyone.

The Wolfsohn follower who is now best known is an artist: Charlotte Salomon (1917-43). Following a period of highly intensive contact with him in Berlin, when she was 20 and he twice her age, she was sent to her grandparents in the south of France by her father and stepmother, in flight from Nazi Germany. There she found her grandmother in mortal despair and was unable to keep her from jumping out a window to her death. Her odious grandfather then revealed to her the family secret that her mother, along with seven other relatives, had also committed suicide, and prompted her to do the same.

Building on and benefiting from Wolfsohn's example, Charlotte chose life. Thanks to him this did not require years of soul-searching; she did it at once. With her own life, intertwined with his, as a medium, she wrote and painted 1325 gouaches and sheets of tracing paper in an unprecedented, unparalleled ensemble of texted art, in part to the accompaniment of music.
She numbered 769 of the gouaches, titling them, with some 200 tracing sheets, not Life or Death? but Leben? oder Theater? (Life? or Theater?, a "Singspiel in three colors"). I read this as a corrective reaction to Wolfsohn's vitalism. The Wolfsohn solution, it says, is not an immersion into normal life, but into a part in a performance with a one-to-one relation to reality. Those who knew her described Charlotte as shy to the point of reclusiveness. Her creation, and her persona in it, Charlotte Knarre, gave her just the distance she needed to put life itself at her command.

The Alfred Wolfsohn character in Life? or Theater? is called Amadeus Daberlohn. The manuscript of a book he wrote in the 1930s, Orpheus, oder Der Weg zu einer Maske (Orpheus, or the way to a mask), is given a cameo role in the story. A passage on singing as expression appears both in Leila Vennewitz's translation of Life? or Theater? (Daberlohn) and in that of Marita Günther, revised by Sheila Braggins, of Orpheus (Wolfsohn).

Wolfsohn's understanding of the voice and the self did not stop at the boundaries of individuality. He was intent on integrating his ideas into larger realms and hooking them onto outside processes. The expansive psychology of Carl Jung provided a large metaphysical space for his method, while in the movies he saw a mechanical tool for furthering the aim of going into and out of the self. After trying to digest this typically abstruse message, by all means treat yourself for four and a half minutes to Eleanor Powell's soft-shoe masterpiece finale in "Broadway Melody of 1938." And then indulge in Wolfsohn's perfectly understandable association. (The documentary on Würzburg Cathedral, with the old lady praying, has not yet been located.)

The above is by way of a recommendation that you visit the delightful and insightful exhibition in the Jewish Historical Museum, Charlotte Salomon in close-up: on the influence of cinema on Life? or Theater?
It is the work of curator Mirjam Knotter, who also signed for the unrepeatable 2017/18 exhibition of the complete work. Her new display is the first of a planned series of thematic exhibitions on Life? or Theater? It opened on 13 March 2020, the day when the Dutch prime minister proclaimed the coronavirus lockdown, giving the exhibition an initial, record-breaking run of two hours. Fortunately it has now reopened and can be visited until 22 November. The exhibition shows that cinema meant even more to Charlotte than the meanings assigned to it by Alfred Wolfsohn. Here is one of her great visual finds: Sending Charlotte to the south of France turned out to be a fatal mistake. In 1939 Wolfsohn escaped to London and Charlotte’s father Albert Salomon and his second wife Paula Lindberg to Holland, where they survived the German occupation. When the Germans took over the occupation of the south of France from the Italians, Charlotte was taken to Auschwitz, where she was murdered on 10 October 1943. After the war Albert and Paula retrieved her magnum opus from the house of an American woman who had given Charlotte hospitality after the deaths of her grandparents and to whom Life? or Theater? is dedicated, Ottilie Moore. Life? or Theater? can be read in a number of print editions, in German, English, Dutch, French and Italian. Most conveniently and completely, it is available on the website of the Jewish Historical Museum in Amsterdam, where the work is preserved. Go to https://charlotte.jck.nl/ for the German original and Dutch and English translations, with vocal readings and the overlay tracing sheets in place on order. Thanks to the research and generosity of the Jewish Historical Museum, I can share with you links to seven of the films Charlotte could – probably would – have seen. Das Cabinet des Dr.
Caligari (1920) Der Letzte Mann (1924) The Lodger (1927) Berlin: Die Sinfonie der Großstadt (1927) Menschen am Sonntag (1930) Mädchen in Uniform (1931) The museum is screening them on successive Friday afternoons. https://jck.nl/en/exhibition/charlotte-salomon-close © Gary Schwartz 2020; the images from Life? or Theater? courtesy of the Jewish Historical Museum. Published on the Schwartzlist on 8 July 2020 Mirjam Knotter and I have been collaborating on various projects for fifteen years. In 2006 on the Jewish Historical Museum exhibition The “Jewish” Rembrandt: the myth unravelled and in coupled presentations at a Rembrandt symposium in Berlin: “Rembrandt’s Hebrew” (Mirjam, on a subject to which she dedicated a master’s thesis) and “Rembrandt’s Hebrews” (me). We are now working on an exhibition for the Jewish Museum and Tolerance Center in Moscow: Rembrandt seen through Jewish eyes. We also talk a lot about Charlotte Salomon. What our prime minister calls an “intelligent lockdown” has not brought me those seas of free time that everyone else seems to be enjoying. My situation is well illustrated in this image that some good soul put up on Facebook. There is work on the three exhibitions of which I am guest curator. Mainly, though, I have been writing a book on this painting: It has an utterly fascinating provenance (King Willem II of the Netherlands; the grand ducal court at Weimar; the Weimar museum; a bunch of burglars; a German seaman; a plumber in Dayton, Ohio; the U.S. government, in care of the National Gallery of Art; the West German government, in care of the Wallraf-Richartz Museum; Hereditary Grand Duchess Elisabeth von Sachsen-Weimar-Eisenach) and critical history (an unquestioned Rembrandt self-portrait until September 1968, since then in attributional limbo). Any information you may have about it is very welcome. Another self-portrait. Yesterday Loekie and I received the maiden issue of a new glossy by and about our eleven-year-old grandson Abel. 
He is phenomenally good at transforming photos into digital images like this. I am equally impressed by his creativity as a graphic designer. Why waste the stem of the letter K when you can make it do double duty as an I by putting a dot on top of it? The magazine is called IK (Me). No subscription information yet. Another claim on my time has come from completely unexpected quarters. On both sides of my family I have come into contact over the past month with second cousins of whose existence I was unaware and who have been conducting extensive family research. Tomas Kertész of Stockholm is the great-grandson of the brother of my maternal great-grandfather Isaac Friedman, from Budapest; and Howard Rosenblum of Ottawa the great-grandson of the sister of my paternal grandfather Albert Schwartz, from a village in the south of Poland. I now know the names of numerous relatives, and have learned very upsetting things about them. From Tomas I found out that ten members of my family were killed in the Holocaust. They were the brother (aged 79), sister-in-law (73), nephews and nieces of my great-grandfather, whom I knew well as a child and who attended my bar-mitzvah before he died, 93 years old, in Brooklyn. He lived with my grandparents, and they must have known about these deaths but never told me, and perhaps not even my parents, about them. Why I will never know. Finding out about this for the first time at the age of 80 gives that much more of a jolt. My paternal grandparents lived a few blocks away from my mother’s parents, in East New York, Brooklyn, where I grew up. Their house nearly abutted the back yard of the house where according to the 1940 census my grandfather’s sister lived. Her family name, which must also have been his, was Szwarcberg. I knew that the name had been truncated on Ellis Island, but not that it was spelled this way.
What upset me is that my grandfather, whom we visited every Shabbat and holiday for years, on the same walks that brought us to my mother’s parents, never introduced us to or even told us about his sister, my father’s aunt. My grandfather’s religious intolerance, stubbornness and authoritarianism were legendary, but that they led to so extreme a family split causes fresh pain. In the very act of discovering these relatives, I also lose them. Responses in the Reply box below (these will be viewed by all visitors to the site) or personally to [email protected] are always appreciated and will be answered. So will donations. Please do send a donation.
Researching Conversos & Crypto-Jews of the Southwest & New World History & Definitions - Definitions: Sephardim - Conversos - Marranos - Crypto-Jewish definitions and a historical overview with a bibliography from JewishGen, an affiliate of the Museum of Jewish Heritage. - Anusim (Crypto Judaism) Page of Shulamith HaLevy - Lexicon, articles, essays, and other resources from the Crypto-Jewish scholar. - Society for Crypto-Judaic Studies - Created to foster research and networking of information on the historical and contemporary development of crypto Jews of Iberian origin. Be sure to check out the papers in HaLapid, the Society's journal, including annual conference proceedings. - The Bloom Southwest Jewish Archives - Housed at the University of Arizona Library, the research collection is dedicated to collecting and recording the history of Crypto-Jews and other pioneer Jews in the Desert Southwest, covering Arizona, New Mexico, and West Texas. - Columbus Was a Catalan-Speaking Jew, U.S. Scholar Says - Linguistics professor Estelle Irizarry asserts that peculiarities found in Columbus' writings that are associated with Ladino suggest that Columbus was Jewish. Irizarry states that “Columbus even punctuated marginal notes and he included copious notes around his pages. In that sense, he followed the punctuation style of the Ladino-speaking scribes.” - Kabbalistic Signet Indicates Columbus was an Exiled Jew - Tzvi Ben Gedalyahu's article asserts that a rare, recently discovered triangular Kabbalistic signet indicates that Columbus was a Jew named Salvador Fernando Zarco and was among those expelled from Spain in 1492. The proof offered is that the unique monogram is similar to inscriptions on gravestones in Jewish cemeteries in Spain and southern France. - Was Columbus Jewish? - Howard M.
Sachar, Professor of History and International Affairs at George Washington University, explores the legend that Columbus consulted with Jews and transported some to the New World at the time of the expulsion, thus giving rise to new Jewish communities around the world. - Crypto-Jews in Mexico During the Spanish Colonial Era - Paper from the Nahum Goldmann Museum of the Jewish Diaspora in Israel discusses Spanish policy toward New Christians, accusations of Judaizing, the Carvajal affair, and the Auto-da-Fé of 1649; with bibliography and links. - The Virtual Jewish History Tour of Mexico - The Jewish Virtual Library's history of the Jews of Mexico. Many prominent Mexicans claimed conversos roots, including Porfirio Diaz, Francisco Madero and Jose Lopez Portillo, and artist Diego Rivera who publicly announced his Jewish roots when he wrote in 1935: "My Jewishness is the dominant element in my life. From this has come my sympathy with the downtrodden masses which motivates all my work." The Spanish & Mexican Inquisitions - The Edict of Expulsion of the Jews (1492) - An English translation of the Edict signed by Ferdinand and Isabella and a photo of a page of the original Edict housed in the Nahum Goldmann Museum of the Jewish Diaspora, Israel. - Cultural Encounters: The Impact of the Inquisition in Spain and the New World (e-book) Anthology edited by Mary Elizabeth Perry and Anne J. Cruz, University of California Press. With chapters written by Stanley Hordes, Richard C. Greenleaf, and others. Paper by Clara Steinberg-Spitz: A brief overview of the origins of the Inquisition in Spain and Portugal, the Spanish territories in the New World, and the arrival of Crypto-Jews to the newly discovered lands. From the Inquisition Rosters; some of the names of Conversos who were tried in New Spain (Mexico) by the Spanish Inquisition for relapsing into Judaism. 
- Don’t Drink the Chocolate: Domestic Slavery and the Exigencies of Fasting for Crypto-Jews in Seventeenth-Century Mexico Robert J. Ferry's paper, based on testimonial records of over a hundred people who were prosecuted for Jewish heresy by the Mexican Holy Office of the Inquisition, examines some of the elements of the identity of Crypto Jews in seventeenth-century Mexico. Dr. Yitzchok Levine, author of the Jewish Press column "Glimpses Into American Jewish History," writes about the life and trials of Luis de Carvajal, Jr. (1567-1596), one of the most interesting personalities to be tried by the Inquisition in Mexico during the sixteenth century. Dr. Yitzchok Levine asserts that it was not just economic opportunities that attracted the anusim to Mexico, but also the hope that in the New World they would be free to secretly practice the religion of their ancestors without interference from the Christian Inquisitors. Unfortunately, the Inquisition would soon follow them to New Spain. Dr. Yitzchok Levine's essay shows that despite threats of torture and confiscation of property, as well as sufficient knowledge of Jewish ritual and practice, historical records prove that New Christians practiced as much Judaism as they could. One such story is about Tomas Trebino de Sobremonte, a martyr who was burned alive at the stake in the Mexican Inquisition of 1649. During the black days of the Spanish Inquisition, instead of getting drunk on Purim and drawing the inquisitors' suspicions, crypto Jews took on the custom of fasting for three days, as Queen Esther had ordered the Jewish people when threatened with annihilation. Resources for Those Researching Converso Heritage - Shavei Israel - "Israel Returns" - An Israel-based organization comprised of academics, educators and rabbis, whose goal is to assist "lost Jews," or those with Jewish ancestry in coming to terms with their heritage and identity "in a spirit of tolerance and understanding." 
Also see their "Anousim "section for articles and history about the Anousim. - Kulanu – All of Us - An organization dedicated to finding and assisting lost and dispersed remnants of the Jewish people (anusim/crypto-Jews). - SephardicGen Resources for Crypto-Jews/Anusim Genealogy - From Sephardic Genealogy Resources, includes general resources on Crypto-Jews, Sephardic Genealogy, and a bibliography for those wishing to research their Crypto-Jewish or Sephardic background. - Be’chol Lashon: In Every Tongue - Be'chol Lashon's goal is to expand and strengthen the Jewish people through ethnic, cultural, and racial inclusiveness. The organization recognizes the anusim as a vital component for potential growth. "If the forced conversions, expulsions, and inquisitorial persecutions had not occurred, the Sephardic population today would number in the tens of millions. Be'chol Lashon seeks to restore a link that was broken and thereby strengthen the future of the Jewish people." - The "Secret Jews" of San Luis Valley - In Colorado, the gene linked to a virulent form of breast cancer found mainly in Jewish women is discovered in Hispanic Catholics. Is this another link to proof of a crypto-Jewish past? Crypto-Jewish Writers & Artists - The Searchers: Seven South Americans Uncover Their Converso Roots - Gabriela Böhm, filmmaker and a child of Holocaust survivors, discusses her film, The Longing. The film follows the return to Judaism of a group of South Americans who were raised as Catholics. They undergo conversion, but in the end face the heartbreaking reality that the Jewish community of Ecuador does not accept them into their community. For the filmmaker, a more important story emerged: "What happens when the forces who are saying 'no' are the Jews rather than the Catholic Church?" - Writer Kathleen Alcalá - Alcalá is the author of the short story collection, Mrs. 
Vargas and the Dead Naturalist, and three novels: Spirits of the Ordinary, The Flower in the Skull, and Treasures in Heaven. Her recent collection of essays, in which she explores her family's crypto-Jewish heritage in Saltillo, Mexico, The Desert Remembers My Name, was recently published by the University of Arizona Press. Also, read Alcalá's presentation to the Society for Crypto Judaic Studies, "A Thread in the Tapestry: The Narros of Saltillo, Mexico, in History and Literature." - Crypto Jewish Images by Photographer Cary Herz - New Mexico's Crypto-Jews: Image and Memory, Cary' Herz's twenty-year search for descendants of crypto-Jews, with essays by Mona Hernandez and Ori Z. Soltes; published by UNM Press. Also see Picturing Today’s Conversos, in which Herz discusses her observation that "even today New Mexico’s Crypto-Jews are ambivalent about their integration into the largely Ashkenazic New Mexican Jewish community." More Herz photos. - Consuelo Luz - Raised in Greece, the Philippines, Spain, Italy and Peru by Sephardic/Chilean/Cuban/Mampuche Indian parents, Luz now lives in Northern New Mexico. She sings Sephardic (Judeo-Hispanic) songs that "embrace all of humanity and envision a transformed and loving world celebrating its diversity while at the same time honoring its oneness." Personal Stories of Crypto Jews/Conversos - The Jewish Shepherd of Tijuana - The story of Carlos Salas Diaz, founder of Congregacion Hebrea de Baja California. A converso, born in Mexico to a Catholic family, he was ordained as a Methodist minister, later converted to Judaism and became a rabbi. Diaz tells about his life and how he returned to Judaism. He has converted many Mexicans and also provides Jewish instruction to Mexican Jews, including conversos, such as the hidden Jews of Venta Prieta. 
- Hanging By a Wick - Musician Vanessa Paloma, who has a CD of Ladino music with Flor de Serena, tells her family's story by following the lives of strong female predecessors, starting with the expulsion from Spain as they moved from country to country through the Netherlands, Italy, Morocco, Panama, Colombia, and eventually to the United States. - Zakhor: A personal Account - Rabbi Juan Mejía, a descendant of Colombian Anusim, recounts the personal story of his decision to pursue the rabbinate. He asks, "After all, who was I? Just a Jew back from the dark woodwork of the Inquisition after 500 years? Could I aspire to learn as much as people who has been Jewish all their lives...?" - Reclaiming Jewish Traditions in Mexico - Rabbi Daniel Mehlman, of Southern California, was asked to provide guidance to a group of crypto-Jewish Mexicans practicing Judaism in a Mexicali home on their own, without rabbi or synagogue. This is the story of his visit. - The Inquisition: Full Circle - The story of Nuria Guasch Vidal, a crypto-Jew from Barcelona who discovered her family's secret when her grandfather lay on his deathbed and pulled her aside, instructing her not to allow a priest in the room once he died. Culture & Folklore - "Let it go to the garlic!": Evil Eye and the Fertility of Women Among the Sephardim - Rosemary Levy Zumwalt examines the belief and ritual surrounding "mal ojo" (the evil eye) among Sephardic communities. She focuses on the prominent position of women in maintaining the evil eye belief system. - Preserving the Heritage - Renee Levine Melammed, author of A Question of Identity: Iberian Conversos in Historical Perspective, writes about the practices of crypto-Jewish women of Spain and how they managed to observe some Jewish holidays, especially Yom Kippur, even after the forced conversions and under the watchful eye of the Church.
- Converso Dualities in the First Generation: The Cancioneros - Cancioneros are collections of popular poems that flourished in the fifteenth century. Often satirical and irreverent, using plain language and simple rhyme, the poems dealt with current events, people, and cultural norms. The Cancioneros provide a glimpse into "the converso situation and its early dualities." Many authors of the poems were conversos of the first generation, as was the first compiler of their work, Juan Alfonso de Baena. Several poems use Hispanized Hebrew idioms, and many attack as well as defend conversos. (From Jewish Social Studies Volume 4, Number 3, by Yirmiyahu Yovel.) - Texas Mexican Secret Spanish Jews Today - From Sefarad.org. An interesting article by Anne deSola Cardoza on how Jewish food, oral traditions, culture, and secret religious customs are in evidence today in the folklore, habits, and practices of the descendants of early settlers in South Texas and the nearby areas of Northern Mexico. - Flour Tortillas and Other Jewish Legacies of Colonial Texas - Charles M. Robinson, historian and McAllen, Texas author, discusses the unleavened tortilla and other culinary traditions of the crypto-Jews of the Rio Grande Valley of South Texas. Note: The original link disappeared, and I found a copy of the original essay posted on All Empires Blog. The article is there in its entirety, but the format is poor; however, I thought it was worth including it here because of the unique information it offers. - Semitas, Semitic Bread, and the Search for Community: A Culinary Detective Story (PDF) - Rachel Laudan's article about "pan de Semita" (bread of the Semites), a lightly sweetened loaf found along the Rio Grande border regions of South Texas, and how it reveals the identity of the conversos of the area. - The history and theories of the origins of "capirotada," or the bread pudding that is shared by both Hispanic Christians and Jews for Lent and Passover, respectively.
- Manifestations of Crypto Judaism in the American Southwest - Article by Shulamith Halevy; appeared in Jewish Folklore & Ethnology Review 18(1-2), pp. 68-76, 1996. - Judeo-Spanish Ballads from New York (e-book) - The Sephardic community of New York City, numbering over twenty-five thousand, is an excellent source of ballads representative of the Judeo-Spanish communities of Turkey, Morocco, the Balkans, and South America. Maír José Benardete collected the ballads from mainly women older than forty years of age. Their archaic ballad repertoires retain many features of the Spanish ballad tradition as it existed at the time of the expulsion from Spain. Many narrative types date back to medieval times and still survive among Sephardic Jews. - Jewish Settlers Left Strong Imprint in the Rio Grande Valley - An article detailing how Jewish customs, culture, and bloodline, survived beyond the Spanish and Mexican Inquisitions to become a part of the lifestyle in the Rio Grande Valley city of McAllen, Texas.
Why do we need 5G?
- Mobile data traffic is rising rapidly, mostly due to video streaming.
- With multiple devices, each user has a growing number of connections.
- The Internet of Things will require networks that can handle billions more devices.
- With a growing number of mobiles and increased data traffic, both mobiles and networks need to become more energy efficient.
- Network operators are under pressure to reduce operational expenditure, as users are accustomed to flat-rate tariffs and don't wish to pay more.
- Mobile communication technology can enable new use cases (e.g. ultra-low latency or high-reliability cases) and new applications for industry, opening up new revenue streams for operators as well.
So 5G should deliver significantly increased operational performance (e.g. increased spectral efficiency, higher data rates, low latency), as well as a superior user experience (close to that of fixed networks, but offering full mobility and coverage). 5G needs to cater for massive deployment of the Internet of Things, while still offering acceptable levels of energy consumption, equipment cost and network deployment and operation cost. It needs to support a wide variety of applications and services. ITU-R M.2083 compares the key capabilities of IMT-Advanced (4th generation) with those of IMT-2020 (5th generation). Who is interested in using 5G? 5G offers network operators the potential to offer new services to new categories of users. What are the main usage scenarios of 5G?
ITU-R has defined the following main usage scenarios for IMT for 2020 and beyond in its Recommendation ITU-R M.2083:
- Enhanced Mobile Broadband (eMBB) to deal with hugely increased data rates, high user density and very high traffic capacity in hotspot scenarios, as well as seamless coverage and high-mobility scenarios with still-improved user data rates
- Massive Machine-type Communications (mMTC) for the IoT, requiring low power consumption and low data rates for very large numbers of connected devices
- Ultra-reliable and Low Latency Communications (URLLC) to cater for safety-critical and mission-critical applications
Each scenario requires different key capabilities according to ITU-R M.2083. How is the 5G standard developed? ITU-R has set up a project called IMT-2020 to define the next generation of mobile communication networks for 2020 and beyond, together with a time plan. At TSG #67 in March 2015, 3GPP formulated with SP-150149 a 3GPP timeline on how to contribute to this 5th generation of mobile networks. In connection with RAN #69 in Sep. 2015, 3GPP held a workshop in Phoenix, USA in order to inform 3GPP about the ITU-R IMT-2020 plans and to share the visions and priorities of the involved companies regarding the next-generation radio technology/ies. The chair's summary (RWS-150073) formulated 3 next steps:
- preparation of channel modeling work for high frequencies
- a study to develop scenarios and requirements for next-generation radio technology
- a study for RAN WGs to evaluate technology solutions for next-generation radio technology
At RAN #69 in Sep. 2015, 3GPP started a Rel-14 study item (FS_6GHz_CH_model, RP-160210) "Study on channel model for frequency spectrum above 6 GHz". This study completed at RAN #72 in June 2016 with 3GPP TR 38.900. Note 1: LTE-Advanced was so far aggregating spectrum of up to 100 MHz and was so far operating in bands below 6 GHz. This study looks at the frequency range 6-100 GHz and bandwidths below 2 GHz.
Note 2: The whole contents of this TR were later transferred into 3GPP TR 38.901 "Study on channel model for frequencies from 0.5 to 100 GHz", covering the whole frequency range. At RAN #70 in Dec. 2015, 3GPP had already started a Rel-14 study item (FS_NG_SReq, RP-160811) "Study on Scenarios and Requirements for Next Generation Access Technologies" with the goal to identify the typical deployment scenarios (associated with attributes such as carrier frequency, inter-site distance, user density, maximum mobility speed, etc.) and to develop specific requirements for them for the next-generation access technologies (taking into account what is required for IMT-2020). This study completed at RAN #74 in Dec. 2016 with 3GPP TR 38.913, which describes scenarios and key performance requirements as well as requirements for architecture, migration, supplemental services, operation and testing. In March 2016, ITU-R invited candidate radio interface technologies for IMT-2020 in a Circular Letter. The overall objectives of IMT-2020 were set via ITU-R M.2083 and the requirements were provided in ITU-R M.2410, e.g. the following minimum requirements:
- peak data rate: Downlink: 20 Gbit/s, Uplink: 10 Gbit/s
- peak spectral efficiencies: Downlink: 30 bit/s/Hz, Uplink: 15 bit/s/Hz
- user plane latency (single user, small packets): 4 ms for eMBB, 1 ms for URLLC
- control plane latency (idle => active): 10-20 ms
- maximum aggregated system bandwidth: at least 100 MHz, up to 1 GHz in higher frequency bands (above 6 GHz)
- mobility: up to 500 km/h in rural eMBB
At RAN #71 in March 2016, 3GPP started a Rel-14 study item (FS_NR_newRAT, RP-170379) "Study on New Radio (NR) Access Technology" with the goal to identify and develop the technology components to meet the broad range of use cases (including enhanced mobile broadband, massive MTC, critical MTC) and the additional requirements defined in 3GPP TR 38.913.
This study completed at RAN #75 in March 17 with the Rel-14 3GPP TR 38.912 which is a collection of features for the new radio access technologies together with the studies of their feasibility and their capabilities. Note: Included in this study item were also some RAN Working Group (WG) specific 3GPP internal TRs: 38.802 (RAN1), 38.804 (RAN4), 38.801 (RAN3), 38.803 (RAN4). At RAN #75 in March 17, 3GPP started a Rel-15 work item (NR_newRAT, RP-181726) on "New Radio Access Technology". Over time this WI got split into 3 phases addressing different network operator demands: - "early Rel-15 drop": focus on architecture option 3, also called non-standalone NR (NSA NR) which could be considered as the first migration step of adding NR base stations (called gNB) to an LTE-Advanced system of LTE base stations (eNB) and an evolved packet core network (EPC) i.e. in this option no 5G core network (5GC) is involved; functional freeze: Dec.17, ASN.1 freeze in March 18 - "regular Rel-15 freeze": focus on the standalone NR architecture option 2 which would be a network of NR base stations (gNB) connected to the 5G core network (5GC) without any LTE involvement; functional freeze: June 18; ASN.1 freeze in Sep.18; Note: Originally all other architecture options were supposed to be completed in this regular freeze phase as well. However, due to the extremely challenging time plan apart from option 2 only architecture option 5 (an LTE base station can be connected to a 5GC) was completed in this phase as well. 
- "late Rel-15 drop": architecture option 4 (this would be like adding an LTE base station to an SA NR network where the control plane is handled via the NR base station) and architecture option 7 (this would be like adding an LTE base station to an SA NR network where the control plane is handled via the LTE base station) plus NR-NR Dual Connectivity; functional freeze: Dec.18; ASN.1 freeze in March 19; Note 1: Illustrations of the different architecture options can be found in 3GPP TR 38.801 (with the caveat that the terminology was not yet stable during this study phase). Note 2: Rel-15 is distinguishing 2 frequency ranges: FR1: 450 MHz – 6000 MHz and FR2: 24250 MHz – 52600 MHz; while LTE is operating only in FR1, NR can operate in FR1 and FR2; that's why FR1 is considered for NSA NR and FR2 is considered for SA NR. As LTE-Advanced can fulfill parts of the IMT-2020 requirements for certain use cases the 3GPP input (called "5G") to IMT-2020 has 2 submissions: - SRIT (set of radio interface technologies): component RIT NR + component RIT E-UTRA/LTE (incl. standalone LTE, NB-IoT, eMTC, and LTE-NR Dual Connectivity) - RIT (radio interface technology) NR Note: The terms RIT and SRIT are discussed and explained in RP-171584. When will the 5G standard be ready? Splitting Rel-15 into multiple drops turned out to be very challenging, e.g. - NSA NR had still non-backward compatible Change Requests in Sep.18 - inserting ASN.1 into an already frozen specification requires very high quality change requests which is difficult under high time pressure - WGs that require stable pre-work from other WGs (like RAN4 for RF/RRM and RAN5 for Testing) are working on instable grounds and struggle even more to stay in the time plan Nevertheless, 3GPP contributed in time to the IMT-2020 schedule shown below: - in Jan. 
2018 via PCG40_11 with initial characteristics of the NR RIT and NR+LTE SRIT
- in Sep./Oct. 2018 via PCG41_08 with the characteristics of the NR RIT and NR+LTE SRIT, the preliminary self-evaluation and link budget results, and the compliance templates
- in June 2019 via PCG43_07 with the 3GPP 5G candidate submissions of NR RIT and NR+LTE SRIT, including characteristics, compliance and link budget templates and the 3GPP self-evaluation TR 37.910 (this submission includes further Rel-16 enhancements), to step 3 of the IMT-2020 process

Note: The characteristics templates give a good overview of the considered technology.

- in June 2020 the final overviews of the 3GPP specifications via PCG45_07 for NR+LTE SRIT and PCG45_08 for NR RIT, and in July 2020 the final specification sets of 2020-06 (Release 15 & 16) for the transposition of the 3GPP OPs

Rel-16 considered e.g. the following NR enhancements:
- eNB(s) Architecture Evolution for E-UTRAN and NG-RAN
- Enhancements on MIMO for NR
- NR positioning support
- 5G V2X with NR sidelink
- Cross Link Interference handling and Remote Interference Management for NR
- NR-based access to unlicensed spectrum
- 2-step RACH for NR
- L1 enhancements for NR Ultra-Reliable and Low Latency Communication (URLLC)
- UE Power Saving in NR
- NR mobility enhancements
- Multi-RAT Dual-Connectivity and Carrier Aggregation enhancements (LTE, NR)
- Integrated access and backhaul for NR
- Single Radio Voice Call Continuity from 5G to 3G
- Optimisations on UE radio capability signalling – NR/E-UTRA Aspects
- Support of NR Industrial Internet of Things (IoT)
- Private Network Support for NG-RAN
- NG interface usage for Wireless Wireline Convergence
- RF requirements for NR frequency range 1 (FR1)
- Add support of NR DL 256QAM for frequency range 2 (FR2)
- NR RF requirement enhancements for frequency range 2 (FR2)
- Self-Organising Networks and Minimization of Drive Tests support for NR
- NR support for high speed train scenario
- RRM requirement for CSI-RS based L3 measurement in NR
- NR RRM enhancement
- Transfer of Iuant interface specifications from 25-series to 37-series
- Direct data forwarding between NG-RAN & E-UTRAN nodes for inter-system mobility
- Introduction of capability set(s) to the multi-standard radio specifications

The Rel-16 stage 3 and ASN.1 freeze was carried out in June 2020. (Note: some work items were granted an exception to complete remaining open issues by Sep. 2020, and some corrections can still be expected, as doing a stage 3 freeze and an ASN.1 freeze at the same time is a challenge.)

Rel-17 is working e.g. on the following NR enhancements:
- Further enhancements on MIMO for NR
- NR Sidelink enhancement
- NR Dynamic spectrum sharing (DSS)
- Enhanced Industrial Internet of Things (IoT) and ultra-reliable and low latency communication (URLLC) support for NR
- Solutions for NR to support non-terrestrial networks (NTN)
- UE power saving enhancements for NR
- NR multicast and broadcast services
- Enhancements to Integrated Access and Backhaul (IAB) for NR
- NR small data transmissions in INACTIVE state
- Multiple Input Multiple Output (MIMO) Over-the-Air (OTA) requirements for NR UEs
- Enhancement of Private Network support for NG-RAN
- Introduction of DL 1024QAM for NR FR1
- Enhanced NR support for high speed train scenario for frequency range 1 (FR1)
- NR support for high speed train scenario in frequency range 2 (FR2)
- Further enhancements of NR RF requirements for frequency range 2 (FR2)
- RF requirements enhancement for NR frequency range 1 (FR1)
- NR positioning enhancements
- NR coverage enhancements
- Support of reduced capability NR devices
- NR repeaters
- Introduction of bandwidth combination set 4 (BCS4) for NR
- NR Sidelink Relay
- NR Uplink Data Compression (UDC)
- Enhancement of RAN slicing for NR
- NR QoE management and optimizations for diverse services
- Introduction of UE TRP (Total Radiated Power) and TRS (Total Radiated Sensitivity) requirements and test methodologies for FR1 (NR SA and EN-DC)
- Introduction of UE high power classes (1.5 and 2) for various bands and Carrier Aggregation combinations
- Introduction of various new bands and Carrier Aggregation/Dual Connectivity band combinations

First Rel-18 study items and work items will start in December 2021. As with GERAN, UMTS and LTE in the past, 5G will be further evolved in the future to address industry and customer demands.

Where to find the corresponding 5G specifications?
- Radio-related specifications addressing only NR: 38 series specifications.
- Radio-related specifications addressing only LTE: 36 series specifications.
- Radio-related specifications addressing aspects affecting both LTE and NR: 37 series specifications.
- Service requirements for next generation new services and markets: 3GPP TS 22.261.
- System Architecture for the 5G system (stage 2): 3GPP TS 23.501.
- Procedures for the 5G System (stage 2): 3GPP TS 23.502.
- NR; NR and NG-RAN Overall Description (stage 2): 3GPP TS 38.300.
- NR; Multi-connectivity; Overall description (stage 2): 3GPP TS 37.340.
- NG-RAN; Architecture description: 3GPP TS 38.401.

ETSI's 5G Building Blocks

ETSI has a number of component technologies which will be integrated into future 5G systems: Network Functions Virtualization (NFV), Multi-access Edge Computing (MEC), Millimetre Wave Transmission (mWT) and Non-IP Networking (NIN).
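As a rough illustration, the "Where to find the corresponding 5G specifications?" list above can be captured in a small lookup table. This is a convenience sketch only; the topic names are informal labels chosen here, not official 3GPP titles.

```python
# Illustrative mapping from 5G topic areas to 3GPP specification series/numbers,
# taken from the list above. The topic labels are informal, not official titles.
SPEC_INDEX = {
    "NR radio": "38 series",
    "LTE radio": "36 series",
    "LTE+NR common radio": "37 series",
    "Service requirements (stage 1)": "TS 22.261",
    "5G system architecture (stage 2)": "TS 23.501",
    "5G system procedures (stage 2)": "TS 23.502",
    "NR / NG-RAN overall description (stage 2)": "TS 38.300",
    "Multi-connectivity overall description (stage 2)": "TS 37.340",
    "NG-RAN architecture description": "TS 38.401",
}

def find_spec(topic: str) -> str:
    """Return the spec series or TS number for a topic, if it appears above."""
    return SPEC_INDEX.get(topic, "not listed")

if __name__ == "__main__":
    print(find_spec("5G system architecture (stage 2)"))  # TS 23.501
```

A lookup like this only restates the table above; for anything authoritative, consult the 3GPP specification series directly.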
What is recycling?

Recycling is the process of reusing materials and resources that would otherwise be thrown away as waste, left to deteriorate slowly over time. We all know that recycling has economic and environmental benefits, yet because we rarely see an immediate impact from the extra effort, we tend to underestimate its value. That is why this article exists: to educate and inform you about the positive changes you can make as an individual. However, you should be aware that not all materials and resources can be recycled. We will look at common household items that can and cannot be recycled, and why, later in this article. Because we cannot recycle everything, we need to sort the recyclable from the non-recyclable items and hand them to a recycling centre where they are taken care of. It is this little extra effort that makes all the difference. We hope that by the end of this article, you are convinced to make a conscious effort to build a recycling routine into your daily life.

What are common recyclable items?

Did you know that almost all metals can be recycled? You can recycle your old bike, cast iron pots and pans, even your old refrigerator junk. Steel and aluminium can be recycled an infinite number of times because they do not lose any quality when recycled. Here are some common household items that you can recycle:
- Aluminium foil and cans – Aluminium cans and foil are very common in a household. They can easily be melted down to produce new aluminium cans and foil.
- Steel and tin cans – Steel and tin cans such as tuna cans, coffee cans and aerosol cans can be recycled. You can visit the EPA website to find your nearest recycling centre.

Paper and cardboard

Paper can be found almost everywhere in our lives: in our homes, offices, workplaces and schools. Unlike glass and metal, paper loses quality each time it is recycled.
The fibres are shortened, so paper can be expected to survive a maximum of about 8 recycling cycles.
- Cardboard boxes – Whether you are moving house or using them for storage, cardboard boxes can be recycled to make printing paper, cereal boxes, tissue paper and poster board.
- Envelopes – The next time you receive mail from a relative or friend, instead of tossing it into the rubbish bin, you can recycle it.
- Magazines and newspapers – More often than not, when we finish reading a magazine or a newspaper, it becomes outdated and useless. Approximately 25% of our daily newspapers are recycled.
- Office paper – Clean Up Australia reported that in 2012-2013, Australians consumed 3,672,000 tonnes of paper with only a 45% recycling rate.
- Telephone books – Does anyone still use telephone books? Yellow Pages, White Pages. If not, you should recycle them instead of tossing them out. They can be recycled into something more useful.

Glass

Like steel and aluminium, glass can be recycled almost endlessly! However, it can be more complex to recycle because of the various colours it comes in, so it should be sorted by colour.
- Flint glass – Flint glass simply refers to clear glass containers. They are commonly found in your pantry in the form of jars, bottles and food packaging.
- Amber glass – Amber glass is more difficult to recycle because of its amber colour. The colour cannot be removed, but amber glass can be recycled into other amber-coloured products. It comes in the form of beer bottles, pharmaceutical liquids and other containers for ultraviolet-light-sensitive liquids.
- Emerald glass – Much like amber glass, emerald glass is used for liquids that are sensitive to ultraviolet light, though to a lesser extent than amber bottles. Emerald glass is used for beverage bottles like Sprite and 7Up, and for wine bottles.

Image courtesy of Waste360.com

Plastic

Did you know that it is possible to fully recycle all types of plastic?
Like paper, plastic has fibres that shorten every time it is recycled. It is estimated that plastic can only be recycled 7 to 9 times before it is no longer recyclable. Incinerating plastic also consumes far more electrical power than reusing it, so if you are thinking about tossing plastic into an incinerator, consider delivering it to the recycling centre instead. The plant may well be unable to reuse every bottle, but it is genuinely worth your time and effort. However, due to the complexity of some plastics and the lack of technology, your local Material Recovery Facility (MRF) will only accept some plastic types. That is why you should check the RIC (Resin Identification Code) on the plastic before disposing of it. Three RICs are commonly accepted for recycling. Here is a more comprehensive breakdown of the RIC on plastic products.

Image courtesy of Cleanaway.com.au

What common household items can’t you recycle?

You should be aware that there are some items that cannot be recycled; we will look at why under the next heading. If we could recycle everything, there wouldn’t be a reason for landfills to exist, right?

Accepted by a few recycling centres:
- Polystyrene – Rigid polystyrene is much easier to recycle than expanded polystyrene, also known as styrofoam.
- Paint – Old paint often contains lead or mercury and cannot be recycled.
- Toxic chemicals – Laboratory waste, inks, dyes, oil and other toxins.
- E-waste – Batteries, mobile phones, televisions and other electronic waste.
- Soft plastic packaging – Cling wrap, frozen vegetable bags, lolly wrappers.

Cannot be recycled:
- Take-out food containers
- Plastic bottle caps
- Certain paper products – Paper coffee cups are lined with polyethylene, which cannot be recycled. Milk cartons and juice boxes can’t be recycled because other non-recyclable materials are mixed into them.
- Certain types of glass – Broken glass cannot be recycled because it is hard to tell what the source of the glass is once it’s broken.
- Plastic grocery bags
- Objects containing radioactive or hazardous metals – uranium, plutonium, mercury
- Lead-containing products – found in TVs and computer monitors

Why can’t you recycle everything?

1. Cost

Recycling is not always the cheapest method, unless you have the option to reuse items directly, such as glass jars or drink containers. Reusing, recycling and landfill each have materials for which they are the least wasteful method of disposal.

2. Lack of technology

More complex plastics, such as styrofoam cups, are currently difficult to recycle because we do not have the technology for them. Rubber tyres are another example: once manufactured, they have gone through a chemical process that is nearly impossible to reverse. Sadly, that is why tyre graveyards exist. As technology advances, we will be able to sort and recycle waste more effectively, which will decrease the amount of waste in landfills.

Benefits of recycling

- Reduce greenhouse gas emissions – By re-using our resources, we reduce the amount of pollution from factories that would otherwise produce new materials. Little by little, this can have a significant impact over the long term.
- Keeping the Earth beautiful – Re-using items which would typically end up in dumps is a terrific way to help keep the environment clean. Can you imagine a future where the city is covered in landfill? Not very pleasant, is it? Recycling can help us avoid a bleak future and protect the environment through a more natural cycle of use.
- Conserve natural resources – We have a finite stock of natural resources here on Earth; once we run out, we must look for new ways to acquire resources beyond this planet. By using our waste creatively, we can minimise the amount of natural resources we consume to produce products every day.
- Saves money – It makes sense to recycle things that would otherwise cost a lot to produce new. Why spend more on recycling something you could buy brand new for less? Luckily for us, we are able to recycle many products at a lower cost than creating new ones. Recycling also reduces the need for the economy to spend money on planting more forests and mining for more metal ores.
- Cash benefits – A promising scheme was announced just a few weeks ago that could lend more motivation for us to recycle plastic waste: more than 500 recycling containers are scattered across New South Wales to encourage families to exchange their plastic, metal and glass bottles for cash. For example, each plastic bottle can be exchanged for 10 cents.
- More employment opportunities – One person’s waste can be used for something else. In turn, this creates green jobs as the waste is re-circulated instead of being disposed of after a single use. Composting and recycling create more jobs than disposal does: for every job created in the waste management industry, recycling creates four.

Depending on where you live and the recycling service offered in your area, recycling may or may not be appealing. For example, Europe has a more active recycling approach than America does: Europe doesn’t have space for massive landfills and must heavily regulate recycling. Supermarket giants in Australia such as Woolworths have announced plans to remove plastic bags from their supermarkets. Recycling initiatives are really starting to take shape, and people are paying attention. It is time for you to take the same actions and reap the massive benefits that recycling has to offer. A little effort can go a long way towards protecting the planet, and it is easier than ever to help the environment.

How can I get involved?

Getting into a positive recycling habit can be as simple as starting at home.
Sorting your recyclables from your non-recyclables will help drastically. With a little help from the internet, you can find out how to re-use and upcycle common household items. If you feel like you could do more, you can get involved with local recycling programs and start a local campaign to encourage others to recycle as well. There is a greater chance of success when we educate and inform the public about the tremendous benefits that recycling can have. As a rubbish removal company in Sydney, we always try to do our part in helping to educate and inform, and we encourage everyone to do the same. Spread the word to friends, families and colleagues. Together, our small efforts will contribute to a cause far greater than we can ever imagine.
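The article's rule-of-thumb figures on how many times each material can be recycled can be collected into a tiny reference table. This is an illustrative sketch only; the numbers are the rough estimates quoted above, not precise physical limits.

```python
import math

# Rough recycle-cycle estimates quoted in the article above.
# math.inf marks materials the article says can be recycled indefinitely.
MAX_RECYCLE_CYCLES = {
    "steel": math.inf,
    "aluminium": math.inf,
    "glass": math.inf,   # "almost endlessly"
    "paper": 8,          # fibres shorten each cycle; ~8 cycles maximum
    "plastic": 9,        # estimated 7 to 9 cycles
}

def can_recycle_again(material: str, cycles_so_far: int) -> bool:
    """True if the material is expected to survive another recycling cycle."""
    limit = MAX_RECYCLE_CYCLES.get(material)
    if limit is None:
        raise ValueError(f"no estimate for {material!r}")
    return cycles_so_far < limit

print(can_recycle_again("paper", 7))     # True
print(can_recycle_again("paper", 8))     # False
print(can_recycle_again("steel", 1000))  # True
```

Using `math.inf` for metals and glass keeps the comparison uniform: any finite cycle count is always below the "infinite" limit.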
This post is the second in a series of classic philosophy papers. The Naturalist and the Supernaturalist is Chapter Two of C. S. Lewis’s Miracles: A Preliminary Study (New York: The Macmillan Company, 1947).

There are two fundamental worldviews that have currency today. One I call Naturalism or the “bottom up” worldview. According to Naturalism, the world somehow got here by itself. It pulled itself up by its own bootstraps. … The other I call Supernaturalism or the “top down” worldview. According to Supernaturalism, the world is an artefact. Someone or something made it. It didn’t get here by itself. … C. S. Lewis makes the same distinction, but says it so much better. 🙂

‘Gracious!’ exclaimed Mrs Snip, ‘and is there a place where people venture to live above ground?’ ‘I never heard of people living under ground,’ replied Tim, ‘before I came to Giant-Land’. ‘Came to Giant-Land!’ cried Mrs Snip, ‘why, isn’t everywhere Giant-Land?’
Roland Quizz, Giant-Land, chap xxxii.

I use the word Miracle to mean an interference with Nature by supernatural power. Unless there exists, in addition to Nature, something else which we may call the supernatural, there can be no miracles. Some people believe that nothing exists except Nature; I call these people Naturalists. Others think that, besides Nature, there exists something else: I call them Supernaturalists. Our first question, therefore, is whether the Naturalists or the Supernaturalists are right. And here comes our first difficulty. Before the Naturalist and the Supernaturalist can begin to discuss their difference of opinion, they must surely have an agreed definition both of Nature and of Supernature. But unfortunately it is almost impossible to get such a definition. Just because the Naturalist thinks that nothing but Nature exists, the word Nature means to him merely ‘everything’ or ‘the whole show’ or ‘whatever there is’. And if that is what we mean by Nature, then of course nothing else exists.
The real question between him and the Supernaturalist has evaded us. Some philosophers have defined Nature as ‘What we perceive with our five senses’. But this also is unsatisfactory; for we do not perceive our own emotions in that way, and yet they are presumably ‘natural’ events. In order to avoid this deadlock and to discover what the Naturalist and the Supernaturalist are really differing about, we must approach our problem in a more roundabout way. I begin by considering the following sentences:
(1) Are those his natural teeth or a set?
(2) The dog in his natural state is covered with fleas.
(3) I love to get away from tilled lands and metalled roads and be alone with Nature.
(4) Do be natural. Why are you so affected?
(5) It may have been wrong to kiss her but it was very natural.
A common thread of meaning in all these usages can easily be discovered. The natural teeth are those which grow in the mouth; we do not have to design them, make them, or fit them. The dog’s natural state is the one he will be in if no one takes soap and water and prevents it. The countryside where Nature reigns supreme is the one where soil, weather and vegetation produce their results unhelped and unimpeded by man. Natural behaviour is the behaviour which people would exhibit if they were not at pains to alter it. The natural kiss is the kiss which will be given if moral or prudential considerations do not intervene. In all the examples Nature means what happens ‘of itself’ or ‘of its own accord’: what you do not need to labour for; what you will get if you take no measures to stop it. The Greek word for Nature (Physis) is connected with the Greek verb for ‘to grow’; Latin Natura, with the verb ‘to be born’. The Natural is what springs up, or comes forth, or arrives, or goes on, of its own accord: the given, what is there already: the spontaneous, the unintended, the unsolicited.
What the Naturalist believes is that the ultimate Fact, the thing you can’t go behind, is a vast process in space and time which is going on of its own accord. Inside that total system every particular event (such as your sitting reading this book) happens because some other event has happened; in the long run, because the Total Event is happening. Each particular thing (such as this page) is what it is because other things are what they are; and so, eventually, because the whole system is what it is. All the things and events are so completely interlocked that no one of them can claim the slightest independence from ‘the whole show’. None of them exists ‘on its own’ or ‘goes on of its own accord’ except in the sense that it exhibits, at some particular place and time, that general ‘existence on its own’ or ‘behaviour of its own accord’ which belongs to ‘Nature’ (the great total interlocked event) as a whole. Thus no thoroughgoing Naturalist believes in free will: for free will would mean that human beings have the power of independent action, the power of doing something more or other than what was involved by the total series of events. And any such separate power of originating events is what the Naturalist denies. Spontaneity, originality, action ‘on its own’, is a privilege reserved for ‘the whole show’, which he calls Nature. The Supernaturalist agrees with the Naturalist that there must be something which exists in its own right; some basic Fact whose existence it would be nonsensical to try to explain because this Fact is itself the ground or starting-point of all explanations. But he does not identify this Fact with ‘the whole show’. He thinks that things fall into two classes. In the first class we find either things or (more probably) One Thing which is basic and original, which exists on its own. In the second we find things which are merely derivative from that One Thing. The one basic Thing has caused all the other things to be. 
It exists on its own; they exist because it exists. They will cease to exist if it ever ceases to maintain them in existence; they will be altered if it ever alters them. The difference between the two views might be expressed by saying that Naturalism gives us a democratic, Supernaturalism a monarchical, picture of reality. The Naturalist thinks that the privilege of ‘being on its own’ resides in the total mass of things, just as in a democracy sovereignty resides in the whole mass of the people. The Supernaturalist thinks that this privilege belongs to some things or (more probably) One Thing and not to others–just as, in a real monarchy, the king has sovereignty and the people have not. And just as, in a democracy, all citizens are equal, so for the Naturalist one thing or event is as good as another, in the sense that they are all equally dependent on the total system of things. Indeed each of them is only the way in which the character of that total system exhibits itself at a particular point in space and time. The Supernaturalist, on the other hand, believes that the one original or self-existent thing is on a different level from, and more important than, all other things. At this point a suspicion may occur that Supernaturalism first arose from reading into the universe the structure of monarchical societies. But then of course it may with equal reason be suspected that Naturalism has arisen from reading into it the structure of modern democracies. The two suspicions thus cancel out and give us no help in deciding which theory is more likely to be true. They do indeed remind us that Supernaturalism is the characteristic philosophy of a monarchical age and Naturalism of a democratic, in the sense that Supernaturalism, even if false, would have been believed by the great mass of unthinking people four hundred years ago, just as Naturalism, even if false, will be believed by the great mass of unthinking people today. 
Everyone will have seen that the One Self-existent Thing–or the small class of self-existent things–in which Supernaturalists believe, is what we call God or the gods. I propose for the rest of this book to treat only that form of Supernaturalism which believes in one God; partly because polytheism is not likely to be a live issue for most of my readers, and partly because those who believed in many gods very seldom, in fact, regarded their gods as creators of the universe and as self-existent. The gods of Greece were not really supernatural in the strict sense which I am giving to the word. They were products of the total system of things and included within it. This introduces an important distinction. The difference between Naturalism and Supernaturalism is not exactly the same as the difference between belief in a God and disbelief. Naturalism, without ceasing to be itself, could admit a certain kind of God. The great interlocking event called Nature might be such as to produce at some stage a great cosmic consciousness, an indwelling ‘God’ arising from the whole process as human mind arises (according to the Naturalists) from human organisms. A Naturalist would not object to that sort of God. The reason is this. Such a God would not stand outside Nature or the total system, would not be existing ‘on his own’. It would still be ‘the whole show’ which was the basic Fact, and such a God would merely be one of the things (even if he were the most interesting) which the basic Fact contained. What Naturalism cannot accept is the idea of a God who stands outside Nature and made it. We are now in a position to state the difference between the Naturalist and the Supernaturalist despite the fact that they do not mean the same by the word Nature. 
The Naturalist believes that a great process, or ‘becoming’, exists ‘on its own’ in space and time, and that nothing else exists–what we call particular things and events being only the parts into which we analyse the great process or the shapes which that process takes at given moments and given points in space. This single, total reality he calls Nature. The Supernaturalist believes that one Thing exists on its own and has produced the framework of space and time and the procession of systematically connected events which fill them. This framework, and this filling, he calls Nature. It may, or may not, be the only reality which the one Primary Thing has produced. There might be other systems in addition to the one we call Nature. In that sense there might be several ‘Natures’. This conception must be kept quite distinct from what is commonly called ‘plurality of worlds’–i.e. different solar systems or different galaxies, ‘island universes’ existing in widely separated parts of a single space and time. These, however remote, would be parts of the same Nature as our own sun: it and they would be interlocked by being in relations to one another, spatial and temporal relations and causal relations as well. And it is just this reciprocal interlocking within a system which makes it what we call a Nature. Other Natures might not be spatio-temporal at all: or, if any of them were, their space and time would have no spatial or temporal relation to ours. It is just this discontinuity, this failure of interlocking, which would justify us in calling them different Natures. This does not mean that there would be absolutely no relation between them; they would be related by their common derivation from a single Supernatural source. They would, in this respect, be like different novels by a single author; the events in one story have no relation to the events in another except that they are invented by the same author.
To find the relation between them you must go right back to the author’s mind: there is no cutting across from anything Mr Pickwick says in Pickwick Papers to anything Mrs Gamp hears in Martin Chuzzlewit. Similarly there would be no normal cutting across from an event in one Nature to an event in any other. By a ‘normal’ relation I mean one which occurs in virtue of the character of the two systems. We have to put in the qualification ‘normal’ because we do not know in advance that God might not bring two Natures into partial contact at some particular point: that is, He might allow selected events in the one to produce results in the other. There would thus be, at certain points, a partial interlocking; but this would not turn the two Natures into one, for the total reciprocity which makes a Nature would still be lacking, and the anomalous interlockings would arise not from what either system was in itself but from the Divine act which was bringing them together. If this occurred each of the two Natures would be ‘supernatural’ in relation to the other: but the fact of their contact would be supernatural in a more absolute sense–not as being beyond this or that Nature but beyond any and every Nature. It would be one kind of miracle. The other kind would be Divine ‘interference’ not by the bringing together of two Natures, but simply. All this is, at present, purely speculative. It by no means follows from Supernaturalism that Miracles of any sort do in fact occur. God (the primary thing) may never in fact interfere with the natural system He has created. If He has created more natural systems than one, He may never cause them to impinge on one another. But that is a question for further consideration. If we decide that Nature is not the only thing there is, then we cannot say in advance whether she is safe from miracles or not. There are things outside her: we do not yet know whether they can get in. The gates may be barred, or they may not.
But if Naturalism is true, then we do know in advance that miracles are impossible: nothing can come into Nature from the outside because there is nothing outside to come in, Nature being everything. No doubt, events which we in our ignorance should mistake for miracles might occur: but they would in reality be (just like the commonest events) an inevitable result of the character of the whole system. Our first choice, therefore, must be between Naturalism and Supernaturalism. This definition is not that which would be given by many theologians. I am adopting it not because I think it an improvement upon theirs but precisely because, being crude and ‘popular’, it enables me most easily to treat those questions which ‘the common reader’ probably has in mind when he takes up a book on Miracles.
Guest Post by Jasmine Hall

Educators across the nation are working hard this summer to begin developing updated curricula that will fit into the new Common Core State Standards, which will be fully applied in 45 U.S. states (Texas, Alaska, Nebraska, Virginia, and Minnesota have opted out of statewide participation) by 2015. Yet despite the hubbub about the new standards, which were created as a means of better equipping students with the knowledge they need to be competitive in the modern world, many teachers still have a lot of unanswered questions about what Common Core will mean for them, their students, and their schools. Luckily, the Internet abounds with helpful resources that can explain the intricacies of Common Core, offer resources for curriculum development, and even let teachers keep up with the latest news on the subject. We’ve collected just a few of those great resources here, which are essential reads for any K-12 educator in a Common Core-adopting state.

Groups and Organizations

These links will take you to essential reading materials from the institutions and organizations behind Common Core.
- Common Core State Standards Initiative: This is the official site for the CCSSI, featuring information about the standards, news, resources, and answers to frequently asked questions.
- National Governors Association: The NGA played a major role in the development of Common Core, so their website is a great place to look for answers about the standards.
- Council of Chief State School Officers: The other major group behind Common Core is the CCSSO, an organization you can learn more about by visiting their site.

Read up on Common Core, find out more about what it will mean for your classes, and get some help from educational providers and groups by following these links.
- CCSSI Wiki: One simple way to learn more about the CCSSI is to visit the program’s Wikipedia page, which is packed with useful information on the subject.
- Common Core 360: Common Core 360 is an educational network that offers webinars, training tools, news, and more to help teachers adapt to the new Common Core standards.
- MasteryConnect: Use the MasteryConnect site to track your students’ progress under the new Common Core system.
- Pearson Education Common Core State Standards: Pearson, a major educational publisher, offers access to numerous resources on Common Core. Visitors to the site will find everything from basic explanations to informative webinars.
- McGraw Hill Common Core Solutions: Educational publisher McGraw Hill is also reaching out to teachers when it comes to Common Core, loading up their website with tools for professional and curriculum development.
- Common Core Adoptions by State: The ASCD website offers up information on which states are adopting Common Core, along with links to each Common Core state website.
- The Common Core Institute: Teachers who are unsure about their expertise on Common Core should give the Common Core Institute a try. The organization offers Black Belt certification on Common Core, as well as a wealth of other conferences and professional development opportunities for teachers.
- Common Core Standards App: This iPhone application (it is also available for Android) lets teachers keep essential information about Common Core at their fingertips.
- ASCD Common Core Webinars: ASCD is working on new webinars on Common Core for this fall, but educators can take a look at their archived resources from earlier this year in the meantime.
- Common Core Workbook: Use this workbook from Achieve and the U.S. Education Delivery Institute to help guide the Common Core implementation process at your school.
- CommonCore.org: Here you’ll find an organization dedicated to ensuring that the Common Core is about more than just reading and math, instead promoting a well-rounded education that includes reading literature, studying culture, and engaging with the arts.
These sites offer a wealth of resources for helping you develop curricula that meet Common Core standards.
- The Mathematics Common Core Toolbox: Districts and teachers alike can find support for building better math lessons that fit into new Common Core guidelines through this helpful site.
- Khan Academy Common Core Map: Those who’ve been using Khan Academy videos and lessons in the classroom can see how each relates to new Common Core standards using this map.
- Literacy Design Collaborative: The LDC offers templates, modules, and guidebooks for teachers that make it simple to develop engaging literacy lessons under Common Core.
- Illustrative Mathematics: Get some guidance on the mathematics topics covered at every grade level under Common Core.
- Teaching Channel: The Teaching Channel site offers just over 100 videos on Common Core lessons, ideas, and more.
- Achieve the Core: This website encourages teachers to steal its tools for curriculum development.
- Lexile: Is that text at grade level? Use this handy online tool to measure a text for readability.
- AASL Lesson Plan Database: The American Association of School Librarians has loads of lesson plans and checklists for teachers that fall under Common Core standards.
- Surveys of Enacted Curriculum: Use reliable data to develop, plan, and compare your curriculum when you visit this site’s archive of PDF reports.

You can get regular reading material on the subject of CCSS by following any or all of these blogs.
- Common Core: Head to this blog to read updates about Common Core news and other educational topics on a regular basis.
- Pearson Common Core Blog: Part of the Common Core resources offered by Pearson is a blog, full of articles on a range of educational topics.
- Tools for the Common Core Standards: This blog is an excellent resource for learning about new tools that help support Common Core implementation in schools.
- Common Core Blog: Offering links to Common Core tools, news, articles, and more, this blog is a great resource for learning about Common Core.
- The Core Knowledge Blog: Find a wealth of high-quality articles on teaching topics (especially Common Core) on this blog on the Core Knowledge Foundation’s site.
- Core Commons: Follow this blog to read more about emerging strategies and issues in implementing the Common Core standards.
- The Learning Network: The Learning Network blog, part of the New York Times’ website, regularly publishes articles on Common Core.
- Common Core Facts: Get an opposing view on Common Core by reading this blog.
- All Things Common Core: Educators can learn from fellow teachers about the challenges of applying Common Core in their district from this blog.
Some states have created helpful websites for teachers all about Common Core. Here, we share a few that can be useful to teachers all over the United States.
- Resources for Implementing the Common Core State Standards: The Indiana Department of Education offers a number of CCSS resources on their website, including a number of informative articles and videos.
- NC Common Core Support Tools: North Carolina is making it easier for teachers in the state (and in others) to apply Common Core by collecting this incredibly useful set of tools.
- NYC Common Core Library: Any lingering questions you have about Common Core will undoubtedly be answered by this comprehensive site from the NYC Department of Education.
- TNCore: Tennessee has built an entire website to help teachers with Common Core, with resources on Math, English, and other disciplines.
- CDE Implementation Toolkit: Here, the Colorado Department of Education has a number of design tools teachers can use to move into the new standards.
- Engage NY: From a Common Core toolkit to curriculum exemplars, the New York Common Core website has loads of great resources teachers can use.
- ODE Model Curriculum: Head to this Ohio Department of Education site to find model curriculum resources for all Common Core subjects.
Articles and Presentations
These articles and videos offer different perspectives on Common Core, some supporting it while others doubt it, making them essential reading for any educator looking for a well-rounded perspective on the matter.
- A First Look at the Common Core and College and Career Readiness: In this report, ACT takes a look at how Common Core standards will help to better prepare students for college and the working world.
- Common Core standards drive wedge in education circles: Not all teachers support Common Core, and as this article from USA Today points out, it’s creating a rift between some educators.
- Embracing the Common Core: Helping Students Thrive: Download this presentation by Stan Heffner and Michael Cohen on what Common Core means for today’s students.
- Putting a Price Tag on the Common Core: How Much Will Smart Implementation Cost?: With school districts already strapped for cash, it makes sense to consider the financial impact of Common Core, which you can learn more about from this Fordham Institute read.
- Why Common Core standards will fail: Well-known Washington Post education columnist Jay Matthews doesn’t think Common Core is the answer. Check out this editorial to see why.
- Research Finds 97% of Teachers are Now Favorable Towards Common Core Standards: Are you among the 97% of teachers who support Common Core? Learn about the battle to get teachers on board, here.
- For CCSS Math Education Some Problems are Elementary: Stuart Singer brings up some pretty important points when it comes to how Common Core will affect math education.
- Common Core won’t likely boost student achievement, analysis says: The Brookings Institution believes that Common Core won’t help students improve their achievement. Their study is discussed in detail here by Valerie Strauss.
- No Need to Fear the Common Core Standards: This New York Times article assures teachers that Common Core standards are nothing to fear, and are actually already having benefits in schools.
- Primer on Common Core State Standards: Head to this site for a helpful primer on the basics of Common Core Standards.
- The Impact of Common Core State Standards on Your Student: Have you had parents asking you about Common Core? Not sure what to tell them? This article can help, by explaining the new system in an easily understandable way.
<urn:uuid:ec33abe6-14f9-4c17-91c2-689ce4e59cb8>
CC-MAIN-2021-43
https://rashidfaridi.com/2012/07/01/important-links-for-common-core-educators/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585181.6/warc/CC-MAIN-20211017175237-20211017205237-00270.warc.gz
en
0.916846
2,080
3.328125
3
My Senior Year Capstone Project The term “fast fashion” refers to the mass production of inexpensive clothing that is made as quickly as it lasts. On the surface, fast fashion seems like a great idea because its affordability allows everyone access to the latest trends. But buying clothes this cheap comes with an incredibly high cost. The production of fast fashion relies on exploitative labor conditions and destructive environmental practices. The “fast” in fast fashion refers to both how quickly these items are produced and how long they last. Unlike sustainable clothing, which focuses on long-lasting quality, fast fashion is made for quantity. For example, the popular clothing brand Zara introduces more than twenty collections a year. Yet now, in the golden age of online retailers, that number seems low. For example, “ultra-fast fashion” giant Fashion Nova launches 600-900 new styles every week. With such a rapid production rate, the shoppers’ desire to buy more is intensified in order to stay on trend. The clothing industry has not always been so speedy, however. As our world has rapidly changed, so has the way we dress. The twentieth century began as an era where clothing was bespoke (custom-made clothing that was tailored to fit a single individual), shifted to the factory produced ready-to-wear model, and ended with the first business models of fast fashion. In 1963, Zara, which is credited as the first successful fast fashion business model, was founded based upon four key points: vertical integration, data analysis, fast design-to-retail production, and outsourcing labor. These four points would become the foundation of fast fashion: a singular corporation that oversees all aspects of the textile production where the clothing is based on current trends and manufactured quickly with cheap labor. As you can see, fast fashion relies on speed. This need to be on trend comes at the cost of the environment and those that make the clothing. 
The extreme pollution of fast fashion factories and the amount of unnecessary garbage produced create a destructive cycle. The individuals who work in these factories are paid poorly, subject to horrible work conditions, and often exploited. It is not reasonable to expect everyone to suddenly boycott fast fashion, however, especially not when it has become the societal norm to be seen in a new outfit every day. Nor is giving up fast fashion an option for everyone; often, sustainable clothing is extremely expensive and not a lifestyle most people can afford. On top of that, some brands that claim to be extremely ethical are, in fact, not actually sustainable at all. However, there are small changes we can all make to live a more eco-friendly lifestyle. Whether it is properly recycling our textiles, or supporting the development of higher labor and environmental standards for suppliers, the Earth will be a better place because of it.
Fast fashion is a complex problem that spans multiple issues, from excessive air pollution, to exploitative labor conditions, to hidden economic costs for the everyday consumer. These concepts inspired me to spread awareness of the darker side of fast fashion through a series of art installations. Before we moved to remote learning this spring, I had planned to create a single installation that focused on the environmental cost of fast fashion. However, since I had to shift my plans, I decided to create a series of mini installations in my front yard. This series can be taken as a proposal for a larger, more permanent version in the future.
Installation 1: “Trashion”
For this first installation, I compiled clothes from my wardrobe that I had purchased in the past from fast fashion companies such as Brandy Melville, Victoria’s Secret, and Levi’s. I arranged them into symbols and words out on my front lawn, photographed each one, and then used Adobe Draw to emphasize each message.
Each photo in the slideshow shows the main statistic I wished to convey through my installation. One side of the fast fashion industry that often goes unacknowledged is the economic cost of the textiles after they have been disposed of. Because the synthetic materials of the clothing are produced cheaply, they do not last very long. Most people simply throw out their clothing when it becomes too worn out. However, this comes at a great economic cost. In California, taxpayers spend over 70 million dollars each year to dispose of these textiles in landfills. This is because 5% of all landfill in California is textiles. However, 95% of these textiles are recyclable or reusable (although unfortunately, once commingled, they become garbage). Imagine how much money we could save if we all properly recycled or donated our used clothing! Even the material we wear comes with a cost. 20,000 liters of water are used to produce one kilogram of cotton, which, in turn, emits 10-15 kilograms of carbon dioxide. On the other hand, it takes 17 liters of water to produce one kilogram of polyester, which emits 2.3 kilograms of carbon dioxide. This does not mean polyester is a better material than cotton, however. One kilogram of polyester also requires 1.5 kilograms of oil made from fossil fuels. Since polyester is the most commonly used fiber, nearly 70 million barrels of oil are used each year. The carcinogenic compounds of fossil fuel eventually break down into microplastics within the fiber. These dangerous microplastics eventually make their way into the ocean where they move through the food chain right back into what is served on our table. On top of that, it takes more than 200 years for polyester to decompose. Wondering how microplastics get into the ocean? One way is through laundry. Washing some types of fabrics can send tons of microplastics into the ocean. On a larger scale, the fashion industry produces 20% of global wastewater.
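The per-kilogram figures quoted above lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only: the fibre numbers are the ones cited in this post (using the midpoint of the 10-15 kg CO2 range for cotton), and the 2 kg batch size is a made-up example.

```python
# Back-of-the-envelope footprint comparison using the figures quoted above:
# (litres of water used, kg of CO2 emitted) per kilogram of fibre produced.
FIBRES = {
    "cotton":    (20_000, 12.5),  # midpoint of the quoted 10-15 kg CO2 range
    "polyester": (17,     2.3),
}

def footprint(fibre, batch_kg):
    """Return (water litres, CO2 kg) for batch_kg kilograms of a fibre."""
    water_per_kg, co2_per_kg = FIBRES[fibre]
    return water_per_kg * batch_kg, co2_per_kg * batch_kg

for fibre in FIBRES:
    water, co2 = footprint(fibre, 2.0)  # a hypothetical 2 kg batch
    print(f"{fibre}: {water:,.0f} L water, {co2:.1f} kg CO2")
```

Water-wise, cotton dominates by three orders of magnitude; carbon- and oil-wise, polyester carries its own costs, which is why neither fibre comes out clean.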
In fact, textile dyeing is the second largest polluter of water globally. This brings us to Installation 2…
Installation 2: “Macrofibers”
What if microplastics weren’t actually so… micro? If we could tangibly see and quantify the amount of plastic waste we’ve created, would we take more steps to minimize the damage? The answer to that question is probably yes, and was the inspiration for my second installation. Here, I draped a few articles of clothing over a piece of large driftwood that had washed ashore. The clothing represents the synthetic microfibers that contaminate the ocean because they are the very source of the pollution. The plastic microfibers that are shed from the synthetic clothing we wear end up in the water supply and account for 85% of the human-made material found along ocean shores. As aforementioned, these microfibers threaten marine wildlife and eventually end up in our food supply.
Installation 3: “The Human Cost of the Fast Fashion Industry”
With a demand to release hundreds of new styles every week, fast fashion retailers and manufacturers rely on cheap low-wage labor in order to produce clothes as fast as possible. In order to maximize profit, these laborers are often grossly underpaid (only being paid 19 cents for a shirt that will be sold for $19.99). Fast fashion’s demand for increased production often results in unrealistic deadlines, which in turn leads to dangerous working conditions. These conditions escalate the likelihood of worker injury. For example, apparel workers often work for piece rates (a system where the worker is paid per unit produced), which pressure them to work at hazardous speeds. A 2002 study showed that garment workers often developed back, kidney, and musculoskeletal problems due to their extended exposure to fabric dust and chemicals as well as their long shifts of sitting with little to no breaks.
While a lot of retailers outsource their labor internationally, fast fashion sweatshops exist in the United States as well. These sweatshops are especially concentrated in Los Angeles, California. There are a few reasons for this, including the fact that L.A. has a well-established cut and sew manufacturing base, a large number of fast fashion company headquarters (Forever 21, Fashion Nova, etc.), and its proximity to the North American and Asian Pacific Rim markets. And perhaps most importantly, the industry’s need for cheap labor relies on the city’s vast population of immigrant workers from Latin America and Asia. Despite being necessary to the fast fashion industry, this immigrant workforce is subject to exploitative and dangerous workplace conditions and treatment. These workers make less than minimum wage and work (on average) around 60 hours a week just to make ends meet. The cheap price of fast fashion comes at the cost of human lives. So, what can we do to help?
How to Reduce our Impact
Thankfully, there are several steps anyone can take to reduce some of the social and environmental risks of fast fashion:
- Buy less, wear more. Rewearing the clothes already sitting in your closet is always the most sustainable option!
- Instead of trashing old clothing, try these options.
  - thredUP: Save the earth AND make money! thredUP takes your old clothes and sells them on their online thrift store where you get a percentage of what is sold. Plus, they properly recycle any clothing item that is not sellable or in disrepair for you.
  - Earth911: This site allows you to find the nearest textile recycling location.
  - Donate to charities such as Goodwill.
- Read the label! When browsing different clothing items, look for organic cotton over synthetic fibers like polyester.
- Vote with your purchase.
By supporting brands that are sustainable (Reformation, Patagonia) or have pledged to increase their sustainability (IKEA, GAP), you send a clear message to companies that sustainability sells.
- Decrease the number of laundry loads. Try to rewear clothes if possible before sending them to the washing machine. Washing clothes at a lower temperature also uses less energy.
There are other broader ways we can reduce fast fashion’s global impact:
- Develop standards for designing garments that can be easily reused or recycled.
- Invest in the development of new fibers that lower the environmental effects of garment production.
- Establish higher labor and environmental standards for suppliers and create mechanisms to make supply chains more transparent.
- Make retailers/brands responsible for wage theft and wage hour violations as well as the unsafe conditions of their factories.
- Implement indoor heat standards for workers in the garment industry (especially in California).
- Actively promote state and citywide standards to fill current gaps in worker protections.
Fast fashion is a serious problem that only seems to be growing. Although it can feel overwhelming, it only takes one person to create lasting change. Any step in the right direction is helpful. If each of us chooses to implement just one of these steps into our lives, the world will become a better place because of it.
A huge thank you to everyone who has supported my Capstone project: Thank you Ms. Donald for allowing me to explore the Archives; Thank you Ms. Rampertab for your helpful guidance when researching; Thank you Mr. Freeman for sparking my interest in the environmental history of Choate; Thank you Mr. Davidson for allowing me to pursue this Capstone; Thank you to my parents and friends who constantly supported me and offered their help; and of course THANK YOU to Ms. Jessica Cuni for not only being the best Capstone adviser, but also for supporting me these past four years.
I could not have done it without you. And finally, thank you Reader. I hope you enjoyed this article as much as I enjoyed writing it! Let me know if you have any questions/comments/new ideas. I’m always open to connect!
<urn:uuid:bb2b6120-bc81-40d5-b0f1-0f8883457e75>
CC-MAIN-2021-43
https://skylarhansenraj.com/2020/05/26/capstone/?like_comment=48&_wpnonce=bc8efeee69
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00151.warc.gz
en
0.954691
2,412
3.515625
4
Click on the headline (link) for the full text. Many more articles are available through the Energy Bulletin homepage.
Permission to Transition: Zoning and the Transition Movement
Corinne Calfee and Eve Weissman, Planning & Environmental Law (American Planning Association)
Introduction: The Transition Movement
Communities are taking responsibility for their own economic futures. In response to a growing consensus that the days of cheap oil are numbered due to any combination of declining production, environmental constraints, political instability, or increasing demand, these communities are seeking to buffer themselves from economic shocks by strengthening local economic ties and reducing costs of living. Much of their work is focused on unlocking the existing value of yards, homes, and rooftops by using them more efficiently. As part of that effort, these communities are generating efficiencies by allowing residents to share anything from cars to kitchens. However, these innovative projects can be blocked by existing local regulations, primarily zoning. We explore below ways in which these codes can be carefully pared back to allow greater choice and flexibility in transportation, housing, food, and economic opportunity.
One of the simplest steps toward economic transition is to shrink the geographical distance between where our goods come from and where they are consumed. The growing movement of urban agriculture in its various forms strengthens the local supply chain, reducing reliance on increasingly scarce fossil fuels to feed our communities. … Amend municipal zoning codes to define and permit categories of urban agriculture by adding urban agriculture uses as permitted uses in existing zones, or creating new zones for specific types of urban agriculture.
When many cities in the United States first promulgated zoning codes in the early 1900s, agriculture was widely considered a strictly rural activity.[5] Consequently, agriculture was permitted mostly in industrial zones and to a lesser extent in commercial zones, and was almost entirely prohibited in residential areas.[6] Today, given large tracts of vacant city land and the centrality of urban agriculture in building local economies, municipal codes can be amended to accommodate, or even promote, urban agriculture. City zoning policy can facilitate the broadest possible opportunities for urban agriculture programs without creating a nuisance for the surrounding neighbors and community. A number of cities have already taken this step. … Simplify permitting procedures to facilitate the sale of local food. A logical corollary to enabling food cultivation and production throughout a city is to allow people to sell the food they grow.[21] Many cities require that people wishing to sell their crops obtain conditional use permits, which can cost thousands of dollars and involve an uncertain political process in which the local agency may never grant the permit sought.[22] Thus, conditional use permits can be prohibitively expensive and time-consuming for small growers, creating significant barriers to the sale of locally produced food. Given the low financial margin of most urban food production, especially in contrast to its high social value, cities can consider requiring no more than a simple and inexpensive administrative use permit for relatively small-scale food selling enterprises. …
Shared Housing
Shared housing provides another opportunity to build community resilience and reduce reliance on fossil fuels and includes a variety of living arrangements such as accessory dwelling units (including granny flats, in-law units, second units, and accessory apartments), clustered homes, cohousing communities, and eco-villages.
These arrangements can reduce waste, lower energy needs, reduce traffic, increase transit use, decrease car dependence, and reduce the need for residential and public parking.[54] The policies described below can also decrease housing prices by removing regulatory requirements for larger homes and yards. Another way to facilitate transition is to more fully use existing space within urban areas. Rooftops present an obvious opportunity because cities generally contain acres upon acres of empty or underutilized rooftop space. Roofs have been successfully used for agriculture, water collection and filtering, and power generation. At the most basic level, planning tools can be used to reduce barriers to rooftop use. Going further, cities may make policy choices to incentivize rooftop utilization. …
Car Sharing
Car sharing presents an opportunity for transportation at a lower cost to individuals, more efficient use of existing resources, decreased reliance on carbon resources, and less congestion. When people share cars, it reduces the cost for each individual to just a fraction of the cost of owning and operating a personal vehicle. More surprisingly, car sharing takes cars off the road, thereby decreasing traffic, reducing vehicle miles traveled, and reducing gasoline consumption, an effect amplified by the fact that many car-sharing organizations use lower-emission vehicles.
The changes ahead mean that our laws and infrastructure, designed for another time, will increasingly place an unnecessary burden on our citizens and local economies. Strategically loosening these restraints to permit efficiency, enterprise, and sharing can give private citizens the freedom to adapt to new circumstances, and local planning expertise and action will be indispensable in creating the conditions to build this new resilience.
Corinne Calfee is a real estate attorney at the SSL Law Firm in San Francisco.
Her practice focuses on land use entitlements and litigation, particularly under the California Environmental Quality Act. Eve Weissman is a second-year law student at the University of California, Berkeley School of Law. They extend a special thank you to Janelle Orsi and the Sustainable Economies Law Center, whose initial research and recommendations laid the groundwork for this article. (May 2012 issue)
Suggested by Jon Freise who writes: “An urban planner friend forwarded this article published by the American Planning Association to me about how to incorporate some of the goals of Transition Towns into the zoning code. I thought it would be very helpful to Transition Initiatives across the country to see how to translate some of the ideas of Transition into zoning language and law. And it would be great to help Initiatives recruit some zoning experts. Unfortunately it is behind a pay wall. I thought parts of this might be publishable on the energy bulletin as fair use. And Transition US might ask the APA if they can reprint the article.”
Happiness is a glass half empty
Oliver Burkeman, Guardian
Be positive, look on the bright side, stay focused on success: so goes our modern mantra. But perhaps the true path to contentment is to learn to be a loser … Failure is everywhere. It’s just that most of the time we’d rather avoid confronting that fact. Behind all of the most popular modern approaches to happiness and success is the simple philosophy of focusing on things going right. But ever since the first philosophers of ancient Greece and Rome, a dissenting perspective has proposed the opposite: that it’s our relentless effort to feel happy, or to achieve certain goals, that is precisely what makes us miserable and sabotages our plans. And that it is our constant quest to eliminate or to ignore the negative – insecurity, uncertainty, failure, sadness – that causes us to feel so insecure, anxious, uncertain or unhappy in the first place.
Yet this conclusion does not have to be depressing. Instead, it points to an alternative approach: a “negative path” to happiness that entails taking a radically different stance towards those things most of us spend our lives trying hard to avoid. This involves learning to enjoy uncertainty, embracing insecurity and becoming familiar with failure. In order to be truly happy, it turns out, we might actually need to be willing to experience more negative emotions – or, at the very least, to stop running quite so hard from them. In the world of self-help, the most overt expression of our obsession with optimism is the technique known as “positive visualisation”: mentally picture things turning out well, the reasoning goes, and they’re far more likely to do so. Indeed, a tendency to look on the bright side may be so intertwined with human survival that evolution has skewed us that way. … It doesn’t necessarily follow, of course, that it would be a better idea to switch to negative visualisation instead. Yet that is precisely one of the conclusions that emerges from Stoicism, a school of philosophy that originated in Athens a few years after the death of Aristotle, and that came to dominate western thinking about happiness for nearly five centuries. For the Stoics, the ideal state of mind was tranquility – not the excitable cheer that positive thinkers usually seem to mean when they use the word “happiness”. And tranquility was to be achieved not by chasing after enjoyable experiences, but by cultivating a kind of calm indifference towards one’s circumstances. 
One way to do this, the Stoics argued, was by turning towards negative emotions and experiences: not shunning them, but examining them closely instead. (15 June 2012)
InfraInput – A Website for Users to Report on Infrastructure
Parfait Gasana and Peng Zhou, InfraInput
Challenge: Public Infrastructure
In this new decade, the United States arrives at a critical juncture with an aging, overused, and neglected public infrastructure system from airports to water pipes. To add to this situation, the nation faces tight federal and state budgets, immense global competition, and a growing population. And yet, the general public, perhaps the most essential stakeholder, being both everyday users and taxpaying owners, continues to be uninformed and unengaged in the process of delivering and managing public infrastructure.
Opportunity: User Input
InfraInput is a one-stop, database-driven website for users to inform the public of their issues and ideas across an array of infrastructure facilities. Now, complaint or feedback sections of utilities, transportation agencies, public works, state departments, and others are combined into one simple platform for documentation and dissemination. Altogether, this process will aid in the maintenance and rehabilitation of public facilities, helping to rebuild America.
Issues | Examples: Bent utility and telephone poles, foul sewer smells, underutilized transit lines, delayed train systems, functionally obsolete bridges, inadequate airport ground access, molding school buildings, abandoned parks and greenspaces, deteriorating highways …
Ideas | Examples: electronic fareboxes, solar panel bus shelters, recycled material composites for utility poles, privatized bridge inspections firm, energy efficient HVAC systems in libraries, broadband coverage in underserved areas, high speed rail at crucial corridors …
InfraInput also serves other objectives:
- Allows all visitors to see other issues and proposed solutions within and between different geographies.
- Provides real-time demand side information to public agencies and managers of operations and maintenance.
- Begins a national conversation from the gray pavement to the legislative gavel around a critical subject matter.
Parfait Gasana is a graduate student in Economics at the University of Illinois at Urbana-Champaign with a research focus on transportation and infrastructure, a trained academic and public policy researcher, and a self-taught website and database programmer. Peng Zhou is a graduate student in Construction Management, Civil Engineering at the University of Illinois at Urbana-Champaign with professional experience in construction management, also trained in surveying and engineering software and tools.
Parfait Gasana, one of the developers of the website, sent us a letter about it: … My co-developer partner and I, both graduate students at the University of Illinois, developed a web application, InfraInput.org, a one-stop public infrastructure reporting site that allows virtually anyone (resident, business, utility worker, expert) to report issues, observations, or suggestions concerning their public works from airports to water systems with a section devoted to Energy (specifically, electricity, natural gas, etc.).
Equipped with HTML, CSV, PDF, and geocoded Google map output, now everyday users can lend their eyes, ears, and intuition for the attention of real-time, location-specific issues to public managers and policymakers, like inefficient substations, antiquated distribution and transmission networks, costly gas supply and delivery, and others. The civil engineering literature asserts the U.S. has an infrastructure that is marked by aging, overuse, (in some cases) mismanagement, and overall neglect. We believe in turning neglect into attention by engaging the core stakeholders (the general public), which ultimately evolves into investment. Please check out this crowdsourcing platform and pass it along to colleagues, members, your audience, or other interested parties who will contribute. Entirely a student-led bootstrapped endeavor, with no outside affiliation, InfraInput is free, user-friendly, available in mobile (m.infrainput.org), requires no registration, and can begin the needed conversation to help us live sustainably past the peak energy crisis.
<urn:uuid:42dade31-17c1-44ba-b050-b38bc4db1210>
CC-MAIN-2021-43
https://www.resilience.org/stories/2012-06-24/transition-solutions-june-24/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00710.warc.gz
en
0.922237
2,607
2.578125
3
53 relations: Abstract algebra, Addison-Wesley, Area, Associative algebra, Bijection, Change of basis, Commutative property, Complex analysis, Complex number, Connected space, D. Reidel, Determinant, Diagonal matrix, Differential form, Dot product, Dual number, Exterior algebra, General linear group, Hyperboloid, Hyperplane, Idempotent matrix, Identity component, Identity matrix, Imaginary unit, Invertible matrix, Involutory matrix, Linear map, Mathematics, Mathematics Magazine, Matrix (mathematics), Matrix addition, Matrix multiplication, Motor variable, Nilpotent matrix, Paraboloid, Polar decomposition, Projection (linear algebra), Rafael Artzy, Real line, Real number, Ring (mathematics), Ring homomorphism, Rotation (mathematics), Shear mapping, SL2(R), Special linear group, Split-complex number, Split-quaternion, Squeeze mapping, Subring, ..., Unit (ring theory), University of Chicago Press, Vector space.
In algebra, which is a broad division of mathematics, abstract algebra (occasionally called modern algebra) is the study of algebraic structures. Addison-Wesley is a publisher of textbooks and computer literature. Area is the quantity that expresses the extent of a two-dimensional figure or shape, or planar lamina, in the plane. In mathematics, an associative algebra is an algebraic structure with compatible operations of addition, multiplication (assumed to be associative), and a scalar multiplication by elements in some field. In mathematics, a bijection, bijective function, or one-to-one correspondence is a function between the elements of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set.
In linear algebra, a basis for a vector space of dimension n is a set of n vectors, called basis vectors, with the property that every vector in the space can be expressed as a unique linear combination of the basis vectors. In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers, and i is a solution of the equation x^2 = -1. In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint nonempty open subsets. In linear algebra, the determinant is a value that can be computed from the elements of a square matrix. In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero. In the mathematical fields of differential geometry and tensor calculus, differential forms are an approach to multivariable calculus that is independent of coordinates. In mathematics, the dot product or scalar product (a term also used more generally to mean a symmetric bilinear form, for example for a pseudo-Euclidean space) is an algebraic operation that takes two equal-length sequences of numbers and returns a single number. In linear algebra, the dual numbers extend the real numbers by adjoining one new element ε with the property ε^2 = 0. In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogs. In mathematics, the general linear group of degree n is the set of invertible matrices, together with the operation of ordinary matrix multiplication.
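The 2 × 2 determinant (ad - bc) and the defining property of the dual numbers (ε^2 = 0) can both be illustrated in a few lines. This sketch uses my own tuple-based representations, not anything from the source:

```python
def det2(m):
    """Determinant of a 2x2 matrix m = ((a, b), (c, d)): ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

assert det2(((1, 2), (3, 4))) == -2
assert det2(((5, 0), (0, 7))) == 35  # diagonal matrix: product of the diagonal

def dual_mul(x, y):
    """Multiply dual numbers represented as (real, epsilon) coefficient pairs.

    (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, because eps**2 = 0.
    """
    a, b = x
    c, d = y
    return (a * c, a * d + b * c)

# eps * eps = 0, the defining property of the dual numbers:
assert dual_mul((0, 1), (0, 1)) == (0, 0)
```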
In geometry, a hyperboloid of revolution, sometimes called circular hyperboloid, is a surface that may be generated by rotating a hyperbola around one of its principal axes. In geometry, a hyperplane is a subspace whose dimension is one less than that of its ambient space. In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. In mathematics, the identity component of a topological group G is the connected component G0 of G that contains the identity element of the group. In linear algebra, the identity matrix, or sometimes ambiguously called a unit matrix, of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere. The imaginary unit or unit imaginary number is a solution to the quadratic equation x^2 + 1 = 0. In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that AB = BA = In, where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. In mathematics, an involutory matrix is a matrix that is its own inverse. In mathematics, a linear map (also called a linear mapping, linear transformation or, in some contexts, linear function) is a mapping between two modules (including vector spaces) that preserves the operations of addition and scalar multiplication. Mathematics (from Greek μάθημα máthēma, "knowledge, study, learning") is the study of such topics as quantity, structure, space, and change. Mathematics Magazine is a refereed bimonthly publication of the Mathematical Association of America. In mathematics, a matrix (plural: matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together.
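Several of the matrix classes just defined (identity, idempotent, involutory) are easy to verify on concrete 2 × 2 examples. A small pure-Python sketch with a hand-rolled multiplier; the helper names and example matrices are mine:

```python
def matmul2(m, n):
    """Multiply two 2x2 matrices given as ((a, b), (c, d)) tuples."""
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

I = ((1, 0), (0, 1))   # identity matrix: ones on the diagonal, zeros elsewhere
P = ((1, 0), (0, 0))   # idempotent: multiplied by itself, it yields itself
A = ((0, 1), (1, 0))   # involutory: its own inverse

assert matmul2(I, P) == P and matmul2(P, I) == P  # identity leaves P unchanged
assert matmul2(P, P) == P                         # P is idempotent
assert matmul2(A, A) == I                         # A is involutory
```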
In mathematics, matrix multiplication or matrix product is a binary operation that produces a matrix from two matrices with entries in a field, or, more generally, in a ring or even a semiring. In mathematics, a function of a motor variable is a function with arguments and values in the split-complex number plane, much as functions of a complex variable involve ordinary complex numbers. In linear algebra, a nilpotent matrix is a square matrix N such that N^k = 0 for some positive integer k. The smallest such k is sometimes called the index of N. More generally, a nilpotent transformation is a linear transformation L of a vector space such that L^k = 0 for some positive integer k. In geometry, a paraboloid is a quadric surface that has (exactly) one axis of symmetry and no center of symmetry. In mathematics, particularly in linear algebra and functional analysis, the polar decomposition of a matrix or linear operator is a factorization analogous to the polar form of a nonzero complex number z as z = re^(iθ) with r > 0. In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that P^2 = P. Rafael Artzy (23 July 1912 – 22 August 2006) was an Israeli mathematician specializing in geometry. In mathematics, the real line, or real number line is the line whose points are the real numbers. In mathematics, a real number is a value of a continuous quantity that can represent a distance along a line. In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. In ring theory or abstract algebra, a ring homomorphism is a function between two rings which respects the structure. Rotation in mathematics is a concept originating in geometry. In plane geometry, a shear mapping is a linear map that displaces each point in a fixed direction, by an amount proportional to its signed distance from a line that is parallel to that direction.
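The nilpotent, projection, and shear definitions above can likewise be checked on concrete 2 × 2 matrices. An illustrative sketch using tuple-of-tuples matrices of my own devising:

```python
def matmul2(m, n):
    """Multiply two 2x2 matrices given as ((a, b), (c, d)) tuples."""
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

# Nilpotent: N is nonzero but N @ N == 0, so the index of N is 2.
N = ((0, 1), (0, 0))
assert matmul2(N, N) == ((0, 0), (0, 0))

# A projection satisfies P @ P == P (here, projection onto the x-axis).
P = ((1, 0), (0, 0))
assert matmul2(P, P) == P

# A shear ((1, k), (0, 1)) sends (x, y) to (x + k*y, y): a displacement in a
# fixed direction, proportional to the signed distance from the x-axis.
k = 3
S = ((1, k), (0, 1))
x, y = 2, 5
assert (S[0][0] * x + S[0][1] * y, S[1][0] * x + S[1][1] * y) == (x + k * y, y)
```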
In mathematics, the special linear group SL(2,R) or SL2(R) is the group of 2 × 2 real matrices with determinant one, i.e. the matrices (a b; c d) with a, b, c, d real and ad - bc = 1. In mathematics, the special linear group of degree n over a field F is the set of matrices with determinant 1, with the group operations of ordinary matrix multiplication and matrix inversion. In abstract algebra, a split complex number (or hyperbolic number, also perplex number, double number) has two real number components x and y, and is written z = x + yj, where j^2 = 1. In abstract algebra, the split-quaternions or coquaternions are elements of a 4-dimensional associative algebra introduced by James Cockle in 1849 under the latter name. In linear algebra, a squeeze mapping is a type of linear map that preserves Euclidean area of regions in the Cartesian plane, but is not a rotation or shear mapping. In mathematics, a subring of R is a subset of a ring that is itself a ring when binary operations of addition and multiplication on R are restricted to the subset, and which shares the same multiplicative identity as R. For those who define rings without requiring the existence of a multiplicative identity, a subring of R is just a subset of R that is a ring for the operations of R (this does imply it contains the additive identity of R). In mathematics, an invertible element or a unit in a (unital) ring is any element that has an inverse element in the multiplicative monoid of the ring, i.e. an element u such that uv = vu = 1 for some element v. The set of units of any ring is closed under multiplication (the product of two units is again a unit), and forms a group for this operation. The University of Chicago Press is the largest and one of the oldest university presses in the United States. A vector space (also called a linear space) is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars.
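A squeeze mapping is a handy concrete member of SL(2,R): its determinant is exactly 1, which is why it preserves area. The split-complex rule j^2 = 1 can be checked the same way. A sketch with representations of my own choosing:

```python
def det2(m):
    """Determinant of a 2x2 matrix m = ((a, b), (c, d)): ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

# Squeeze mapping ((k, 0), (0, 1/k)): determinant k * (1/k) = 1,
# so it lies in SL(2, R) and preserves Euclidean area.
k = 2.0
S = ((k, 0.0), (0.0, 1.0 / k))
assert det2(S) == 1.0

def split_mul(p, q):
    """Multiply split-complex numbers (x, y) ~ x + y*j, where j*j = +1.

    (x + yj)(u + vj) = (xu + yv) + (xv + yu)j.
    """
    x, y = p
    u, v = q
    return (x * u + y * v, x * v + y * u)

assert split_mul((0, 1), (0, 1)) == (1, 0)  # j * j = 1, unlike i * i = -1
```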
The World's Weirdest CEPHALOPODS Written by Jonathan Wojcik You probably know by now that the Cephalopoda - squids, octopuses, cuttlefish and their other tentacled brethren - are intelligent, sophisticated mollusks with an arsenal of clever defense mechanisms, but the strangeness of this ancient group goes well beyond the common octopuses or the legendary giant squids. To kick this off with some of Mother Nature's original home brewed Nightmare Fuel, even I'm spooked by the cartoonishly human eyes of Mesonychoteuthis hamiltoni, also known as the "Colossal squid." Theoretically growing even larger than the famed "giant" squid, with shorter tentacles but much bigger, broader, bulkier bodies, these ferocious beasts hunt in freezing cold waters where they may prey on large fish and other squid. More viciously armed than other giants, the suckers of a colossus are equipped with a variety of nasty claws, including the rings of jagged teeth present in other squid, unique swiveling hooks and these tough, three-pronged talons. Terrifying though they seem, these titans are adapted in many ways for the safety of their young ones; thought to give live birth, females may retain thousands of eggs inside their massive mantles and even have a dark inner membrane to block out the luminous glow of the developing embryos, cloaking them from hungry whales and other dangers until they're ready to hatch. At the extreme opposite of the size spectrum, Idiosepius notoides is the tiniest known cephalopod, reaching an adult length under an inch. Adapted to warm, shallow waters, adhesive cells along their backs allow them to cling to blades of sea grass while leaving their tentacles free. Like insects in an aquatic meadow, they attach their eggs in rows to grassy shoots, the female keeping a close watch over them until they hatch.
Octopuses of the genus Grimpoteuthis earn the common name "dumbo octopus" for their comically ear-like, flapping fins, resembling a cross between the famous Disney elephant and a gelatinous umbrella. Dwelling in the dark and drably colored deep-sea abyss, they lack the advanced color-changing abilities of other octopods but may confuse predators with transparency or bioluminescence. Instead of suckers, their thickly webbed tentacles are lined with hair-like sensory filaments called cirri, which are thought to brush small particles of food toward their beakless mouth. Members of this group have been found at the greatest depths of any octopus, as deep as 23,000 feet below the sea's surface. Once classified with the dumbos as species of Grimpoteuthis, the "flapjack" or "pancake" octopuses are now classified as Opisthoteuthis, though the initial confusion is understandable. The two groups have a great deal in common, though these little cuties tend to have a thicker mantle fusing the head and tentacles into one smooth, blobby dome, capable of stretching into a nearly flattened saucer-shape. This flattening tactic likely helps to make the animal less conspicuous, both to predators and, perhaps, to prey, though their feeding habits have yet to be observed. If you ask me, as an armchair biologist with no formal education in a scientific field, their physiology strongly suggests a diet rich in Pac-man. I hear you giggling. It's easy for an octopus to camouflage itself amongst aquatic vegetation and colorful reefs, but Amphioctopus marginatus frequents shallow, sandy waters where the monotonous scenery is often only broken by the sunken shells of coconuts. By bunching itself into a tight, dark ball, it takes on the appearance of this fallen fruit and "tip toes" away as though rolling in the current. The only other species to demonstrate this bipedal motion is Abdopus aculeatus, which prefers to walk around in the guise of seaweed.
This is not, however, the only coconut-related defense by this octopus. When they're not role-playing coconuts themselves, marginatus species have also been observed carrying one or both halves of an empty coconut shell in their suckers as a sort of armored "suit," one of the first recorded instances of tool use by invertebrates. Totally transparent except for their eyes and internal shells, Cranchid or "glass" squid defend themselves by puffing up like a balloon and withdrawing their heads into their bodies like some wonderful squid-turtle-pufferfish chimera. Some species, like Cranchia scabra here, are even covered in tiny, thorny denticles to deter predators. Similar species of Cranchiidae have their own distinct arrangement of spines, while some lack thorns entirely. The adorable goofball above is a fairly recent discovery and a bit of an enigma, but may use its polka dots in some way to confuse attackers. Every year from March to May, Japan's Toyama Bay comes alive with the dazzling blue light of millions, even billions of tiny bodies in the midst of a mating frenzy. While light production is hardly unusual for cephalopods, Watasenia scintillans are thought to rely on it the most for communication, and may be the only squid able to see in full color. Millions of years ago, the seas were dominated by shelled cephalopods including both the subclass Nautiloidea and subclass Ammonoidea, known to reach tremendous sizes. Only the nautiloids survived complete extinction, and are represented by only six species alive today. During daylight hours, they rest in rocky crevices, shelled side out, between 100 and 300 meters deep. Every single night, they ascend to nearly the surface to hunt small fish and crustaceans. They can nonetheless survive for months without food, and are known to live for up to 20 years - literally ten times longer than most other cephalopods.
This slow metabolism is due in part to their energy-saving method of locomotion; gases exchanged between the hollow chambers of the shell allow the animal to easily adjust its own buoyancy. These ancient creatures have more primitive, sucker-less tentacles than their relatives, but may have over ninety at any given time. They not only prey on small animals but will scavenge the discarded exoskeletons of recently molted crustaceans, maintaining healthy growth of their own shells. Found off the coast of Australia, the four species in the genus Tremoctopus are known as "blanket" octopuses for the female's unique defensive mechanism; a long, billowing membrane of skin she can unfurl like a cape as she soars majestically away from her attackers. In addition to making her appear larger and more threatening, the cloak serves as an effective decoy, easily tearing away without harm to the animal. Amazingly, the males of this genus were only identified in the early 2000s, thanks to being less than a hundredth the size of the female. After pumping a specialized penis-like tentacle with sperm, the little guy snaps off the appendage in the female's body and dies shortly afterwards. Before developing their parachute trick, young females as well as males are often seen hanging closely around the toxic Portuguese Man O' Wars for protection, immune to their paralyzing and agonizing venom. When startled, they may frantically tear apart one of the jellies to create a deadly cloud of stinging tendrils, or simply carry a piece with them as a biological weapon. (Image credits: MBARI; Tolweb.org.) Discovered fairly recently and poorly understood, this eerie deep-sea phantom is believed to be an adult member of the genus Magnapinna, otherwise known only from juveniles. Aptly nicknamed, its curiously "elbowed" tentacles trail nearly twenty feet from its broad, big-finned body, drifting in the water more like the tentacles of a jellyfish than a cephalopod.
It's likely that the animal simply drifts in place, waiting for prey to bump into its dangling web. As much adoration as I have for all of these animals, I also have to admit a certain nagging sensation of horror when I look at images like these; to contemplate pale, alien shapes adrift in the weightless darkness of the deep, inhuman thought behind lidless, glassy eyes as they grope for equally alien prey, makes for imagery both hauntingly beautiful and positively bone-chilling in a fascinatingly primal way. I'm not sure that poor little Lovecraft could have handled knowledge of the long-arm squid. Members of the family Histioteuthidae or "Jewel squid" have also been referred to as "cock-eyed squid," for in all known species the tubular, bulging left eye is at least twice the diameter of the flat, sunken right eye. Such extreme asymmetry between visual organs is almost unheard of in other animals, so of what use is it here? The answer is interestingly complicated. In the perpetual night of the deep-sea abyss, predators may be so sensitive to light that solid objects still stand out darker against the faint remnants of sun that trickle down from above. To protect itself from such discriminating senses, the Jewel Squid practises what is known as bioluminescent crypsis - producing just enough light of its own to eliminate the contrast. They light up to blend in with darkness, a trick that would never work up here on the relatively bright and shiny surface world. In order to keep track of how brightly they should glow throughout the day, that specialized bug-eye is constantly aimed skyward and finely tuned to the rays of our yellow sun. Additionally, this makes the blue or red lights of sea creatures stand out like a sore thumb, thwarting any predators that employ their own cloaking system. Its other, smaller eye, directed downward, scans for tiny fish and shrimp that might make a good meal.
Members of the Decapodiforms, the clever and curious Cuttlefish rival even the octopuses in their mastery of color, even animating patterns on their bodies to confuse and mesmerize simple-minded prey. Perhaps the most colorful of all, however, is Metasepia pfefferi, looking more like some sort of orchid than an animal. This intense pattern has been confirmed in recent years to warn predators of its highly toxic flesh, making it one of only three poisonous cephalopods known to man. Stranger still, Flamboyant cuttlefish can't float quite as effortlessly as other decapod mollusks, and spend much more of their time walking along the seafloor with a quadrupedal gait, two muscular skin-flaps serving as "hind legs" while its outermost pair of arms act as forelegs. It feels almost unnatural to watch a tentacled mollusk crawl around in such a vertebrate-like fashion, though it's far outweighed by the sheer preciousness of those little tenta-footies. It's stepping! What kind of cuttlefish steps!?! That's silly, Flamboyant cuttlefish! Literally meaning "Vampire Squid from Hell," this seldom-seen haunter in the dark is the only living species in its ancient order, Vampyromorphida, believed to have been larger and more abundant during the mid-Jurassic period before gradually retreating to the protective darkness of the abyss. Adapted to expend as little energy as possible, they are among the few complex animals comfortable in Oxygen Minimum Zones or Shadow Zones, areas of poorly oxygenated, effectively stagnating seawater. Moreso than any other deep-sea cephalopod, Vampyroteuthis defends itself with a devilish array of light tricks; like the Jewel squid, tiny photophores speckle its body to counteract ambient light.
Brighter lights on its tentacle tips can give the impression of a whole school of smaller organisms, and the massive, lidded "headlights" at the end of its mantle can give it a larger, more threatening appearance or the false impression of retreat as they shrink and close off. When all else fails, it may flip its entire "umbrella" inside out, becoming a dense, prickly-looking grey ball to confuse animals who thought they were chasing some sort of squid a moment ago. As though it couldn't possibly get any odder, the vampire squid's feeding habits were only finally deduced in 2012 by researchers at MBARI. Long assumed to feed on small crustaceans, analysis of specimens in both the field and the laboratory revealed that the so-called "vampire from hell" has quite possibly never harmed one hair on another living thing. While every other cephalopod known to man is a strict predator, these phantoms feed entirely on globs of decomposing organic waste known as marine snow. Extending one of those unique, stringy filaments, they trap tiny particles of detritus in rows of nearly microscopic bristles, scrape it off in their tentacles, coat the nutritious refuse in mucus and ferry it to their soft, toothless mouths with their finger-like cirri; the sea's most elaborate and frightening-looking garbage disposals. Beautiful. Closely related to Tremoctopus, female "Argonauts" are only ten to twenty times larger than the males but far stranger in appearance. Unlike any other genus of octopus, females are able to construct a shell-like egg case remarkably similar to the true shells of their ancestral ammonites. Secreted by a pair of highly modified tentacles, this calcareous, papery structure gives the animal its common name, "paper nautilus." A bubble of gas gives this false shell buoyancy, and the mother faces outward to ward away predators with her venomous bite.
Bizarrely, some species have been seen attaching themselves to the tops of live jellyfish, possibly feeding parasitically off their gastric contents and adding an extra layer of defense to their mobile nursery. So, if it's a totally different process with totally different materials, why does the egg case of this octopus so closely resemble the interior of an ammonite or nautiloid shell? While it could very well be a case of convergent evolution, it's also been theorized that these creatures once carried the old shells of other mollusks in the manner of a hermit crab, and secreted the "paper" shell as a lining for these borrowed homes. After most of the shell-bearing cephalopods went extinct, the secretion may have continued to be useful as an egg case. There are many animals that closely resemble other, more dangerous creatures as a defensive mechanism, but the Indonesian Thaumoctopus mimicus is the first animal ever discovered to imitate both the appearance and behavior of many different species for different situations. Here, a mimic octopus creates a false sea snake by taking on its coloration and hiding all but two tentacles. This is often employed as a defense against damselfish, which sea snakes have been known to hunt. By flattening itself and skimming swiftly along the seafloor, the octopus takes on the appearance of a flounder, a fish that the mollusk's common predators might find distasteful. Another handy disguise is that of a Stomatopod or "Mantis Shrimp" - these crustaceans are well known to man and animal alike for the incredible power of their bladed claws, capable of shattering glass or snipping through bone. This is just a small peek at the mimic's bag of tricks. Over a dozen imitations have been observed in a single specimen, and many more may yet be discovered.
By rapidly changing both its form and direction, it easily fools predators into thinking they've lost track of their prey, and its preferred murky waters make the deception even tougher to spot - tough enough that the species eluded human notice until its formal discovery in 1998. Sadly, this incredible creature may already face extinction through poaching - due entirely to the prices they can fetch in the exotic pet trade. They are short-lived in captivity, many more die in transit, and captive breeding is thus far unheard of. Hopefully, populations of these doppelgangers may persist in still-unexplored regions of the tropical sea, and many more species may still await discovery. Perhaps someone you know has been an octopus all along... Perhaps everyone but you.
This video, written and produced by Maitreyi Menon, Isabel Aurichio, and Judy Fisher during the First-Year Experience (FYE) section of FG110 Introduction to Feminist & Gender Studies at Colorado College during Block 1 2016, explores constructions of gender in comic culture. Our guide, Hannes, was one of the exhibit curators and began the tour with some background information on the museum. The Schwules Museum* was founded 30 years ago by three white German gay men who were working at the Berlin Museum and wanted to establish a permanent museum devoted to gay history. "Schwule" means gay in German, and Hannes noted that, similarly to "gay" in English, this word had been (and continues to be) used in a derogatory manner, but that many in the LGBTQIA community, including the museum, were reclaiming it. Hannes also told us why there is an asterisk following the museum name. In 2008, the Board of Directors decided that they wanted to open up the museum for the rest of the LGBTQIA community, considering that it had focused primarily on the history of white, cisgender, gay men up to that point. Borrowed from something the trans community was doing in the U.S., the asterisk denotes that even though the name of the museum is specific to gay men, the museum itself is inclusive of many queer identities. This strategy can be problematized through an examination of liberal politics. Many organizations that are marginalized sometimes feel they must expand the scope of their organization either to give the appearance of progress or out of a genuine desire to include other marginalized people. These both stem from liberal understandings of "inclusivity" and "diversity." Black feminists have been critical of this notion for years, especially concerning white feminism.
First, because other marginalized groups often have their own thing going on (Hannes mentioned that German lesbians have a more extensive archive that predates the Schwules Museum* by ten years), and second, because assimilation is not a tactic that helps the most marginalized, but rather a tactic that helps those complicit in existing power structures to maintain power. Additionally, “trans*” has been changed in the U.S., because it implies that anyone who is not binary/passing/post-op is conditionally trans. In many ways, however, this is working quite well. For example, all the signs in the Superqueeroes exhibit use the “gender gap,” which resists how certain German words are gendered by replacing part of the word with an underscore. In addition, the exhibit featured several trans artists and the rest seemed to be almost equally about lesbians and gay men. Another exhibit that we stopped in briefly at the end was art entirely done by trans artists. While not perfect, this is in many ways a step above similar attempts in the U.S. Although most of the comics in the exhibition are actually American, there were some interesting historical parallels that seemed relevant to Germany and other parts of Europe. Hannes told us about the comic burnings between 1945 and 1955 in the U.S., during which people would publicly burn piles of comic books. Much of this stemmed from author Frederic Wertham, who wrote Seduction of the Innocent in order to argue that comic books were turning the children into criminals. While Hannes didn’t mention this explicitly, his discussion made me think about the Nazi book burnings happening around the same time. As Erik Jensen writes in “The Pink Triangle and Political Consciousness: Gays, Lesbians, and the Memory of Nazi Persecution,” “While the American gay community often employed the Jewish Holocaust as a template for understanding the persecution of homosexuals, the German gay community generally avoided this comparison” (342). 
The collective memory of American gays concerning the treatment of homosexuals during the Holocaust is very different from the German understanding. Perhaps that is why this parallel seemed so obvious to me. By Queers, For Queers Throughout the exhibition, there were two main categories of comics that were shown: comics that were written by queers for queers, in which a significant part of the story line has to do with queer identity, and mainstream comics that incorporate queer characters as a side note to a larger plot line. These categories are both significant, especially given the influence of the Comics Code Authority (CCA). Between 1955 and 2011, the CCA (a private board that governed all the mainstream publishing houses) dictated what types of content could be in comics. The list of banned subjects included any type of explicit sexuality, drugs, violence, the words "horror" and "terror," undead characters, and critiques of military/police/judges. Further, homosexual content was not allowed by the CCA until 1989. In response, the 1960s brought about an explosion of underground comics that used "comix" instead of "comics" to denote the change. Within this underground movement, there was yet another split as queers and women grew tired of the sexism, racism, and heterosexuality that dominated the underground scene. Comix publishers, such as "Wimmen's Comix" and "Tits & Clits" were founded to counter this phenomenon. An important note is that in 1972, a woman named Trina Robbins created the first gay comic "Sandy Comes Out." As our friends at the ADEFRA meeting pointed out, lesbians are always at the beginning of a movement, despite dominant groups trying to push them from the front lines. In the newer era of web comics, one person making a name for herself is Scout Tran-Caffee (Dax). She is a non-binary, trans woman of color who has created comics that transcend the page and are only possible in the virtual parallel universe.
This unapologetic love for the trans experience is amazing, especially when compared to the stale decades old statements that Marvel is trying to make about sexuality. There is an absolutely striking difference between the levels of political thought and storytelling in the mainstream comics and comix. The former use a quite different parallel universe in which gay sexual encounters exist between superheroes as a way to simultaneously draw in queer readers while retaining their (presumably) heterosexual audience (a tactic used in almost every form of media, commonly referred to as “queer-baiting”). Sadly, the most progressive comic we looked at featured Wonder Woman officiating a lesbian wedding and then explaining her actions by saying, “Where I come from it’s not gay marriage, it’s just marriage.” This sort of assimilationist, liberal language illustrates the significance of many queer artists saying that they are queer and actively queering the way comics are written and produced. These comics also incorporate the problematic notion of “coming out.” Hannes repeatedly referred to the “coming out page” of a comic. As noted by many scholars, the conceptualization of “outness” is a Western construct that is often used as a litmus test for progressivism. Within the Western context, coming out is often problematized for perpetuating compulsory heterosexuality. As Jürgen Lemke writes about the coming out process in East Berlin before the Wall fell, “The coming-out generally catapults her or him…into the cold, hard world. Very often a banishment from the family unit will be the harsh result” (33). The “coming out pages” for these superheroes are only necessary because until that page is created, they are heterosexual by default. 
This marks another stark difference regarding comics being written by queers, for queers, because operating with a knowledge base of other sexualities changes the way you write about and conceptualize those sexualities in media you are producing. Hannes informed us that this was the first exhibition about queer comics in all of Europe. It is quite obviously a highly interesting field and many books could be (and probably have been) written about it. The key lessons I took away from the experience are that independent artists have more political freedom, which almost always means they produce more interesting art. The other thing I took away is that critical consumption of media is important and should be a constant process, but that sometimes it is just pretty cool to see Wonder Woman as a lesbian. Grace Montesano is a rising senior majoring in Feminist and Gender Studies as well as Political Science at Colorado College. They love discussing politics, and are known for making obscure references to various media that no one else has heard of. Grace is skeptical of the 9/11 story we have all been told, and believes the jury is definitely still out about the existence of mermaids. In recent years, the U.S. comics industry has generated increased critical, scholarly, and popular attention. The sheer strength, volume, and range of the comics produced, as well as the enthusiasm of fan culture, renders the industry a powerful ideology-producing tool. Although other publishers have experienced growth since the industry was conceived post-WWII, Marvel and DC Comics still comprise more than half the industry. What’s more, their success continues to grow as a result of the development of more accessible retail outlets for the medium: the Internet and cinema. 
In “Cultural Studies, Multiculturalism, and Media Culture,” Douglas Kellner explores how media—including radio, television, film, popular music, the Internet, and social networking sites—provide a cohesive text from which we “forge our very identities” (7). In many ways, he claims, the media shapes our “view of the world,” our “deepest values” (7), and even our morality. It is important, therefore, to consider whose perspective gets left out of—and often misrepresented by—the dominant narratives circulating in mass media. So, what are the social and political implications of the conglomeration of Marvel and DC? To begin with, alternative media voices are left out of the equation and unable to question “fundamental social arrangements under which the media owners are doing quite well” (37), as David Croteau, William Hoynes, and Stefania Milan point out in “The Economics of the Media Industry.” This, in turn, supports Western imperialism, further marginalizing a myriad of other cultural narratives. One response to this lack of diversity in the media environment—and specifically in the world of comics—is the growth of the African superhero universe. One prominent South African illustrator, Loyiso Mkize, says that he was first inspired by American superheroes, as American comics were the most widely available during his childhood. Mkize told Buzzfeed News, “Growing up, comic books had a huge interest for me. It wasn’t just the visuals—but the strong superheroes. I wanted to emulate them.” However, the template he was provided with was conspicuously lacking characters with whom he could identify. He continues, “I was thinking, where are the heroes that look like me, speak like me, and share the same environment as me? I realized that we don’t have it—it came as a big shock.” Thus, the comic Kwezi was born.
Mkize describes Kwezi, which means “star” in Xhosa and Zulu, as a “coming-of-age story about finding one’s heritage.” The graphic narrative follows a confident, young boy as he embraces his superpowers in the context of the bustling, fictional metropolis “Gold City.” Perhaps the most significant aspect of the comic is its inclusion of “street” slang and popular culture references, which situates the story in a familiar setting for young South African readers. It is also significant that Kwezi (the hero) is fashionable, donning a contemporary haircut, and modern, using Twitter and other forms of social media as an activist. Ultimately, the recent rise in scholarly interest regarding graphic narratives has produced a catalytic effect with regard to the emergence of non-conventional, non-Western narratives. Over the last ten years, comic books have undergone a substantial change in terms of the type of content available and in their critical reception. That said, there is still a lot of progress to be made. U.S. comic culture not only overlooks but effectively erases narratives that fall outside the Anglophone world—the narratives of marginalized communities within the United States are absent as well, forcing women, LGBTQIA+ people, and people of color into weak, stereotyped roles. Of course, visibility is a complicated affair. “If representational visibility equals power,” claims Jay Clarkson in “The Limitations of the Discourse of Norms: Gay Visibility and Degrees of Transgression,” then “almost naked white women should be running Western Culture” (392). It is the hope of illustrators like Loyiso Mkize to depict popular reality in their portrayals of South African culture, and by doing so, achieve visibility in a way that benefits their cultures and communities.
US healthcare costs are rapidly rising but patient outcomes remain poor. Relative to other OECD countries, the US spends significantly more as a percentage of GDP, yet on average, we die sooner and lose more quality years of life. While the factors contributing to this paradox are complex, the combined objectives of achieving better outcomes and lower cost of care should impact behavior for all participants, including patients, payers and providers. Enabling much of this critical behavior change, new healthcare technology is equipping health care providers with actionable insights about patients before their health deteriorates. Innovations ranging from telemedicine technology to big data analytics in healthcare are enabling providers to more efficiently provide patient care, while avoiding wasteful costs and adverse patient outcomes. Many traditional healthcare companies are rapidly adopting digital technologies to strengthen their position. For example, leading pharma companies are applying big data to improve R&D yields, while clinical diagnostics companies are leveraging troves of patient data to deliver actionable information to physicians. In this article, we explore three such enabling technologies: telemedicine, data analytics and blockchain. Each of these innovations highlights how healthcare providers, in particular, are rapidly evolving, as they transition from fee-for-service to value based care, while meeting the growing needs of an aging and increasingly co-morbid population. Telemedicine - From Outback to Outpatient Telemedicine applies communication technology to enable physicians to remotely deliver clinical care to patients. The origins of telemedicine lie in the vast, unforgiving landscape of the Australian outback. There, during the 1920s, the Royal Flying Doctor Service faced the daunting challenge of providing medical services to a disparate population scattered over an area two thirds the size of the United States. 
To serve this unique market, they developed a network of over 3,000 pedal-operated generators and radio receivers, enabling remote consultation and the first large-scale telemedicine system. During the 1960s, NASA advanced telemedicine by building remote monitoring systems into astronaut suits to monitor vital signs and psychological status. However, it wasn’t until the 1980s that telemedicine found its first commercial application, when MedPhone developed a system using standard telephone lines to remotely diagnose and support treatment for patients requiring cardiac resuscitation. The ensuing telecom and internet revolution of the next two decades laid the infrastructure necessary for telemedicine as we know it today. Now, through a combination of technologies including phone, chat, text and video conferencing, doctors consult patients and confer with specialists to deliver remote care. Several telecom advances enabled telemedicine's emergence:
- faster internet connections
- ubiquitous smartphones
- changing insurance standards
While it continues to primarily provide patients flexible, accessible clinical care, telemedicine now also serves the critical economic goal of saving patients, payers and providers money. Indeed, telemedicine is delivering the holy grail of healthcare: improved outcomes at a lower cost for all parties. Telemedicine's Popular Applications The fastest growing segment of telemedicine connects patients with doctors they’ve never met. Led by companies such as Doctors on Demand, HealthJoy and Teladoc, on-demand medicine reaches consumers in two ways: direct-to-consumer relationships, and through payers. Now, anyone can simply download an app, pay a flat monthly subscription rate or nominal fee, and gain direct access to a physician. No appointments, insurance or lengthy paperwork. For non-emergency issues such as flu and skin rashes, most patients can receive call- or video-based care that is comparable to in-person consultation.
Not only are such consultations more convenient, they typically cost $45, a significant savings compared to a $100 doctor visit or a $750 emergency room visit. Recognizing the savings and improved service opportunity, many employer health plans now offer employees free virtual consultations. In fact, 90% of large US employers now offer telemedicine, up from 7% only five years ago. Further, large insurers such as UnitedHealth Group and Aetna are partnering with telemedicine companies to provide employees access to remote care, often for free or a modest co-pay. Chronic Disease Management Chronic disease management is one of the largest drivers of escalating healthcare costs in the US. Half of all adults suffer from at least one chronic condition, such as heart disease, diabetes or obesity. Care for these patients accounts for 86% of total healthcare spending. Ultimately, the costs of most chronic diseases can be heavily mitigated by patient self-management. Routine lifestyle decisions involving diet, exercise and prescription medications, coupled with remote monitoring of weight and blood pressure, pay big dividends. Patients who manage these factors under daily supervision receive interventional care when needed and, as a result, avoid costly hospital admission. As a recent Wall Street Journal article notes, provider groups such as Partners HealthCare, a leading consortium of Boston hospitals, are experimenting with a combination of remote monitoring, behavior modification and personalized intervention. For example, they are providing remote blood pressure monitoring tools to hypertension patients, texting diabetics to encourage daily exercise, and providing heart-failure patients electronically monitored pillboxes, which drive better prescription compliance.
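The consultation-cost comparison above works out as follows (a quick sketch using the article's own figures):

```python
TELEMEDICINE = 45    # typical virtual consultation cost, per the article
OFFICE_VISIT = 100   # typical in-person doctor visit
ER_VISIT = 750       # typical emergency room visit

office_savings = 1 - TELEMEDICINE / OFFICE_VISIT
er_savings = 1 - TELEMEDICINE / ER_VISIT

print(f"vs. office visit: {office_savings:.0%} cheaper")  # 55% cheaper
print(f"vs. ER visit:     {er_savings:.0%} cheaper")      # 94% cheaper
```

For a patient whose alternative would have been the emergency room, the virtual visit costs roughly a sixteenth as much.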
Illustrating the impact of these approaches, Joseph Kvedar, vice president of connected health at Partners HealthCare, noted: “Digital medicine allows us to get into your life in a personal way, deliver interventions continuously and inspire you to be healthy in a way an office-based practice can’t.” Enabling better collaboration and care Reminiscent of broader trends in the distributed workforce, telemedicine also enables doctors to more effectively collaborate and deliver optimal care to patients. Often, smaller clinical care settings such as community hospitals and remote locations lack specialists or sufficient resources to maintain continuous care. To supplement their staff, these sites increasingly rely on telemedicine services such as those provided by Mercy Health System’s Virtual Care Center, a “hospital without beds” that serves nearly 40 smaller hospitals throughout the South and Midwest. Mercy’s TeleICU team supports ICUs, carefully monitoring patient vital signs from its St. Louis headquarters. Accelerating intervention for deteriorating patients, Mercy’s team has helped its hospitals achieve a “35% decrease in patients’ average length of stay and 30% fewer deaths than anticipated,” according to Mercy president Randy Moore. Healthcare stands to gain more from data analytics than perhaps any other industry. Measuring value in terms of lives impacted and dollars saved, it’s hard to imagine a bigger potential payoff. Suggesting the magnitude of national savings, the state of Minnesota alone estimated a $2 billion annual savings opportunity through improved population health management, driven by data analytics. Three broad data-related trends stand to improve healthcare efficiency:
- More data sources: electronic medical records, new diagnostic tests, wearables
- Better analytical tools: AI and machine learning, big data analytics
- Better outcomes: disease prevention vs. treatment, earlier intervention, better R&D
Exponential increases in patient data and analytical power are fundamentally changing clinical care. With more predictive, actionable data, doctors can now identify and treat at-risk patients before they develop full-blown disease. Patient data also enables drug developers to more precisely target test populations for experimental drugs, and doctors to more effectively target responders to specialized therapies. In both cases, data drives improved efficacy, reduced adverse events, and ultimately, improved odds of approving new medications and saving lives. To deliver the greatest impact, applying big data to the largest disease populations, such as diabetes and cancer, seems like the obvious place to start. And indeed, data analytics stands to generate outsized returns in those populations. However, smaller acute needs present equally compelling opportunities, particularly when no apparent solution exists. Case in point: the raging opioid epidemic. Applying Big Data Analytics to prevent opioid addiction The opioid epidemic is rapidly becoming one of our nation’s leading healthcare crises, having claimed over 20,000 lives in 2016. Faced with the daunting rise of addiction and overdose deaths, some state governments are turning to big data solutions to stem the tide by optimally allocating resources. The opioid epidemic consists of two fundamental challenges: curing addicts and preventing addiction in the first place. Currently, treatment facilities provide the best hope for curing addiction. However, limited access to these centers prevents many addicts from receiving treatment, increasing their risk of repeated overdose and death. Faced with rapidly rising opioid-related deaths, government agencies in Indiana turned to data analytics to identify optimal locations for new treatment facilities. Leading the effort, Darsham Shah, the state’s Chief Data Officer, partnered with 16 government agencies and software providers SAP and Tableau.
His team collected and analyzed datasets that included various opioid-related activities such as drug arrests, overdose-related ambulance calls, and the use of the overdose-reversing drug naloxone. The resulting map directed the state to build 5 new treatment centers in areas where they would generate the highest impact. While more effective treatment will hopefully reverse the opioid-related mortality trend, the better solution is to prevent addiction in the first place. To that end, data scientists are leveraging insurance data and electronic health records to predict who may be most addiction-prone, and to prevent their use of opioids for pain management. For example, analysis of large data sets from pharmacy benefits manager Express Scripts surfaced several characteristics that increased patient risk of addiction. Some were obvious: chronic use of opioids, and non-opioid substance abuse. However, others were less so: younger age, male gender and being unmarried. Although not a cure, data analytics helps “shine a light on potentially concerning patterns and allow for the identification of subpopulations who are at risk,” according to Caleb Alexander, co-director of the Johns Hopkins University Center for Drug Safety and Effectiveness. While data analytics helps wring value from increasingly rich datasets, blockchain stands to change the fundamental value of the data itself. Often referred to as a decentralized, permissioned ledger, the blockchain has two critical features that distinguish it from traditional centralized databases:
- Shared, permissioned record: No single participant owns the blockchain or dictates additions to it. Rather, all participants own a copy and they must reach consensus for new information to be added.
- Immutable data store: Blocks of information cannot be changed once committed to the blockchain. Therefore, blockchain offers a tamper-proof record, impervious to assaults by bad actors.
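The hash-linking that makes a committed block tamper-evident can be shown with a toy chain. This is a sketch only: real permissioned ledgers add consensus protocols, digital signatures, and networking, and the shipment records here are invented for illustration.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (including its prev_hash link)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Link a new block to the current tip via the tip's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Editing any earlier block breaks every later prev_hash link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, {"shipment": "lot-42", "holder": "manufacturer"})
append_block(chain, {"shipment": "lot-42", "holder": "distributor"})
assert verify(chain)

chain[0]["data"]["holder"] = "counterfeiter"  # tamper with history...
assert not verify(chain)                      # ...and verification fails
```

Because each block's hash covers the previous block's hash, rewriting one entry invalidates the entire suffix of the chain, which is what makes the record tamper-proof in practice once copies are distributed among participants.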
Given these attractive security features, many industries that depend on complex, trust-based contracts and transactions - including finance, manufacturing supply chains, and energy - have started investing in blockchain applications. In a similar vein, the healthcare industry has highlighted several use cases including medical records, medical products supply chains and clinical trials management. Pharmaceutical Supply Chain Counterfeit drugs present a major challenge to both healthcare providers and consumers. According to the World Health Organization, annual sales of counterfeit prescription medications total $200 billion, roughly 20% of the $1.1 trillion in global pharmaceutical sales. These products not only result in lost revenue for drug developers, but also - and more concerningly - in an avoidable 1 million patient deaths per year. To combat counterfeit prescriptions, governments are setting “track and trace” requirements for companies participating in the drug supply chain. For example, the US Food and Drug Administration enacted the Drug Supply Chain Security Act (DSCSA) in 2013, requiring all supply chain participants - including manufacturers, wholesale distributors, third party logistics, pharmacies, and hospitals - to develop an “electronic, interoperable system to identify and trace certain prescription drugs as they are distributed in the United States.” Although tangible applications of blockchain to pharmaceutical supply chains remain either absent or unreported, industry participants are positioning themselves for implementation. In the first Pharma Supply Blockchain Forum, organized by the IEEE Standards Association (IEEE-SA), executives from healthcare leaders such as Johnson & Johnson, Pfizer and Amgen convened at Johns Hopkins University to prepare for the coming shift to blockchain.
Leading the shift to implementation, logistics company DHL partnered with Accenture in a recent pilot that used blockchain to track pharmaceuticals from manufacturer to patient, covering six geographic locations. They created a system that provides product visibility to all stakeholders in the supply chain, including manufacturers, warehouses, distributors, pharmacies, hospitals, and doctors. Forecasting the potential impact of blockchain innovation, Keith Turner, CIO of DHL, noted: “By utilizing the inherent irrefutability within blockchain technologies, we can make great strides in highlighting tampering, reducing the risk of counterfeits and actually saving lives.” Payments administration accounts for 14% - about $460 billion - of the $3.3 trillion total US healthcare expenditure. Further, an estimated 5-10% of medical claims are fraudulent, resulting from over-billing or billing for a procedure that wasn’t performed. Combined, these inefficiencies account for nearly $800 billion of wasted spending. Blockchain stands to streamline claims management in two important ways: automation and transparency. By automating the adjudication and payment process, blockchain stands to eliminate intermediaries and their manual communication and reconciliation efforts. In a recent pilot application, for example, Gem and Capital One partnered to develop a blockchain-based solution to streamline payment for healthcare providers. According to Capital One: “The result is a dramatically more efficient claims management process that eliminates the traditional claims clearinghouse and reconciliation layers and lowers administrative costs, compresses cash flow cycles, and reduces revenue loss.” While automating claims has clear economic benefits through expediency and a simplified process, combating medical fraud poses a greater challenge. Theoretically, blockchain should make it easier to flag fraudulent claims.
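A claim-screening rule of the kind blockchain identity systems would enable can be sketched in a few lines. This is a hypothetical toy, not any payer's actual system: the provider IDs, patient IDs, and field names are all invented, and a real ledger would hold the authorized-provider set and treatment history rather than in-memory Python sets.

```python
# Invented example data standing in for on-ledger records.
AUTHORIZED_PROVIDERS = {"prov-001", "prov-002"}
TREATMENT_HISTORY = {("patient-A", "prov-001"), ("patient-B", "prov-002")}

def flag_claim(claim: dict) -> list:
    """Return the reasons a claim looks suspicious; empty list if none."""
    reasons = []
    if claim["provider"] not in AUTHORIZED_PROVIDERS:
        reasons.append("provider not authenticated on the ledger")
    if (claim["patient"], claim["provider"]) not in TREATMENT_HISTORY:
        reasons.append("no recorded treatment relationship")
    return reasons

assert flag_claim({"patient": "patient-A", "provider": "prov-001"}) == []
assert flag_claim({"patient": "patient-A", "provider": "prov-999"}) != []
```

The value of the ledger here is not the lookup logic, which is trivial, but the guarantee that the identity and history records being looked up cannot have been quietly rewritten.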
By requiring all provider entities and patients to have authenticated identities within the blockchain, and by linking provider identities and medical history to patient identities, payers would have an immutable set of data relationships. Consequently, if unauthorized providers or suspect claims reach payers, their fraud detection systems should flag such suspicious requests. Clinical Trials - Runaway Cost Increases Demand Change Clinical trials constitute the multi-phase research process by which experimental drug candidates gain FDA approval for sale. Broadly speaking, drug development cost - largely driven by clinical trials - has increased dramatically over the past 40 years, growing at more than double the pace of inflation and resulting in the current $2.5 billion R&D price tag for a new therapy. In response to unsustainable cost increases, drug companies are applying various technologies, such as big data analytics, to reverse the trend. Within clinical trials, in particular, blockchain stands to decrease costs in two critical areas: patient enrollment and remote monitoring. High enrollment cost Patient enrollment accounts for 30-35% of clinical trial budgets, particularly in the larger phase II and III stages. A variety of factors influence recruitment ease, such as the protocol design, inclusion and exclusion criteria, and the quality of the recruitment plan. However, process-related inefficiencies such as access to patient electronic health records and informed consent could be dramatically reduced with blockchain. If patient health information matching genetic, therapeutic, demographic, and geographic criteria were added to the blockchain, researchers could more easily identify and recruit patients with the necessary characteristics.
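Recruitment matching of the kind just described amounts to filtering patient records against a trial's inclusion criteria. The sketch below is illustrative only: the record fields, condition codes, and consent flag are invented, and a real system would query ledger-backed records rather than a Python list.

```python
# Hypothetical patient records; all fields and values are invented.
PATIENTS = [
    {"id": "p1", "age": 54, "condition": "T2D", "region": "IN", "consented": True},
    {"id": "p2", "age": 61, "condition": "T2D", "region": "OH", "consented": False},
    {"id": "p3", "age": 47, "condition": "CHF", "region": "IN", "consented": True},
]

# Illustrative inclusion criteria for a hypothetical trial.
TRIAL_CRITERIA = {
    "condition": "T2D",      # therapeutic criterion
    "min_age": 50,           # demographic criterion
    "regions": {"IN", "OH"}, # geographic criterion
}

def eligible(patient: dict, criteria: dict) -> bool:
    """Match a record against inclusion criteria, requiring recorded informed consent."""
    return (
        patient["condition"] == criteria["condition"]
        and patient["age"] >= criteria["min_age"]
        and patient["region"] in criteria["regions"]
        and patient["consented"]
    )

recruits = [p["id"] for p in PATIENTS if eligible(p, TRIAL_CRITERIA)]
print(recruits)  # ['p1']
```

Note how the consent check is just another field test: if consent status lived on the ledger alongside the clinical criteria, screening and consent verification would collapse into a single query.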
Further, if the informed consent process took place through the blockchain, errors and mismanagement of informed consent, such as unapproved forms or protocol changes, would not prevent patients from participating in trials. Remote clinical trial monitoring As clinical trials progress, the sponsoring pharmaceutical company or its contract research organization (CRO) carefully monitors the recruiting sites. Traditionally, monitoring has consisted of on-site visits, with the objective of ensuring compliance with the study protocol and proper reporting of adverse events, or negative patient reactions to the trial drug. To reduce the cost of trial monitoring, both sponsor companies and CROs have increasingly adopted remote monitoring. However, the challenge in remote trial monitoring lies in the availability and reliability of patient data. Often recorded on paper and entered into a database, this data is not uniformly reliable. As a result, prioritizing trial sites for closer attention and on-site visits is often challenging. Two companies - Florence and Verady - have partnered to develop applications that will make patient and clinical trial data more readily available for investigators. Working with 2,000 investigator sites, Florence has already built digital solutions to replace paper investigator site files. Verady provided a digital interface that enables Florence investigator customers to easily access patient information stored on the blockchain. In a joint press release, Florence and Verady executives noted that the combined “technology allows healthcare providers and pharmaceutical companies to fingerprint critical data and manage its use in a secure Blockchain system,” which in turn “allows research sites and pharmaceutical companies more control over patient data.” Many Options - Where to Start?
The three examples of healthcare technology covered here are but a sample of the many options available to healthcare companies striving to improve patient experience and the economics of care delivery. Knowing where to start and which technologies deserve disproportionate investment presents a daunting decision. Indeed, with cutting edge solutions such as blockchain, virtually any new application constitutes a pioneering effort. To prioritize attention, companies are increasingly thinking from the patient perspective and backing into the technology solutions which can best serve their experience. For example, borrowing from her experience leading technology teams in consumer-facing companies, Angela Yochem, the Chief Digital Officer of Novant Health, notes how her company should meet its customers: “We care about engaging you the way you want to be engaged with new and emerging technology. Digital channels are part of the way many patient groups want to be engaged.” Indeed, all healthcare companies should consider how their patients engage with brands outside of healthcare, including media, finance, and e-commerce. As these customer experiences shape expectations, they also surely provide healthcare executives a rough roadmap with which to guide their own customer journey.
The Way We Live Now Railways have been declining since the 1950s. There had always been competition for the traveler (and, though less marked, for freight). From the 1890s horse-drawn trams and buses, followed a generation later by the electric or diesel or petrol variant, were cheaper to make and run than trains. Lorries (trucks)—the successor to the horse and cart—were always competitive over the short haul. With diesel engines they could now cover long distances. And there were now airplanes and, above all, there were cars: the latter becoming cheaper, faster, safer, more reliable every year. Even over the longer distances for which it was originally conceived, the railway was at a disadvantage: its start-up and maintenance costs—in surveying, tunneling, laying track, building stations and rolling stock, switching to diesel, installing electrification—were greater than those of its competitors and it never succeeded in paying them off. Mass-produced cars, in contrast, were cheap to build and the roads on which they ran were subsidized by taxpayers. To be sure, they carried a high social overhead cost, notably to the environment; but that would only be paid at a future date. Above all, cars represented the possibility of private travel once again. Rail travel, in what were increasingly open-plan trains whose managers had to fill them in order to break even, was decidedly public transport. Facing such hurdles, the railway was met after World War II by another challenge. The modern city was born of rail travel. The very possibility of placing millions of people in close proximity with one another, or else transporting them considerable distances from home to work and back, was the achievement of the railways. 
But in sucking up people from the country into the town and draining the countryside of communities and villages and workers, the train had begun to destroy its own raison d’être: the movement of people between towns and from remote country districts to urban centers. The major facilitator of urbanization, it fell victim to it. Now that the overwhelming majority of nonelective journeys were either very long or very short, it made more sense for people to undertake them in planes or cars. There was still a place for the short-haul, frequently stopping suburban train and, in Europe at least, for middle-distance expresses. But that was all. Even freight transportation was threatened by cheap trucking services, underwritten by the state in the form of publicly funded freeways. Everything else was a losing proposition. And so railways declined. Private companies, where they still existed, went bankrupt. In many cases they were taken over by newly formed public corporations at public expense. Governments treated railways as a regrettable if unavoidable burden upon the exchequer, restricting their capital investment and closing “uneconomic” lines. Just how “inexorable” this process had to be varied from place to place. “Market forces” were at their most unforgiving—and railways thus most threatened—in North America, where railway companies reduced their offerings to the minimum in the years after 1960, and in Britain, where in 1964 a national commission under Dr. Richard Beeching axed an extraordinary number of rural and branch lines and services in order to maintain the economic “viability” of British Railways. In both countries the outcome was an unhappy one: America’s bankrupt railways were de facto “nationalized” in the 1970s. Twenty years later, Britain’s railways, in public hands since 1948, were unceremoniously sold off to such private companies as were willing to bid for the most profitable routes and services. 
In continental Europe, despite some closures and reductions in services, a culture of public provision and a slower rate of automobile growth preserved most of the railway infrastructure. In most of the rest of the world, poverty and backwardness helped preserve the train as the only practicable form of mass communication. Everywhere, however, railways—the harbingers and emblems of an age of public investment and civic pride—fell victim to a dual loss of faith: in the self-justifying benefits of public services, now displaced by considerations of profitability and competition; and in the physical representation of collective endeavor through urban design, public space, and architectural confidence. The implications of these changes could be seen, most starkly, in the fate of stations. Between 1955 and 1975 a mix of antihistoricist fashion and corporate self-interest saw the destruction of a remarkable number of terminal stations—precisely those buildings and spaces that had most ostentatiously asserted rail travel’s central place in the modern world. In some cases—Euston (London), the Gare du Midi (Brussels), Penn Station (New York)—the edifice that was demolished had to be replaced in one form or another, because the station’s core people-moving function remained important. In other instances—the Anhalter Bahnhof in Berlin, for example—a classical structure was simply removed and nothing planned for its replacement. In many of these changes, the actual station was moved underground and out of sight, while the visible building—no longer expected to serve any uplifting civic purpose—was demolished and replaced by an anonymous commercial center or office building or recreation center; or all three.
Penn Station—or its near contemporary, the monstrously anonymous Gare Montparnasse in Paris—is perhaps the most notorious case in point.1 The urban vandalism of the age was not confined to railway stations, of course, but they (along with the services they used to provide, such as hotels, restaurants, or cinemas) were by far its most prominent victim. And a symbolically appropriate victim, too: an underperforming, market-insensitive relic of high modern values. It should be noted, however, that rail travel itself did not decline, at least in quantity: even as railway stations lost their charm and their symbolic public standing, the number of people actually using them continued to rise. This was of course especially the case in poor, crowded lands where there were no realistic alternatives—India being the best illustration but by no means the only one. Indeed, despite underinvestment and a degree of intercaste social promiscuity that renders them unappealing to the country’s new professionals, the railways and stations of India, like those of much of the non-Western world (e.g., China, Malaysia, or even European Russia), probably have a secure future. Countries that did not benefit from the rise of the internal combustion engine in the mid-twentieth-century age of cheap oil would find it prohibitively expensive to reproduce American or British experience in the twenty-first century. The future of railways, a morbidly grim topic until very recently, is of more than passing interest. It is also quite promising. The aesthetic insecurities of the first post–World War II decades—the “New Brutalism” that favored and helped expedite the destruction of many of the greatest achievements of nineteenth-century public architecture and town planning—have passed. 
We are no longer embarrassed by the rococo or neo-Gothic or Beaux Arts excesses of the great railway stations of the industrial age and can see such edifices instead as their designers and contemporaries saw them: as the cathedrals of their age, to be preserved for their sake and for ours. The Gare du Nord and the Gare d’Orsay in Paris; Grand Central Station in New York and Union Station in St. Louis; St. Pancras in London; Keleti Station in Budapest; and dozens of others have all been preserved and even enhanced: some in their original function, others in a mixed role as travel and commercial centers, others still as civic monuments and cultural mementoes. Such stations, in many cases, are livelier and more important to their communities than they have been at any time since the 1930s. True, they may never again be fully appreciated in the role they were designed to serve—as dramatic entrance portals to modern cities—if only because most people who use them connect from tube to train, from underground taxi rank to platform escalator, and never even see the building from the outside or from a distance, as it was meant to be seen. But millions do use them. The modern city is now so large, so far-flung—and so crowded and expensive—that even the better-heeled have resorted to public transport once again, if only for commuting. More than at any point since the late 1940s, our cities rely for their survival upon the train. The cost of oil—effectively stagnant from the 1950s through the 1990s (allowing for crisis-driven fluctuations)—is now steadily rising and unlikely ever to fall back to the level at which unrestricted car travel becomes economically viable again. The logic of the suburb, incontrovertible with oil at $1 a gallon, is thus placed in question. Air travel, unavoidable for long-haul journeys, is now inconvenient and expensive over medium distances: and in Western Europe and Japan the train is both a pleasanter and a faster alternative. 
The environmental advantages of the modern train are now very considerable, both technically and politically. An electrically powered rail system, like its companion light-rail or tram system within cities, can run on any convertible fuel source whether conventional or innovative, from nuclear power to solar power. For the foreseeable future this gives it a unique advantage over every other form of powered transportation. It is not by chance that public infrastructural investment in rail travel has been growing for the past two decades everywhere in Western Europe and through much of Asia and Latin America (exceptions include Africa, where such investment is anyway still negligible, and the US, where the concept of public funding of any kind remains grievously underappreciated). In very recent years railway buildings are no longer buried in obscure subterranean vaults, their function and identity ingloriously hidden under a bushel of office buildings. The new, publicly funded stations at Lyon, Seville, Chur (Switzerland), Kowloon, or London Waterloo International assert and celebrate their restored prominence, both architectural and civic, and are increasingly the work of innovative major architects like Santiago Calatrava or Rem Koolhaas. Why this unanticipated revival? The explanation can be put in the form of a counterfactual: it is possible (and in many places today actively under consideration) to imagine public policy mandating a steady reduction in the nonnecessary use of private cars and trucks. It is possible, however hard to visualize, that air travel could become so expensive and/or unappealing that its attraction for people undertaking nonessential journeys will steadily diminish. But it is simply not possible to envision any conceivable modern, urban-based economy shorn of its subways, its tramways, its light rail and suburban networks, its rail connections, and its intercity links. 
We no longer see the modern world through the image of the train, but we continue to live in the world the trains made. For any trip under ten miles or between 150 and 500 miles in any country with a functioning railway network, the train is the quickest way to travel as well as, taking all costs into account, the cheapest and least destructive. What we thought was late modernity—the post-railway world of cars and planes—turns out, like so much else about the decades 1950–1990, to have been a parenthesis: driven, in this case, by the illusion of perennially cheap fuel and the attendant cult of privatization. The attractions of a return to “social” calculation are becoming as clear to modern planners as they once were, for rather different reasons, to our Victorian predecessors. What was, for a while, old-fashioned has once again become very modern.

The Railway and Modern Life

Ever since the invention of trains, and because of it, travel has been the symbol and symptom of modernity: trains—along with bicycles, buses, cars, motorcycles, and airplanes—have been exploited in art and commerce as the sign and proof of a society’s presence at the forefront of change and innovation. In most cases, however, the invocation of a particular form of transport as the emblem of novelty and contemporaneity was a one-time thing. Bicycles were “new” just once, in the 1890s. Motorbikes were “new” in the 1920s, for Fascists and Bright Young Things (ever since they have been evocatively “retro”). Cars (like planes) were “new” in the Edwardian decade and again, briefly, in the 1950s; since then and at other times they have indeed stood for many qualities—reliability, prosperity, conspicuous consumption, freedom—but not “modernity” per se. Trains are different. Trains were already modern life incarnate by the 1840s—hence their appeal to “modernist” painters. They were still performing that role in the age of the great cross-country expresses of the 1890s.
Nothing was more ultra-modern than the new, streamlined superliners that graced the neoexpressionist posters of the 1930s. Electrified tube trains were the idols of modernist poets after 1900, in the same way that the Japanese Shinkansen and the French TGV are the very icons of technological wizardry and high comfort at 190 mph today. Trains, it would seem, are perennially modern—even if they slip from sight for a while. Much the same applies to railway stations. The petrol “station” of the early trunk road is an object of nostalgic affection when depicted or remembered today, but it has been constantly replaced by functionally updated variations and in its original form survives only in nostalgic recall. Airports typically (and irritatingly) survive well past the onset of aesthetic or functional obsolescence; but no one would wish to preserve them for their own sake, much less suppose that an airport built in 1930 or even 1960 could be of use or interest today. But railway stations built a century or even a century and a half ago—Paris’s Gare de l’Est (1852), London’s Paddington Station (1854), Bombay’s Victoria Station (1887), Zurich’s Hauptbahnhof (1893)—not only appeal aesthetically and are increasingly objects of affection and admiration: they work. And more to the point, they work in ways fundamentally identical to the way they worked when they were first built. This is a testament to the quality of their design and construction, of course; but it also speaks to their perennial contemporaneity. They do not become “out of date.” They are not an adjunct to modern life, or part of it, or a byproduct of it. Stations, like the railway they punctuate, are integral to the modern world itself. We often find ourselves asserting or assuming that the distinctive feature of modernity is the individual: the unreducible subject, the freestanding person, the unbound self, the unbeholden citizen. 
This modern individual is commonly and favorably contrasted with the dependent, deferential, unfree subject of the pre-modern world. There is something in this version of things, of course; just as there is something in the accompanying idea that modernity is also a story of the modern state, with its assets, its capacities, and its ambitions. But taken all in all, it is, nevertheless, a mistake—and a dangerous mistake. The truly distinctive feature of modern life—the one with which we lose touch at our peril—is neither the unattached individual nor the unconstrained state. It is what comes in between them: society. More precisely civil—or (as the nineteenth century had it) bourgeois—society. The railways were and remain the necessary and natural accompaniment to the emergence of civil society. They are a collective project for individual benefit. They cannot exist without common accord (and, in recent times, common expenditure), and by design they offer a practical benefit to individual and collectivity alike. This is something the market cannot accomplish—except, on its own account of itself, by happy inadvertence. Railways were not always environmentally sensitive—though in overall pollution costs it is not clear that the steam engine did more harm than its internally combusted competitor—but they were and had to be socially responsive. That is one reason why they were not very profitable. If we lose the railways we shall not just have lost a valuable practical asset whose replacement or recovery would be intolerably expensive. We shall have acknowledged that we have forgotten how to live collectively. If we throw away the railway stations and the lines leading to them—as we began to do in the 1950s and 1960s—we shall be throwing away our memory of how to live the confident civic life. It is not by chance that Margaret Thatcher—who famously declared that “there is no such thing as Society. 
There are individual men and women, and there are families”—made a point of never traveling by train. If we cannot spend our collective resources on trains and travel contentedly in them it is not because we have joined gated communities and need nothing but private cars to move between them. It will be because we have become gated individuals who don’t know how to share public space to common advantage. The implications of such a loss would far transcend the demise of one system of transport among others. It would mean we had done with modern life. —This is the second part of a two-part essay. January 13, 2011
Most of us know beautiful coleus plants as gorgeous additions to border plantings in gardens or bright pops of color in containers, but did you know that they actually make great indoor plants as well? This article will tell you all you need to know about coleus plant care indoors, so you can brighten your indoor space with their stunning foliage.

Coleus plant care indoors: Provide bright indirect sunlight, temperatures of 65-75°F, and moderate to high humidity levels. Keep the soil slightly moist when the plant is actively growing, but allow drier conditions in the winter months. Fertilize lightly with a high nitrogen fertilizer every two weeks, and pinch back the stems to create a fuller, bushy plant.

Coleus is an easy-to-grow plant and will thrive indoors if cared for properly. The following information covers all of the different aspects of caring for this beautiful houseplant.

Coleus Plant Overview

Also known as Painted Nettle, coleus (Solenostemon scutellarioides) is a member of the mint family and is native to Southeast Asia. Surprising to most people, it is technically an evergreen perennial that is primarily grown as an annual because of its sensitivity to frost. Because of their predisposition for warm climates, they make great indoor houseplants.

Coleus Plant Indoor Care Summary

| Scientific Name | Solenostemon scutellarioides |
| Origin | Africa, Asia and Australia. Many cultivars grown for colorful foliage. |
| Light Requirements | Bright, indirect light. Some direct morning sun is ok. |
| Watering | Maintain lightly moist soil during the growing season. Allow top few inches to dry between watering in winter. |
| Soil | Rapidly draining potting mix. Most general purpose potting mixes work well. |
| Temperature | 65-75°F (18-24°C). Very intolerant of cold and frost. |
| Fertilizer | Half strength fertilizer applied every two weeks during the growing season. High nitrogen preparations are best to promote foliage and suppress flowering. |
| Humidity | Medium to high humidity. |
| Flowering | Tiny white or bluish flowers. Pinch off buds promptly to prevent flowering, or plant will go to seed and die. |
| Pruning | Pinch back growing stems to maintain compact growth. |
| Propagation | Stem cuttings can be propagated easily. |
| Re-Potting | Repot every 1-2 years. Increase pot size only if a larger plant is desired. |
| Diseases and Pests | Fairly resistant, but root rot, mold, mildew, aphids, spider mites and mealybugs can cause problems. |
| Toxicity | Toxic to pets and mildly toxic to humans. |
| Where To Buy | Buy Coleus online at Etsy (I buy most of my houseplants from Etsy). |

Characteristics Of Coleus Plants

- Grows from 6-inches to 3 ½ feet in height depending on the variety.
- Plants grow 1 to 3-feet wide.
- Leaves range from one to six inches long.
- It can grow upright or trailing, depending on the variety chosen.

Guide To Coleus Plant Care Indoors

Coleus have gained their popularity as beautiful garden and landscape plants, but they are grown as annuals in all growing zones except USDA hardiness zone 11 because they are incredibly intolerant of frost and cold temperatures. Because of this, they are gaining popularity as indoor-grown container plants. When growing plants in your home, provide the following growing conditions to promote strong, healthy plants.

When grown indoors coleus prefers bright, indirect light. It’s best if they can live on a windowsill that gets light in the morning or early hours of the day and has shade during the most intense sun exposure in the late afternoon. Some direct sun is okay, except intense summer sun which will scorch the leaves or cause the bright colors to fade. Too little light dulls leaf colors and may cause leaves to drop. You may need to supplement available light with artificial lights during the winter. Watch the plant closely.
If the leaves fade and lose color, the plant is probably getting too much sunlight. However, if the plant is lackluster and drops its leaves, try giving it a little more light.

Coleus Plant Temperature Range

Tropical houseplants do well indoors because they prefer the same temperature range as we humans do. For best growth keep your plants in a room where temperatures are between 65-75°F (18-24°C), or even up to 85°F (29°C), avoiding any sudden drops in temperature. Keep your plants in a spot where they are not exposed to drafts coming from leaky windows, opening/closing doors, or register vents blowing heat in the winter and cool air in the summertime.

Coleus prefer environments with medium to high humidity levels. For most homeowners, low indoor humidity makes coleus plant care indoors a little more challenging than outdoor care. You’ll need to create a pocket of moist air for your coleus plants to really help them thrive. If your plants begin to show brown tips or crispy edges, signs your air is too dry, you can increase the humidity level by grouping plants together, or by setting your plant in a tray containing pebbles and water. Don’t mist the leaves like you would with other tropical plants, to avoid creating water spots on the velvety foliage.

Soil For Coleus Plant Care Indoors

Coleus plants prefer a soil that drains quickly and provides good aeration to the roots. Because of this, most “all-purpose” commercial potting soils are suitable. Avoid anything specifically formulated for a given plant type such as acid-loving plants or succulents. Commercial potting soils are actually a “soilless” mix of peat moss, coconut coir, pine bark, and either perlite or vermiculite. Avoid using straight coconut coir or sphagnum peat moss in your containers; they retain too much water. To improve the drainage rate of the potting soil you can add extra perlite. Read my complete guide to choosing soil for your houseplants.
This covers everything you need to ensure your plants are always in the best soil to help them thrive.

During the active growing season in the spring and summer keep the potting soil slightly moist, although not soggy, at all times. Coleus plants do best if the soil isn’t dry or overly wet. During the winter when growth is slower, scale back slightly on watering. Allow the top ½ inch or so of potting soil to dry out completely before you give your plants a drink.

Water with tepid water and avoid getting water on the velvety leaves. Hard water will leave water spots that are nearly impossible to remove. If you live in an urban or suburban area with treated water, it is best to let the water sit for a couple of days before using it, to allow the chlorine to dissipate. This helps to lower the risk of chlorine toxicity in your plants.

Fertilize every 1 to 2 weeks during the active growing season at about half the strength recommended on the fertilizer label. Do not fertilize when the plant isn’t actively growing during the colder winter months. To promote good foliage growth and minimize flowering you will want to purchase a quality fertilizer that is higher in nitrogen and lower in phosphorus. Avoid a balanced fertilizer that has an equal ratio of N-P-K, such as a common 10-10-10 formulation. Look for a water-soluble or liquid all-purpose plant food and mix it at half the strength of the recommended dosage on the label, or even slightly more diluted.

Coleus will flower in the summer with racemes of tiny white or bluish flowers if given the correct care, but unfortunately, you should prevent flowering if you want to keep your plants around. If your plants do flower, make sure to pinch the flower buds off immediately. Flowering triggers your plant into thinking it needs to go to seed. Once it goes to seed, it dies. So, keep pinching off the flower buds as they form to extend the life of your plant.
As a side note, plants that have been propagated from stem cuttings typically won’t flower as often, if they do at all.

Pinch back the stems of your coleus to keep the plant from getting too leggy. This triggers growth from growing points at the nodes on the stem, creating a fuller, bushier plant. You can pinch the stems back at any time but it’s best to do it when the plant is actively growing during the warmer months. When you pinch them, make sure to cut the stem cleanly immediately after a leaf node using either your fingernails or a sharp pair of clean scissors.

In spring or early summer remove a 3 to 4-inch long stem tip cutting that has at least 3 leaves attached to the end piece. Cut just below a leaf node where a leaf is attached to the stem. You can then place the cutting in a jar or glass with clean water until roots form, or immediately put it in a small container with moist potting soil.

Over time your coleus may outgrow the container you have it growing in, and need to be moved to a bigger one. A plant that is root-bound in a container will have slower growth or the growth may be completely halted. If you don’t want your plant to get any larger it’s acceptable to keep it in the current pot, but you can remove it and add new potting soil every year or so. If you want it to grow more it’s best to repot it, putting it in a container that is 1 to 2-inches wider in diameter and about the same increase in container height. When repotting, gently tease the roots with your fingers to loosen them up and then add fresh potting mix. Springtime is the best time to repot plants as they begin actively growing after the cooler winter months, and can bounce back from the shock of repotting quicker.

An important note – when filling containers with growing media do not create a “drainage layer” in the bottom of the pot. For a long time, this was a highly recommended practice, taught to new gardeners.
It’s been proven though that this practice is more detrimental than helpful. As water moves down through the soil profile via gravity, it stops when it encounters this drainage layer created by rocks or small stones. Before the water percolates into the layer, the entire potting soil must fill with water, rendering the layer problematic instead of beneficial.

Diseases And Pests

Coleus plant care indoors is thankfully not troubled too much with disease and pest problems, although they do exist, unfortunately. One of the biggest culprits of both is overwatering, so watering plants only when they need it will help prevent problems. Monitor your plants frequently to catch problems early and treat them before damage is extensive. Like most other potted plants, coleus is susceptible to root rot if overwatered. Plants will also occasionally have problems with downy mildew or powdery mildew.

The most commonly seen problem with coleus is root rot, caused by overwatering, especially in the winter months. The roots then die back due to lack of oxygen or the overgrowth of a soil fungus. Soggy soils encourage the growth and multiplication of Pythium, Phytophthora, Rhizoctonia, or Fusarium fungi, which spread into the roots, infecting plants. Healthy roots begin to turn brown and mushy as they perish, unable to take in nutrients needed for growth. As root rot progresses, leaves turn yellow, wilt, or droop and then become mushy as well. Once symptoms are visible in the leaves, the problem may be past the point of rectifying, endangering the entire plant.

If caught soon enough you can repot the plant to try to save it. Remove as much of the infected soil as possible, adding in fresh, clean potting soil. If root rot has spread significantly, dissect the plant, keeping only the healthy portions. If the whole base is affected, take stem cuttings from healthy foliage to propagate a new plant.

Downy mildew, a fungal disease, occurs on the top of the coleus leaves in humid weather conditions.
When infected, foliar symptoms include chlorosis, angular lesions, distortion (leaf curling), and leaf drop. Lower leaves are affected first, and the disease may develop as a downy gray to purplish growth on leaf undersides. Remove any infected or diseased plant tissue using sterilized scissors or a razor blade. Dispose of tissue in the trash. Consider treating with an appropriate fungicide. If the disease is severe it may be best to dispose of the entire plant. To prevent downy mildew, water at the soil level to prevent spores from splashing up onto foliage or neighboring plants.

Powdery mildew presents as a white powdery film on the leaves and stems of your plants. It looks similar in nature to a dusting of flour. Over time it may darken in appearance to a grey color and may spread down to the soil. Powdery mildew impairs photosynthesis since it covers the leaves. This causes a stunting of the plant’s growth and can kill the plant if left untreated. Remove any infected or diseased plant tissue using sterilized scissors or a razor blade. Dispose of tissue in the trash. It is typically recommended that you spray infected plants with bicarbonate solution or a sulfur-based fungicide according to the label directions, but this should be avoided with the coleus’ velvety leaves.

Insect problems are going to be your biggest challenge with coleus plant care indoors, mainly if you have neighboring houseplants with aphids, spider mites, or mealybugs.

Aphids are one of the most common insects affecting indoor plants. These tiny, pear-shaped insects attach themselves to the plant, sucking sap from the plant tissues, and then secreting “honeydew”. Symptoms appear as distorted foliage and leaf drop. Remove aphids by wiping the plants with a clean, soft cloth or spraying the plants with a mild solution of water containing a few drops of dish soap.

Spider mites are tiny sucking pests found on the undersides of leaves, wreaking havoc on indoor houseplants.
Spider mites feed on the fluids found inside the leaves of coleus, piercing the waxy coating to access the internal fluids. One of the biggest challenges with spider mites is their prolific nature; oftentimes a heavy infestation will occur, unnoticed, before plants begin to show physical symptoms of damage. With an infestation of spider mites, leaves may be stippled with discoloration or turn yellow overall. Plants may also exhibit a fine, spider-like webbing between the leaves or at the base of the plant.

Mealybugs are pink, soft-bodied insects covered with a white, waxy, almost cottony-like material. The cottony fluff protects them from moisture loss and excess heat. Mealybugs are usually found in colonies in somewhat protected areas of the coleus, such as where the leaves attach to the stems. Symptoms show as stunted or deformed leaf growth, especially on new foliage, as mealybugs inject a toxin into leaves when they feed on the plant’s fluid. They also excrete honeydew as they feed, encouraging the growth of sooty mold.

This is the one drawback to coleus plants: they are dangerous to pets and can trigger mild skin reactions in some people. The essential oils found in the plant’s foliage are toxic to dogs, cats, and other animals. For people with skin sensitivities, they can also cause contact dermatitis or other irritations. If you do have pets keep your coleus out of their reach if possible. If you’re looking for alternate pet-safe houseplants, take a look at some of my favorites in this article.

Boundless varieties of coleus, literally hundreds of different ones, are available. Foliage colors include red, maroon, brown, cream, yellow, orange and green in an array of dramatic combinations and designs. Leaf edges may be scalloped or ruffled and have a contrasting color. Some of the most popular varieties include:

- Kong Series are some of the best to grow indoors as they prefer filtered shade; large leaves grow up to 6-inches wide and have dramatic markings.
- Wizard Series grow 12 to 14-inches tall and have a branching stature.
- Superfine Rainbow Series have large, vibrant multicolored leaves, and grow bushy up to 15-inches tall.
- Giant Exhibition Series grow up to 20-inches tall with large (6 to 7-inch long) leaves.
- Premium Sun Series are vigorous, mounding, and well-branched, making them great for small spaces and garden borders.
- Fairway Series are super showy and extra dwarf, growing only 8 to 10-inches tall.

Problems With Coleus Plant Care Indoors

Why are my coleus leaves curling? Curling leaves is a more common issue with coleus grown indoors than outdoors. Common causes are downy mildew or other diseases. Low humidity, underwatering and temperature stress can also cause leaf curling. Curling leaves is a sign that something is not quite right with your plant and you should look closely for any problems. Go through the care summary at the top of this article and check you are meeting all of the basic care needs for your coleus.

Why is my coleus drooping? Your coleus may be drooping because of improper watering. Leaves will droop if plants are underwatered and they are drought-stressed, or if the plant was over-watered and is experiencing root rot.
Welcome to MOSOunds! Today, we will talk to Dr. Zak Watson, associate professor of English and chair of the English and Philosophy Department at Missouri Southern State University, and to Dr. Amy Gates, assistant professor of English. We will also be hearing from Elisa Bryant, a Development Officer at MSSU. The book Frankenstein; or, The Modern Prometheus was written in 1816 by Mary Shelley and published two years later.

HOST: In the book, scientist Victor Frankenstein creates a humanoid figure as a result of a scientific experiment, which is somewhat different from the movie versions we have seen. Dr. Watson says an event dedicated to Frankenstein is scheduled for later this spring.

DR. WATSON: So the name of it is Frankenstein Week. It is the first time we are doing it and we hope to continue doing it in future years. It will be held March 5th through 10th, so that first full week of March, before spring break, when it is warming up a little bit, but when it is also in the springtime, which is when the book was initially released back in 1818.

HOST: We associate the book with castles and dark locales, but the actual book starts somewhere near the North Pole, isn’t that right?

AMY GATES: The book is structured as a frame narrative, so the outer frame is Walton, who is doing his own discovery through trying to look for a north passage through the ice, and he sees a strange creature, and shortly thereafter he sees a sort of strange creature who is Victor Frankenstein himself. And then he writes letters to his sisters, and within the letters, Victor Frankenstein tells his story, and within Victor Frankenstein’s story, the creature himself gets to speak.

HOST: Frame narrative: Could you define that?

GATES: It means that the central core of the story is, in fact, the creature’s story in his own words, and containing that is a frame.
In this case, it is the scientist Victor Frankenstein telling his story as Walton heard it from him, and then containing that – this is a double-framed story – we have Walton’s letters holding the whole thing together. It is sort of a nested narrative.

HOST: We asked Dr. Gates what activities are planned for the week.

GATES: Well, we are really excited that we have an outside guest speaker coming to campus. Her name is Dr. Elisa Beshero-Bondar and she’s a digital humanist. That means she works in digital humanities. She is working on updating the digital version of Frankenstein from its original Web 1.0 version to new coding that will be more accessible for new technologies. She is a romanticist, a scholar of Romanticism, so she knows Frankenstein well from that perspective, but she also has this interest in making it accessible through technology.

HOST: Can studying a book with the name recognition of Frankenstein be a good way to get students interested in literature?

GATES: We think so. We hope even more than students, people in the community. Frankenstein feels very familiar to people because they know it from Scooby Doo and from movies, but the novel itself is much more complex, so we have not only our guest speaker but we’ll have a panel of scientists and our philosopher, who will be talking about medical ethics. So we will talk about the history of anatomy and scientific education. We have other things planned that will appeal to different constituencies as well.

HOST: The novel came to be written in a somewhat unusual fashion. Could you tell us about that?

GATES: Well it was … it started in 1816, which is famously known as the year without a summer. The previous year Mount Tambora in Indonesia had the biggest volcanic eruption in history and sent lots of dust into the atmosphere. So the skies were dim. They had record colds that year. During this year, 1816, Mary Shelley, who was not yet Mary Shelley.
She was still Mary Wollstonecraft Godwin. She, her lover the poet Percy Shelley, and her step-sister all went to the Swiss Alps. They ran into the poet Lord Byron and his friend and physician Dr. Polidori. And they were hanging out in the cold, wet summer. So, they read, they talked about scientific experiments, and then they decided to have a ghost-story writing contest to fill the time. And so the two famous poets didn’t do much of anything. Dr. Polidori wrote a novella, which became known as “The Vampyre,” and the really famous, successful story that came out of this was Mary Shelley’s Frankenstein. She published it 19 months later.

HOST: Was the book an immediate hit?

GATES: It was. It was published anonymously, so people had thought a man had written it. Because it was so scientifically oriented, they couldn’t imagine a woman could write it. But it was very popular and adaptations began rather quickly thereafter.

HOST: The potential of what science could do provides a springboard for the events in Frankenstein. Talking about the creation of a human being outside the womb is a very modern thought, isn’t it?

WATSON: I’m trying to think of other novels of the time. Yea, I think it’s fairly new at that moment to be thinking about those scientific questions in the form of a novel, right? It is less domestic than other novels of the time. I mean, in ways. Yea, I think it is pretty different. To find other people writing sort of fictional texts about “sciency” ideas, you have to go almost a hundred years earlier to people like Jonathan Swift. He’s definitely not writing novels.

HOST: The events of the novel – the creation of a human-like figure only referred to as “the creature” or “it” – seem rather bleak. Does science come out on the good end in Frankenstein?

WATSON: Well, one of the sort of most immediate responses, and people still read it this way, is to think the whole problem is that Victor, the scientist, is trying to play God.
He’s trying to create a human in his own image and it goes horribly wrong, so certainly that is one prominent reading. But Mary Shelley herself and her friends were really interested in science and the potential for science. So it’s not an “anti-science” novel, but it’s more about… She’s interested in what happens when science isn’t part of a community… part of conversation and discussion. Victor hides himself away and creates this without thinking or having any tempering conversations with people. There are lots of questions in the novel about nurturing. He ends up being kind of a bad parent. She was a new mother and she understood what babies are like, and the Creature is very childlike and charming, at first. He had a chance, had he had a better parent. Many people read this as less about the science in some ways and more about nurturing.

HOST: One particularly affecting part of the novel is when the Creature comes to Frankenstein and asks him to create a female counterpart for him. That desire to express love or to reach out is very touching. Is that very human moment for this humanoid something that “gets” to most readers?

WATSON: Well yea, he gets started on it and then he destroys the female creature in a fit of rage before he has finished her. But that is an effective moment for students. They are usually surprised when they see that in the novel, and it’s one of the things that makes the novel so different from the films – that human quality that the Creature has. We can see things from his perspective so much.

HOST: And we also hear from him.

GATES: But in the novel, he is quite articulate, well-read, and makes a very strong case for himself when he asks Victor to create a companion for him.

HOST: Now physically, the Creature is extremely tall – around eight feet – and not physically attractive in any way, was he?

GATES: According to Victor he is not.
He was created from what Victor thought were the most beautiful parts, but when he enlivened him, those parts didn't come together well, apparently. HOST: Gothic literature often uses dark scenery, melodramatic narrative devices, and an atmosphere of dread to convey stories. The novel Frankenstein has a scientific emphasis to it, but is it a Gothic novel, Dr. Watson? WATSON: Absolutely. The nested stories in particular are a prominent part of the Gothic novel. The narrative structure of this looks very Gothic, with the interruptions and the repeated shifting of narrators. I would say also that the concern for social connection is an important part of the Gothic novel: that theme of isolation. That the real problem Victor has is that he doesn't have a lot of people around him is typically Gothic. And maybe the third thing that qualifies it as a Gothic novel is that the family structure turns into a site of pure horror. So we have this strange family: just a father, no mother, a strange creature that eventually wants to kill all of Victor's family. This is a typically Gothic sort of family, I think. HOST: Dr. Gates says Frankenstein provided ripe subject matter for drama. GATES: Well, I will tell you that the first stage adaptation was actually in 1823, and Mary Shelley herself went to see it and approved of it. She thought the story wasn't handled terribly well, but she liked the way the actor portrayed the creature, and she wrote positively about that. WATSON: The earliest film adaptation was Edison's, in 1910. So the man who brought us the light bulb wanted to sort of imitate Victor Frankenstein, who brought life to us in this strange way. I think it is really interesting that this was the beginning of narrative filmmaking, really. 1910. This was early, and one of the first texts we put on film is Frankenstein. One theory about why this had to be done is that the text itself doesn't describe the Creature much.
It creates this great desire to see the Creature, but we don't see him very often. So we almost need cinema to come and supply the images the book can't – or the book actually teases us with and refuses to give us. We need movies to show us: this is what Victor saw that night; this is that hideous creation that was a mistake, that wasn't beautiful. HOST: What are two of the best films that are based on the book? WATSON: Well, we hope the two we're going to show on our movie night are good. We have the 1931 James Whale Frankenstein and Mel Brooks's 1974 Young Frankenstein. I think both of those are great films. They're really provocative. They have things to say about how we represent the creature. We get pre-Code horror violence from Whale, and from Brooks we actually get some surprising meditations on why we enjoy the creature. So we're going to hope that both these films give the Creature a chance to put on the Ritz for us. HOST: When you teach this, do you have good discussions about robotics and genetic engineering and test-tube babies, and similar topics we hear about today? GATES: I teach this regularly in my British Literature 2 class, the survey class. We never have time to talk about all the things we would like to, but those questions certainly come up. At the end of the semester, I teach a book by Kazuo Ishiguro called Never Let Me Go, which is also about clones. It came out after the Dolly the sheep cloning experiment. So we're able to come back to Frankenstein and think about the fact that we're still grappling with what humans can do, what they should do, and what the possibilities are. This does capture the imaginations of students: the literature students, of course, but even the students who are taking this class as a general education class have things to think about and say about this novel. HOST: Why, in the larger sense, are we having Frankenstein Week at Missouri Southern? WATSON: Well, Dr.
Gates was the originator of the idea. I think it's going to be a worthwhile effort for us because it lets us connect. This connects thematically with Frankenstein itself: this is a book about a man who didn't connect. This gives us a chance to connect across different departments, and hopefully to the community in various ways, through our story-writing contest and through our panel of experts who will speak on these topics. It will give us a chance to think about the role literature plays in the topics we're still concerned about now. HOST: Activities for an event like Frankenstein Week have a monetary cost. It costs money to bring in a guest speaker, money to pay for travel and lodging and an honorarium. It also costs to obtain the movies the planners want to show and to pay for the rights to show these films. In order to raise the funds to meet these expenses, Missouri Southern is turning to something called crowdfunding. Elisa Bryant is a development officer at Missouri Southern State University. We asked her to explain just what crowdfunding involves. BRYANT: Crowdfunding is fundraising for a project or a venture with a larger audience, done through internet-based channels such as websites, email, and social media, so you raise a large amount of money from many different audiences. HOST: How is crowdfunding being put to work on this project? BRYANT: Frankenstein Week will need six thousand dollars to bring the idea to fruition. Six thousand dollars will help with many events, so we will do a wide range of fundraising, the first being crowdfunding. We will also reach out to English alumni, because the English and Philosophy Department will be fundraising for this. We will be fundraising through crowdfunding in many, many different forms. HOST: What will the crowdfunding pay for? BRYANT: The crowdfunding can and will pay for the speaker who is going to come in for Frankenstein Week.
It will pay for many of the activities and events. We're reaching out to many different local high schools to get involved in the ghost-story writing contest. It'll help pay for many of the ideas and concepts of Frankenstein Week, and it'll help bring it all together. HOST: If someone wants to help with the crowdfunding effort, how can they do that? BRYANT: Yes, we will have a page that will go live around February first. That page will be shared, hopefully, through the English Department Facebook page and our university, and so you can just jump online and donate through our crowdfunding page. HOST: The words of Elisa Bryant of the MSSU Development Office. We have been talking to Dr. Zak Watson and Dr. Amy Gates of the Missouri Southern English & Philosophy Department about Frankenstein Week, scheduled for March 5 through 10 on the campus of Missouri Southern State University. To visit the crowdfunding page, go to lionspaw.mssu.edu/frankenstein. For Missouri Southern State University, I'm Stephen Smith.
The job of the CNS is to protect the body from conscious and subconscious perceived threats to the spinal cord, visceral organs, and brain, and from pain and from soft-tissue and ligamentous/tendinous damage.7 Similarly, if you condition using the "wrong" exercise, your CNS will 'shut down': it will decrease its rate of firing and lengthen its regeneration in order to protect itself from further insult to the structures mentioned above. When it 'shuts down', it will allow the movement to continue, but will begin to recruit abnormal muscle-chain firing (a decrease in the stored elastic energy output/coiling, thus requiring more metabolic muscular structures to be engaged… more effort). As a consequence, this can lead to inefficiency of movement and a loss in performance, not to mention the potential for acute and/or chronic injury. "You can teach your CNS to put the 'brakes on' (i.e. tense up) and fire slowly, or you can teach it to relax and fire fast. The choice is yours." Let us first assume that we are dealing with an athlete whose biomechanical limit is below his/her physiological limit. Now, if this athlete decides to take his/her body past that biomechanical limit, this will be perceived by the CNS as a threat to the structural stability of the structures mentioned earlier; hence it will proceed to 'lock up' certain areas of the body in order to protect those structures and to prevent further pain and potential soft-tissue damage. "The CNS prefers survival over performance, and efficiency over wasted energy and pain." Scar tissue from previous injuries, whether from overt tissue deformation or from more subtle fascial distortion, can similarly negatively impact the CNS, and thus 'shut down' any ability of the athlete to generate optimal performance.
The scar tissue acts as a focal point of aberrant energy transfer and abnormal sensory information, giving the postural system abnormal cues, firing the wrong chain of muscles, and sending and receiving improper kinesthetic, chemoreceptive, and baroreceptive information, among other types of sensory information. The location of the scar tissue(s) on the body can further distort this situation. For example, your typical 'garden variety' ankle sprain can affect numerous chains in the body, potentially setting the athlete up for further injury very distant from the site of the original insult, not to mention the potential of re-injury to the initial site itself! We have seen athletes come in with injuries that occurred several years prior, that were "rehabbed", only to surface as the hidden source of suboptimal CNS firing! "The fewer the soft-tissue restrictions, the greater the ease of movement, the less the CNS threat, the greater the CNS firing." The CNS is constantly governing the performance you are partaking in, making sure it is not threatening the body. In an average person, this monitoring takes up to 90% of the CNS's capacity. In elite athlete populations, it ranges from less than 60% to as low as 40%, which allows the athlete to push beyond his/her mechanical limits. If you strengthen the 'right' muscles with the 'right' exercise (i.e. the correct balance between phasic and tonic muscles for each individual), you will place less threat on the CNS, thus allowing for ease of movement and decreased injury potential. If, however, you choose to condition the body the way 95% of coaches and athletes train, you will either encounter immediate short-term issues or literally set an athlete up for an injury!
"A majority of self-sustained soft-tissue injuries, with the exception of blunt trauma, are the consequence of improper exercise selection and previous soft-tissue distortion, just to mention a few causes." Most coaches and athletes base their exercise selection criteria on what the sport requires of the athlete, as opposed to choosing the right exercise for the athlete (i.e. CNS enhancement). Both perspectives are true, but the latter trumps the former a majority of the time. It is only with the elite few that the inverse is true.8 "Choose the right exercise for the right job, in the state you find the athlete in, and NOT what the sport dictates." The problem we at The System encounter over and over again is the implementation of conditioning methods based purely upon metabolic expenditure and/or, the worst in our opinion, "sport-specific" conditioning, which alludes to our previous point. This approach will only increase injury potential by way of several factors (just to mention a few): increased muscle stiffness, increased muscle-chain weakness, and decreased limb mobility, but mainly a drop in CNS firing. You may argue, "Oh, but my athlete is winning." To this we say: "At what expense?" or "Big fish in a small pond" or "An accident waiting to happen" or "Could your performance be even better?" Our approach is somewhat esoteric for most coaches and athletes to understand. That's fine; we are not here to be bogged down with training semantics, but rather to identify and rectify the "blockages" that are impeding your performance. This means entertaining tried-and-true possibilities that are going to achieve results, and not what are considered conventional conditioning methods. "Sometimes you need to go back to go forward." One of the main areas of CNS input is the joints themselves. Each joint is encapsulated in a 'rubber-boot-like' casing that fascially penetrates the joint three-dimensionally.
Next, there is additional support provided by the ligamentous apparatus, which adds to the joint's integrity. The muscles are merely used to help maintain joint centeredness as they respond to the displacement felt at the level of the joint capsule and ligaments. Your body wants to maintain 3-D joint centeredness before the execution of limb movement. By initiating this step, your body protects the joints from trauma. If by chance the external load is too great (i.e. the joint displacement is too great and the surrounding joint musculature cannot support the joint), this will be perceived as a threat by the CNS, and it will 'shut down' any further movement by that limb in order to protect the joint. An example of this would be performing a lift with a load beyond your 1RM. "Anchor, then move. You can't fire a cannon from a canoe, unless you have pontoons on the canoe." Therefore, if you want more strength, first you cannot have any pain, and second, your ligaments cannot be overextended. Violating these conditions will result in altered muscle firing patterns, altered length-tension relationships between antagonist, agonist, and synergist muscles, altered inter-limb coordination, decreased performance, and ultimately possible injury. "Ease of movement relates to better-timed relaxation and contraction within and between muscle groups, which leads to greater force production with less conscious awareness. When you seek it, you cannot find it… movement just happens… effortless effort." Most therapists resort to soft-tissue methods to obtain increased ROM in limbs. Having said this, we are not here to discourage this practice, nor are we here to say it is ALL about decreased CNS firing. However, we are saying that if there is local scar tissue in the muscle belly, it definitely has to be worked out. But the other question we propose is: "How did the scar form in the first place?"
You might answer, "From a previous injury to that area." The rebuttal to this is: "How did they obtain the injury?" Even if the scar was obtained from blunt trauma or from a self-sustained injury, in either case the CNS still has its foot in the door. If it were due to the latter, the question again would be "Why?", which again takes us back to the CNS.6 "ROM is, more often than not, limited by a threat to the CNS and not necessarily by muscle belly length." If the CNS becomes less 'mindful' about efficient movement patterning, due to habituated aberrant motor patterning by way of poorly learned technical skill, it continues to use that movement pattern until you give it the right movement. In that case, the CNS will 're-set' itself to the correct movement because it is more efficient and causes less pain, discomfort, muscle weakness, joint instability, and muscle/joint tightness in the body. "Practice makes… permanent, not perfect, so be careful what and how you practice." Let's for a minute talk about stretching. The current 'wisdom' we've seen from coaches and athletes alike is the idea of "DYNAMIC FLEXIBILITY." Suppose that an athlete engages in such methods and finds his/her hamstrings tight. By using this method you will temporarily lengthen the hamstrings, but you could also be taking the SI joint beyond its centeredness, not to mention threatening to stretch the sciatic nerve, thus potentially creating a case for muscle guarding or spasm later on. Instead, let's say you told the athlete to engage in some static stretching at a discomfort intensity between 1 and 2 out of 10 (no higher than a 3) for a duration of 2 to 5 minutes. Yes, you will decrease force output by the 'magical' 20% due to the passive elongation of the muscle belly, but this can be brought back up within less than 2 minutes by a skilled masseuse and a combination of some light PNF stretching.
By stretching in this manner, you will not threaten the CNS via the joint structures you are engaging. "Use what works, not what everyone else is doing blindly. Know what you are using; otherwise don't use it." If the CNS is placed in a survival situation (for example, having to lift a car to save a loved one), then in such cases of extreme stress you could technically override the threat signals from the CNS in order to achieve such a performance. To an average person this is a recipe for disaster, but an elite world-record holder could get away with it without injury, though the execution of the movement may not look pretty. One need not look far for an example: just watch some of the athletes at a powerlifting meet as each individual approaches the maximum load for a lift. In cases such as these, you can purposefully override the CNS threats, but you are taking a risk. "Your body can do more than it can do; it just does not know it can." A good coach or athlete watches out for jaw clenching, technique failure, and improper breathing, as these are subtle signs of the CNS shutting down. Therefore, choose your activities wisely to prevent possible threats and improve performance.
The Clock Inside We have many ways of marking the passage of time. Saturday's winter solstice, which marks not just the arbitrary beginning of a season, but also the slow return of daylight to the Northern Hemisphere. Or the coming decade, as many reflect back on everything that's happened since 2010 and prepare to mark the beginning of 2020, a completely human invention. And of course, the clock on the wall and on our smartphones reminds us a dozen times a day of the tasks we haven't yet accomplished, the meetings we've committed to, and the routines of eating, sleeping, and working that all rely at least somewhat on what time it is. But there's also an invisible timekeeper inside our cells, telling us when to sleep and when to wake. These are the clock genes, such as the period gene, which generates a protein known as PER that accumulates at night and slowly disappears over the day, approximating a 24-hour cycle that drives other cellular machinery. This insight won its discoverers the 2017 Nobel Prize in Physiology or Medicine. These clock genes don't just say when you snooze: from the variability of our heart rates to the ebbs and flows of the immune system, we are ruled by circadian rhythms. Erik Herzog, who studies the growing field of chronobiology at Washington University in St. Louis, explains how circadian rhythms are increasingly linked to more than our holiday jet lag or winter blues, but also to asthma, prenatal health, and beyond. And he explains why the growing movement to end Daylight Saving Time isn't just about convenience, but also about saving lives. Erik Herzog is a professor of biology at Washington University in St. Louis, Missouri. IRA FLATOW: This is Science Friday. I'm Ira Flatow. Chances are, you have many ways you keep track of time, right? You may be looking forward to tomorrow's solstice, the official start of winter, the slow return of daylight.
Or you may be thinking back on everything that's happened in the last decade, as we prepare to enter the year 2020. And you may, this very moment, be looking at the clock and thinking about everything you still have to get done before the sun sets and the day is done. But you know there's another timekeeper that you can't see, inside your cells, telling you when to sleep, when to wake up. These are the clock genes, and they don't just say when you snooze. They determine your heart rate, your hormones. We're ruled by circadian rhythms, cycles under scrutiny in the field of chronobiology. Yes. And my next guest says a better understanding of the clock genes might help us with more than our holiday jet lag. There are whole frontiers in health opening up as research and time march on. So let me introduce my guest: Dr. Erik Herzog, professor of biology at Washington University in St. Louis, president of the Society for Research on Biological Rhythms. Welcome to Science Friday. ERIK HERZOG: Hi, Ira. Thanks for having me. IRA FLATOW: You're welcome. You know, I think it's probably news to everybody that there's a gene that keeps track of the time of day. ERIK HERZOG: That's right. There's a handful of genes that we call clock genes that are essential for scheduling our day. IRA FLATOW: And how does that work? What's going on there? ERIK HERZOG: Well, inside individual cells in our body there are 19,000 genes, a subset of which are expressed in any given cell type. And of those expressed genes, there's a handful that are really responsible for keeping near-24-hour time for those cells. So these are genes we call clock genes, because when they're mutated or messed up, the cells lose their ability to keep near-24-hour time. And the way the clock works is something we call the TTFL, or the transcription-translation feedback loop. A clock gene is turned on and it makes its message; that message then gets turned into its protein.
Those proteins accumulate to a critical level, and then they go back into the nucleus of the cell to turn off the transcription of that clock gene. With that repression, the gene then turns off; the messages go away; the proteins go away. And about 24 hours later, the repression goes away and the gene can start expressing itself again. So this internal intracellular clock can keep near-24-hour time in just about every cell in our body. IRA FLATOW: So if each cell in our body has one of these timekeepers, are they all synced together? Because if they're not, wouldn't you have chaos? ERIK HERZOG: Yeah, exactly. So to be a good rhythmic person, sleeping at night, awake during the day, having hormones like cortisol rise just before we wake up and melatonin rise as we go to bed, we need to have a coherent rhythm amongst all of these cells. All the cells need to agree on what is local time. They need to be synchronized to each other. IRA FLATOW: And how does that happen? ERIK HERZOG: So synchrony seems to be mediated in the brain; the action is all there. There's a teeny tiny spot in the base of our hypothalamus, right on top of where our optic nerves cross. So if you follow your eyes back into your brain, sitting right on top of where your optic nerves cross is a spot that's about a millimeter by a millimeter by a millimeter called the suprachiasmatic nucleus, the SCN, meaning it sits on top of the optic nerve crossing. And that spot comprises about 10,000 neurons on the left side of the brain and 10,000 neurons on the right side of the brain, in the base of the hypothalamus. It acts sort of like the atomic clock which synchronizes the clocks in all of our alarm clocks and computers. It's the atomic clock for our body. It is responsible for sending out timing signals to the brain and body to keep us as a coherent orchestra of clocks. IRA FLATOW: Now, we always hear that you depend on sunlight to sort of reset that clock.
Is that why the optic nerve or your eyes are connected to that, to know that there's light out there? ERIK HERZOG: Exactly. So light enters through our eyes and it stimulates our rods and cones, the photoreceptors in our eye. But it also stimulates another population of photoreceptor cells that were only recently discovered. It's pretty amazing that we've known all the cell types in the retina basically since 1865, but within the last 20 years there was a new photoreceptor identified, called a melanopsin cell. It expresses a special pigment called melanopsin. And those cells are the ones that actually project down your optic nerve and make synapses, or connections, in the body clock, the SCN, to communicate when it's day and when it's night, to synchronize your body clock to the local light-dark cycle. IRA FLATOW: And the biggest 24-hour cycle most of us think about, of course, is sleeping. What about this process determines our sleep schedule? ERIK HERZOG: So the master clock in the SCN is intriguing in that it's the same in diurnal and nocturnal organisms. It's metabolically active during the day and relatively quiescent at night. But it's sending out signals to different parts of the brain, we think, that are interpreted as: now it's time for you to sleep. And those signals include regulating things like the hormone melatonin, which is secreted by your pineal gland and is a signal that helps to promote the onset of sleep. IRA FLATOW: So what happens when you have a night owl, somebody who doesn't go to bed before midnight, something like that? Does that upset the sleep center in the brain, or the release of melatonin? What's out of sync with that? ERIK HERZOG: Great. So first I think it's important for us to say that it can be perfectly normal and healthy to be a night owl, to be a late bird, or an extreme early bird. There is natural variation amongst all of us.
And at least some of that variation can be explained by variation in our clock genes. So there's really beautiful work, for example, on one of our clock genes, called the period 2 gene, showing that different mutations in that gene can turn you into either an extreme early bird or an extreme late bird. Just changes in the sequence of that one gene change how fast your clock runs. And if your clock runs with a short period, less than 24 hours, you tend to be an early bird. If your clock runs with a long period, longer than 24 hours, you tend to be a late bird, a night owl. So at least part of the difference between all of us who prefer to wake up either early or late can be explained by our genetics. That's not the whole story. IRA FLATOW: And that's interesting, because we tend to think, well, I'm abnormal because I'm an early bird or a late bird. But you're absolutely normal; you just have variable genes that are doing it for you. The winter solstice is upon us, and for many of us in the Northern Hemisphere that means we're going to start seeing more daylight soon, though at least one listener is not excited by this. He's Mark from Wisconsin on our Science Friday VoxPop app. MARK: I think I might be an outlier, but I actually prefer the shorter days and longer nights. I like the cold weather. And I sleep a lot better when the night is really long. During the summer I have trouble sleeping. It's hard to go to bed when it's light out and get up when it's light out. IRA FLATOW: Erik Herzog, does that mean he has maybe a genetic variation from the norm? ERIK HERZOG: No. In fact we, like many of the animals on this planet, are seasonal creatures. So many of us will describe feeling differently in the summer than we do in the winter. Personal preference aside, I think it's important for us to appreciate that lots of creatures on this planet are seasonal breeders, for example. They adjust to the long days by actual changes in their circadian system.
So the circadian cells in our body are adjusting their relationship to each other to say it's summer, compared to saying winter is coming. These cells are changing their relationship to each other to help us adapt to these seasonal challenges. And in the extreme, this clock can be related to things like hibernation and migration. IRA FLATOW: Let's go to the phones, because we have a couple of interesting calls I want to get to. Elizabeth in Woodland, California. Hi, Elizabeth. ELIZABETH: Hi. I love your show. Happy new year and all. IRA FLATOW: Thank you, you too. ELIZABETH: Anyway, I'm calling because he said light affects the brain. I have three questions. One, how does jet lag work when you're going across the world? Or daylight savings. I have friends who say one hour makes a major difference. Or what about light for people in northern regions, where you have six months of sun and six months not? IRA FLATOW: OK. Happy new year to you too. What do you say, Erik? ERIK HERZOG: Those are three fantastic questions. The first question was about how light affects the brain. We think that it's sending signals to indicate when dawn and dusk are occurring locally. So when we travel across time zones and we are suddenly challenged with the sun coming up, let's say, six hours early, because we just flew from St. Louis to Paris, we have not yet adapted to be able to adjust to that big change. We're a species, just like every other species on the planet, that's had to experience small changes in day length, like a minute or two each day. But now, with the ability to fly across time zones over the last 100 years or so, we are challenged to shift our clock by much bigger amounts, many more hours. And so we're going to try and make our clock wake up six hours earlier when we fly east. And that's why we feel jet lag. The internal clock system is not able to make that shift completely in one day. And what we feel is a sort of internal desynchronization.
The clocks in our body are not synchronized to each other and to local time, and until they get on local time, we can actually feel a form of depression. Getting back to the caller from Wisconsin, this is how the clock system can really affect how you feel. IRA FLATOW: Let me go beyond our discussion about sleep, though. There's been research on how we eat and circadian rhythms, and I'm thinking of a recent clinical study finding that eating within a 10-hour window could stave off diabetes, heart disease, and other problems. Are you familiar with that, and why would that work that way? ERIK HERZOG: Yeah. This is a really beautiful study from Dr. Satchin Panda's lab and his colleagues at the Salk Institute. In that study, they asked whether just restricting your eating to 10 hours of your waking period would have any effect on your body weight. And this particular study was on people who have metabolic syndrome, so they have trouble processing food. And what he showed was that they could actually better manage the digestion of that food. They actually lost weight. They were able to stave off some of the symptoms of diabetes just by eating at the right time of day. So we like to say, it's not just what you eat, but it's also when you eat. And how that works is something that's really still being actively studied, but I'd like you to think about it like this. You have evolved to eat during your waking period and starve all night long while you're sleeping. And so your body has adapted to move the sugars around, from when you're eating to processing those sugars to keep you going while you're sleeping. Maybe an easy example is thinking about a plant, which has to photosynthesize in the light and then starve all night long in the dark. IRA FLATOW: That's quite interesting. Our number: (844) 724-8255. I'm Ira Flatow. This is Science Friday from WNYC Studios.
Are there other medical applications of our understanding of circadian cycles, besides just when we eat? ERIK HERZOG: Yeah. It's really an exciting time for the field of circadian biology, or chronobiology. Two years ago, three scientists in the field won the Nobel Prize for their discoveries of the molecular basis for how these rhythms get started. And I think in part they won the Nobel Prize in Physiology or Medicine because this really beautiful intracellular clock seems to be regulating so many aspects of our biology and our health. So there's a very active area of biology that's being applied to medicine called chronomedicine, where, for example, drugs can be delivered at particular times of day to get better results. Nice examples of this are asthma medications that are designed to be slow-release during the night and act while we're sleeping, when asthma attacks are more frequent, or drugs that are used to treat heart disease and protect us against the increased risk of heart attacks just before we wake up in the morning. My lab is actually in collaboration with two oncologists here at Washington University, and we're studying the potential of a drug that's being used in treating brain cancers, glioblastoma. We've shown that that drug is actually much more effective at killing the cancer at one time of day compared to other times of the day. IRA FLATOW: That's interesting– ERIK HERZOG: Another fun– IRA FLATOW: Yeah, go ahead. ERIK HERZOG: Yeah. One other fun example that I've become very excited about: we have an ongoing collaboration here at Washington University with folks in obstetrics and gynecology, where we're asking whether the risk of preterm birth might be associated with disruption of circadian rhythms. So we've been following 1,200 women here in St. Louis, with funding from the March of Dimes, to ask whether their daily schedules are associated with their risk of delivering preterm.
IRA FLATOW: That’s interesting. So if you look to make use of this knowledge, do you– what can you learn that could help us or make us healthier from your research? ERIK HERZOG: So I think the first thing that folks in the field would like everybody to think about is throwing away your alarm clock. If this biological clock is there to tell you when to wake up and go to sleep, and every time we use an alarm clock we’re waking up unnaturally, if we could just listen to our body clock, that probably would be a good step in moving towards a healthy lifestyle. The applications of this in terms of things like when schools should start are obvious and many people in our field are really working to start high schools a little bit later than they currently are so that kids can wake up naturally instead of by alarm clocks. We’re thinking a lot about lighting scenarios in medical settings like improving the lighting in hospitals. Dr. John Hogenesch in Cincinnati has worked really hard to help the hospitals there think about best lighting conditions for the patients and for the clinicians. IRA FLATOW: Fascinating. I want to thank you. Wow, thank you for taking time to be with us today, Dr. Herzog. ERIK HERZOG: It’s my pleasure. IRA FLATOW: And time, as they say, flies. So I’ll have to say goodbye. I’m sure you’re tired of hearing time jokes by now. Dr. Erik Herzog, professor of– ERIK HERZOG: I have jokes. IRA FLATOW: Professor of biology at Washington University in St. Louis and resident of the Society for Research on Biological Rhythms.
The Fermi Paradox

One day in 1950, Dr. Enrico Fermi asked his fellow physicists, "Where the hell are all the aliens?" Sounds kind of weird, doesn't it? But at that point in time we had just invented rockets and computers, and shown that you could make self-replicating systems. So it's pretty obvious what we are going to do. We are going to keep sending these robots out into space, and as soon as they can do repairs, we're going to let them replicate and fill up the galaxy. (Yes, we're an exploitative bunch.) And we will follow along shortly. So why hasn't this happened? Why don't we see other civilizations? The galaxy has been around about 13 billion years; the earth has been around only about 4.5 billion years. Obviously if life can arise anywhere else but the earth, it should already be out there. And why should we be special? Of course life is out there. If it's out there, it's been around a long time. If it's been around a long time, it's got robots and/or colonies everywhere it wants to. So why don't we see them?

Some people think we're the first intelligent life in this galaxy. Hogwash. I give that a zero percent chance. Or that intelligent species go through a winnowing out at certain points of technology discovery (nuclear weapons, biological weapons, grey goo...) Hogwash again. However, it's certainly possible that many civilizations are decimated by meteors if they don't invent rockets and telescopes. (Decimated like the dinosaurs were -- if you don't want to be like the dinosaurs, then donate to the B612 Foundation, which has pledged to stop this.) This could have easily been our fate if we weren't lucky. Maybe having Jupiter out there sweeping space clean of asteroids has helped us a bit. But surely there are other solar systems like ours that are billions of years older. So what gives?

The solution to the Fermi Paradox

What gives is that the galaxy is really, really, really, really big. And I don't just mean big, I mean R-E-A-L-L-Y B-I-G. How big is it?
It's more than 100,000 light years across. Sounds big. But how big is a light year really? Really, really big. Light travels really, really fast. Faster than anything else in the universe (as far as we know.) That's the law. A physical law based on Einstein's theory of special and general relativity, which says that the speed of electromagnetic waves (light) is a constant. (We physicists designate the speed of light by 'c' because it is the initial letter of celeritas, the Latin word meaning speed...) The speed of light is 300,000 kilometers per second, or 186,000 miles per second. So light gets to the moon in just over a second. The fastest rocket we've ever made (New Horizons, the one that just went past Pluto, by the way) took about nine hours (roughly 31,000 seconds) to cover the distance from the earth to the moon. So light is about 25,000 times faster than the fastest rocket we've ever made. That's really fast. The coolest way to remember the speed of light is to remember that it travels a foot in a nanosecond (one billionth of a second.) That makes it physical. So how many feet are in a year's worth of nanoseconds? Light travels a billion feet in a second and there are about 31 million seconds in a year. So a light year is 31 million billion feet. That's about 6,000 billion miles. The nearest star, Proxima Centauri, is 4.24 light years away. Or about 25,000 billion miles. The fastest rocket we've ever made would take roughly 78,000 years to get there. Is it any wonder the aliens aren't here yet?

Now of course, those aliens probably have faster ships than us. But how much faster? It takes a lot of energy to get going faster, and it takes just as much energy to slow down. Energy that can be used to do a lot of useful things. It turns out that the fastest way to settle the galaxy is going to be to send out robots, then send out the genetic code of humans (or aliens) and grow them there. That way you can bypass all that weary travel that will be so boring to everyone.
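The back-of-the-envelope figures above are easy to check with a few lines of Python (a sketch; the rocket figure assumes New Horizons' launch speed of about 16.26 km/s, the fastest launch velocity to date):

```python
# Back-of-the-envelope check of the light-year figures in this post.
c_km_s = 299_792                  # speed of light, km/s
seconds_per_year = 31_557_600     # "about 31 million seconds in a year"

ly_km = c_km_s * seconds_per_year         # one light year in kilometers
ly_miles = ly_km / 1.609                  # ~6,000 billion miles

proxima_ly = 4.24
proxima_miles = proxima_ly * ly_miles     # distance to the nearest star

rocket_km_s = 16.26                       # assumed: New Horizons at launch
years_to_proxima = proxima_ly * ly_km / rocket_km_s / seconds_per_year

print(f"One light year   ~ {ly_miles / 1e9:,.0f} billion miles")
print(f"Proxima Centauri ~ {proxima_miles / 1e9:,.0f} billion miles")
print(f"Rocket travel    ~ {years_to_proxima:,.0f} years")
```

At New Horizons' speed, the trip to Proxima Centauri works out to tens of thousands of years, which only strengthens the point: nobody is physically commuting between stars.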
And pretty much the only way these settlements are going to be able to talk to each other is by radio or laser. The actual physical travel between the stars just isn't that useful. What would you trade between solar systems? Everything is way, way, way cheaper locally. The only thing you would trade would be information, which would travel at the speed of light. Think about it. We want to put some things on Mars (like people), but nobody wants to pay to bring them back. And that argument gets tens of thousands of times stronger for the nearest star (roughly 25,000 billion miles over 300 million, or about 80,000 times the distance). It would be stupid to move anything between the stars that you didn't have to. And I'm pretty sure aliens are anything but stupid. In fact, there is no reason to travel to another star... except to annihilate any life living around that star. In other words to have a war, or as scientists like to say: evolution in action. And you wouldn't bother to send living things; killer robots can take care of it. Even Stephen Hawking knows this to be true. Although he's kind of late to the game, since we've already broadcast our presence to every alien out there... well, not quite. We've only been broadcasting electromagnetic signals for 60 years or so. This has consequences, which I'll talk about in a minute. But first...

There is no paradox, we've just been lucky... so far

If you think about this, it's obvious why we don't see any aliens out there. Some other alien race saw them first and took care of them: slavery, assimilation, reservations, or annihilation. And the alien race that was first is very careful to keep very quiet and watch everywhere so that it doesn't risk having the same thing done to it. (Thank GOD the galaxies are even further apart; that means we don't really have to worry about intergalactic war any time soon.) But why haven't the aliens already been here and taken care of us? Like I said, it's a really, really big universe. They just aren't here yet.
It's obvious that they've got robots watching us, that'd be relatively cheap, but they wouldn't want to create a possible rival race, so these robots aren't going to do anything on their own (like turn on their masters.) So, the alien robots are waiting, probably out by the Oort cloud. They've sent signals home and are waiting for orders. So the orders from the aliens would only be here if they were closer than 30 light years. There are only 133 stars that are within 50 light years of the earth. There's probably a very small chance that one of these is settled by the master alien race that can decide to annihilate the earth; and then again, why would the aliens bother? We aren't going to be a threat to an alien planet for a long, long time. There's no hurry. Do you think aliens are afraid of our puny nuclear bombs? How does a nuclear bomb compare to the sun? The sun puts out the equivalent of a trillion nuclear bombs every second. I don't think a race that has been sentient for millions (if not billions) of years is very concerned that we might harm it somehow. It's also obvious it could hide as well as it wants to (look at how far our primitive cloaking technology has already come.) However, this progenitor alien race does need to act at some time to protect itself. Now when would that most likely be? It's going to be at least as fast as light can travel back and forth across the galaxy, which we know is <200,000 years, just because no matter where they are hiding, the signal that says 'humans are now a threat' is going to reach them by then and they can respond. In fact, we can write an equation that, given the density (number) of alien settlements in the galaxy, will predict how soon the aliens will be able to respond to us. First we assume that the number of settlements the aliens have in this galaxy is N. How many stars have they bothered to settle? It turns out to be a very lonely thing to settle stars (as discussed above.) 
My guess is that N turns out to be a small number. They'll pick the safest, most stable stars to settle around. No point in having your civilization annihilated in a supernova, so they will spread out, but there's no reason to go everywhere when you can build your own planets wherever you want. Let's calculate the minimum N would have to be if the aliens were going to tell their robots to talk to us tomorrow. There would have to be at least one settled star within 30-50 light years of us. As mentioned before, that's a neighborhood of about 133 stars. So if one of those stars were settled, roughly 1% of all stars would be settled. That means that N would be 1% of all the stars in the galaxy, and since there are about 100 billion stars in the galaxy, N, the number of settled stars, would be about 1 billion stars. Now that seems pretty wasteful. In fact, they've probably settled just enough stars to be close enough to everywhere in the galaxy to 'take care' of new intelligent races and protect themselves. How close do they need to be? How long can they risk us developing technology before we become a risk to them?

Those dang Killer Robots

What's the worst scenario? Killer robots, of course. That's always the worst scenario. There's a chance we might be stupid enough to make killer robots that can reproduce and continue to evolve their technology and killing abilities. (Okay, that's sort of what we are, but we're not quite repairable enough to matter... still, it's probably about the same amount of time to make our lifetimes long enough to be a threat, so we can do the calculation twice and see if we get comparable answers.) How long would it take to make killer robots that can reproduce themselves? Well, Ray Kurzweil thinks it's going to happen in about 25 years. Paul Allen thinks we will still be waiting for the singularity 85 years from now. Over 80% of the things Ray predicted have come true, on time. Nothing that Paul Allen has said will come true has ever come true.
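The minimum-N estimate above can be reproduced in a couple of lines (a sketch using the post's own round numbers: 133 stars within 50 light years, 100 billion stars in the galaxy):

```python
# If at least one of the ~133 stars within ~50 light years were settled,
# what total number of settled stars does that density imply galaxy-wide?
nearby_stars = 133
stars_in_galaxy = 100e9

settled_fraction = 1 / nearby_stars               # ~0.75%, i.e. roughly 1%
implied_settled = settled_fraction * stars_in_galaxy

print(f"Settled fraction:      {settled_fraction:.2%}")
print(f"Implied settled stars: {implied_settled / 1e9:.2f} billion")
```

That is where the "about 1%, about a billion stars" figure comes from, and why a density that high looks wasteful.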
(I can give some very personal examples if you want me to.) And let's remember what we are trying to estimate here: How long from the invention of radio will a species be able to invent killer robots? Remember, this is an existential threat to aliens; they don't want to get it wrong. "Rats, was that a killer robot that just passed me? Dang, if I had just annihilated the human race a year earlier we wouldn't all have to die. Sorry about that, honey. I'll try harder next time." So they are going to be very, very conservative. Cripes! I'm starting to scare myself. Let's just assume, for sanity's sake, that Ray is crazy optimistic. So radios to killer robots takes at least 150 years. Shoot. The aliens would want to put small settlements within 75 light years of everywhere. Rats. I guess they do need to have 25 billion stars settled. Or at least that many outposts, maybe not that many settled stars. Get ready to kiss your ass goodbye.

Predictions

So this makes a few predictions that we can test. We could look for alien robots watching the solar system. How can we do that? That's another blog post, but the easiest thing would be to set up a huge radio telescope looking back towards the sun from far away. That way we could detect anything the robots were sending out. Sounds like a great project for the first... Second, this line of reasoning predicts that we will be contacted by aliens before the singularity. Since that's going to be around 2045... they should be here any minute. There are a few other lines of reasoning that say the same thing, but that's another blog post (how can you predict the lifetime of something you find at random? Or why are you alive now and not in the future?) So, sorry to be such a downer, but despite Stephen Hawking's best intentions, it's already too late to change this. The aliens will be here, and it will most likely be within your lifetime. I'm not sure how to get ready for THAT, except that it's another blog post...
First Killer Robots

Here's a picture of the first killer robot: an automated, radar-guided Gatling gun built to shoot down jets, installed on the battleship Missouri during Reagan's presidency. Looks frighteningly like a Dalek, doesn't it? Thanks for reading!
In 1772, the sheriff of Culpeper County was ordered to arrest a Baptist minister for "unlawfull preaching"
Source: Library of Congress, Religion and the Founding of the American Republic, Summons to Nathaniel Saunders, August 22, 1772

American citizens now assume they have an "inalienable right" to worship however they please or to choose not to worship at all. One standard joke illustrates the flexibility of American religious thought:1 Religious freedom, or even tolerance, was not supported by Virginia's government until 1776. Just as in England across the Atlantic Ocean, the power of Virginia's government was united with the power of the Church of England (Anglican church) as an "established" religion. Quakers were expelled from the colony by Gov. William Berkeley after he was restored to office in 1659, and non-Anglican preachers had to be licensed by the county court. The American Revolution disrupted that traditional government structure and led to disestablishment of the Anglican church and official separation of church and state. Thomas Jefferson and James Madison led the charge to create legal guarantees for freedom of religious thought and practice. Modern courts must interpret their language to assess whether a law crosses a line and unconstitutionally assumes governmental power to interfere with a religion, or to support a particular religion.

Virginia was not settled by Europeans seeking to create a haven for religious liberty. The long history of European colonization in North America reveals that the desire for property and profit was the primary incentive for crossing the Atlantic Ocean. Though Virginia ended up being settled by members of the Church of England (Anglicans), the first colonists in North America and what became Virginia were Catholics. Ponce de Leon brought Catholic priests with him to Florida in 1521, as part of the first European colonization effort in North America.
Lucas Vasquez de Ayllon brought Dominicans in 1526, when he started the San Miguel de Guadalupe colony in what is now Georgia.2 The first Europeans who tried to settle in Virginia also were Catholics. Spanish Jesuits led by Father Juan Baptista de Segura started the Ajacan settlement, near modern-day Yorktown, in 1570. The Native Americans there killed 10 of the 11 Spaniards in 1571; the teenage boy they let survive was rescued by a Spanish ship in 1572.3 English colonization in Virginia was equivalent to Protestant colonization. One of the first actions by the initial English settlers when they arrived at Virginia was to build a wooden cross at Cape Henry. When Jamestown was founded in 1607, the Church of England (Anglican) was "established" in the colony of Virginia as the official church, with King James I as the Defender of the Faith. Catholics would not be allowed to worship openly in Virginia until 1781, when French troops involved in the siege of Yorktown celebrated Mass in Alexandria.4

In 1935, National Society Daughters of the American Colonists installed a granite replica of the wooden cross erected in 1607 at Cape Henry
Source: National Park Service, Cape Henry Memorial Cross

Virginia's Protestant gentry became well-entrenched in county courts, the House of Burgesses, and also in Anglican vestries. The vestry was the governing board of a parish. Members of the vestry consisted of the wealthy elite living within that parish. Because the vestry hired Anglican ministers on short term contracts, few ministers gained enough power to become independent of vestry control. If sermons within worship services or other activities of the minister were not sufficiently aligned with the perspectives of the local gentry, the minister's contract was not renewed. With a few exceptions, Puritan ministers were pushed out of Virginia quickly. In the colonial period, the Anglican church had a key role in what today would be considered fundamental government services.
The vestry set the parish tax rates for maintaining the church buildings, paying the minister, and funding social welfare expenses such as caring for orphans, the indigent, and others unable to support themselves. Parish taxes were collected by the county sheriffs, along with the other taxes imposed by the county courts (equivalent to a combination of today's Board of County Supervisors and District judges). There was no separation of church and state; everyone, no matter what their personal beliefs, was required to pay taxes that funded Anglican activities. There also was no acceptance within the Virginia gentry of non-Anglican beliefs in the 1600's. Throughout the colonial period, only one Catholic family gained wealth and power, the Brents in Westmoreland and (after 1664) Stafford County. George Brent and his family had to worship privately. The county court in Stafford sought to increase acceptance of the Brents by issuing a certificate in 1668 stating that the Brent family had not tried to convert anyone to their Catholic faith for the last two decades.5 After George Brent had been elected to serve in the House of Burgesses, King James II was forced to leave England during the Glorious Revolution of 1688. In 1689, the Anglican minister of Overwharton Parish in Stafford County, Parson John Waugh, inflamed suspicion of the local Catholic leader. During the "tumult" created by his agitation, the Stafford County Court ordered the Brent family to stay at the home of a prominent local Protestant, William Fitzhugh, in a form of house arrest.6 Protestant and Catholic rivalries dated back a century to the reign of Henry VIII. He split from the church based on Rome in order to legitimize his first divorce, and declared that the King of England rather than the Pope was the top authority for religious decisions in England. 
Claiming that the national religion was based on the sovereign ruler's religion led to conflict when Henry VIII's daughter assumed the throne in 1553. She was Catholic, and married the Catholic son of the king of Spain. She had religious dissenters burned to death, and became known as "Bloody Mary" after graphic accounts and images were published in Foxe's Book of Martyrs. When she died in 1558, Henry VIII's second daughter became queen. Elizabeth I was a Protestant, and the definition of heresy changed dramatically as she punished heretics who supported Catholic dogma and the role of the Pope. Catholic Spain tried to invade and conquer Protestant England in 1588, but the Spanish Armada was dispersed in the English Channel. Anti-Catholic bias became closely associated with English nationalism.7 Virginia was settled initially when James I was king, and grew during the reign of his son Charles I. They were the head of the Church of England, but the forms of worship and the words used during religious services were contested by different factions within the church. In contrast to Virginia, Maryland and Pennsylvania were more tolerant of diverse religious beliefs. Maryland had been chartered in 1632 because King Charles I owed favors to George Calvert, Baron Baltimore, and his son Cecil Calvert. The Virginians based in Jamestown viewed Maryland as a rival, rather than as an ally in the isolated wilderness of North America. Virginians objected to the loss of land included within the boundaries of Virginia's 1612 charter and the seizure of William Claiborne's lucrative fur trading business based on Kent Island. Virginians also objected because the Calverts were Catholic, and would fill Maryland with Catholic colonists. The Virginians had made their dislike of Catholics clear to Sir George Calvert in person.
When Lord Calvert sailed from his failed colony in Newfoundland to Jamestown in 1629, he was unwilling to take the Oath of Supremacy that Charles I was the Supreme Head of the Church of England. Acting Governor John Pott forced Lord Calvert to sail back to England, where he then proceeded to obtain the charter for a new colony. Calvert named his new colony after Henrietta Maria, the Catholic wife of Charles I.8 Religious disputes between Puritans and the traditional Anglican leaders would lead to civil war in England and the execution of Charles I in 1649. Those conflicts were carried across the Atlantic Ocean to Maryland, which attracted a mix of both English Catholics and English Protestants. In 1649, the Maryland legislature approved the Maryland Act Concerning Religion, or Maryland Toleration Act, which Calvert had prepared. By then, Catholics were a minority of the population in Maryland. Cecil Calvert, Lord Baltimore, had not defined an established church in his colony. The law applied the same punishment to Puritans, Anglicans, Catholics, and others who criticized a Christian faith:9 The Maryland Toleration Act was not sufficient. By 1676, only 25% of the residents in Maryland were Catholics, but they controlled most of the colony's political offices and collected fees from everyone. Colonists in Tidewater feared that the Catholics in the western backcountry might ally with French raiders because of a shared religion. The Calverts lost control over their proprietary colony in a Protestant-led coup in 1688. That occurred the same year that James II, England's last Catholic ruler, was forced from his throne in the Glorious Revolution. In 1710, the Church of England became Maryland's established church, and Catholics were excluded from office.10 In Pennsylvania, William Penn managed to encourage religious toleration throughout the life of that colony. He issued a formal Charter of Privileges in 1701.
Pennsylvania attracted a diverse set of settlers in addition to immigrants from England, including Swedes, Dutch, Finns, and refugees from many small principalities which would ultimately become part of Germany. Penn's charter gave monotheists the freedom of conscience, and allowed any Christian to hold public office:11 Virginia suppressed Quakers and Puritans as well as Catholics. What may be the first Society of Friends meetinghouse was built at the mouth of Nassawaddox Creek in 1657, during the English Civil War. The Eastern Shore was physically isolated from Jamestown, and the extensive international trade brought sailors of different backgrounds to small communities along the Chesapeake Bay. Virginia officials did not tolerate the Quakers. By 1662, Col. Edmund Scarborough forced those on the Eastern Shore to move north of the Pocomoke River into Maryland. A year later, he led a raid across the border and attacked the Quaker settlements. That triggered a dispute with Lord Calvert in Maryland, followed by a 1668 survey to define the Virginia-Maryland border on the Eastern Shore. After George Fox came to Virginia in 1672, what is now "the oldest continuous congregation in Virginia" of Quakers started near the Dismal Swamp. That isolated area was a haven for Puritans as well. It was distant from the gentry who created plantations in Tidewater and ruled from Jamestown. South of the James River, tobacco grew poorly. Colonists traded less with England and more with other colonies in North America and with Caribbean islands. Greater business dealings with non-Anglicans led to a less-traditional culture around Suffolk and Norfolk.12 Puritans concentrated there as well. Philip and Richard Bennett came to Warrosquyoake around 1630 and developed Bennett's Welcome plantation. Puritans came to both Maryland and Virginia as conflicts in England grew more heated, and concentrated along the Nansemond River.
They sought the freedom for themselves to worship in the Puritan style, but were not advocating that other religious groups have the freedom to worship in their own way. In 1642, Philip Bennett went to Massachusetts to recruit Puritan ministers to serve in parishes in Isle of Wight, Upper Norfolk/Nansemond, and Lower Norfolk counties. However, Governor William Berkeley came to Virginia in 1642. He was a strong supporter of Charles I, and viewed religious nonconformity as both heresy and political disloyalty. Under Berkeley, the colonial government in Jamestown began to demand standard use of the Book of Common Prayer in worship services. He forced the three Puritan ministers recruited by Philip Bennett to return to Massachusetts, and later banished other Puritan leaders. Most followers also left, migrating to Maryland by 1650. The 1649 Maryland Act of Toleration offered a clear contrast to Gov. Berkeley's religious intolerance. In 1652, after Parliament had seized power in England and executed Charles I, Gov. Berkeley was forced to step down. The General Assembly selected Richard Bennett to become the next governor, so between 1652-1655 a Puritan was the top official in Virginia. Bennett sought to impose Puritan control in Maryland as well. That triggered the Battle of the Severn between Catholic royalists and Puritans, while in Virginia there was no open warfare between the Anglican royalists and Puritans because most dissenters had left the colony.13 The Great Awakening began to affect Anglican domination of religious activity in Virginia in the 1740's. Unlicensed preachers began to offer independent services in private homes and scattered outdoor locations. Hierarchical control of culture by the gentry was threatened by evangelical preaching, emotional behavior during worship services, and new philosophies (such as baptizing believers only as adults, after they made a conscious choice). 
The outreach of dissenting religious leaders to African-American slaves was perceived as a particular challenge to the status quo.14 Colonial officials actively recruited non-Anglican Protestants to come to Virginia. Presbyterians dominated the Shenandoah Valley after Scotch-Irish migrated to that region with encouragement in the 1720's from Governor Spotswood (who sought a buffer population between Native Americans and French Catholics in the Ohio River valley). Other immigrants west of the Blue Ridge belonged to German sects, including the Mennonites. Baptist groups developed in the Piedmont, plus areas of Tidewater dominated by traditional Anglican churches. Anglicans reacted by disrupting Baptist services, and by arresting, and even physically attacking, dissenting preachers.
Hello, aircraft fans! In this edition of the Plane Crash, we’ll look at the U.S. Navy’s WW2 top three: the Grumman F6f Hellcat, the Vought F-4U Corsair, and the Grumman F4f Wildcat. Wildcat: Before the greatness of aircraft like the Grumman Hellcat and Vought Corsair, the Grumman F4f Wildcat was a fine aircraft. First built in 1939, this rugged mid-wing 318-mph six machine-gun aircraft held a critical point in the U.S. Navy until better aircraft were supplied. For instance, Lieutenant Butch O’Hare destroyed five Japanese bombers in six minutes. Later, despite being shot down in the Pacific, the Chicago-O’Hare airport was named for him. The Wildcat had a crew of 1, one 895kW (1200hp) Pratt & Whitney R-1830-66 radial engine, a maximum speed of 512km/h(318mph), a range of 1239km (770 miles), and a service ceiling of 10,638m (34,900ft). Dimensions are as follows: Wingspan: 11.58m (38ft.). Length: 8.76m (28ft. 9in.). Height: 3.61m (11ft. 10in.). Armament: Six 12.77mm (0.50in.) machine guns in wings and an external bomb load of 91kg (200lb.). Total loaded weight was 3607 kg (7952lb.). Hellcat: The Hellcat flew for the first time on June 26, 1942. Many of its war abilities had been learned from its predecessor, the Wildcat. Specifications for this war-changing plane are as follows: Powerplant: one 1492 kW (2000hp) Pratt & Whitney R-2800-10W radial engine. Performance: Maximum speed: 612 km/h (380mph). Range: 1521km (945 miles). Service ceiling: 11,369m (37,300ft.). Dimensions: Wingspan: 13.05m (42ft10in.). Length: 10.24m(33ft.7in.). Height: 3.99m (13ft.1in.). Armament: six 12.7mm(0.50ibn.) machine guns in wings, or two 20mm(0.79in.) cannon and four 12.7mm(0.50in) machine guns, provision for two 453kg (1000lb) bombs or six 12.7cm (5in) RPs. Weight: 7025kg (15,487lb). In all, the Grumman F6f ran up a 19 to 1 kill ratio. And now: the Chance Vought F4U Corsair. 
The speed, strength, and firepower of the Corsair enabled it to dominate Japanese opposition, shooting down 2140 aircraft against a loss of 189. Its performance and dependability allowed great flight leaders like John Blackburn, John Smith, Marion Carl, Joe Foss, and Pappy Boyington to create legendary fighter squadrons. It was truly a superior aircraft. Have a great day!

Hello, aircraft fans! In this edition of the Plane Crash, we'll take a look at the aircraft of the BBMF, or 'Battle of Britain Memorial Flight' of the RAF.

Now, we will take a look at the Avro Lancaster. Specifications are as follows: a crew of seven; four 1233 kW (1640 hp) Rolls-Royce Merlin 28 or 38 12-cylinder V-type engines; a maximum speed of 462 km/h (287 mph), a range of 2784 km (1730 miles), a service ceiling of 5790 m (19,000 ft); a wingspan of 31.09 m (102 ft), a length of 21.18 m (69 ft 6 in), and a height of 6.25 m (20 ft 6 in), all adding up to a total loaded weight of 29,484 kg (65,000 lb). In addition, the armament was two 7.7 mm (0.303 in) machine guns in the nose turret, two in the dorsal turret and four in the tail turret, plus a maximum internal bomb load of 8165 kg (18,000 lb). It was a splendid aircraft, and the BBMF's Lanc is still flying and is coded 'PA474'.

The Hawker Hurricanes: coded LF363 and PZ865. Despite all of their Battle of Britain fame, the two Hurricanes, Night Reaper and The Last of the Many, have both seen numerous disasters since rolling off the factory lines. Despite this, the little single-seat, 1460 hp Rolls-Royce Merlin-powered 322 mph fighter is still in use in air shows.

Supermarine Spitfires P7350, AB910, MK356, PM631 and PS915 make up the most important part of the Flight. They have had not nearly as many disasters as the Hurricanes, and all of them, especially 'THE LAST', PS915, have been a great part of RAF history.
With a crew of one; one 1074 kW (1440 hp) Rolls-Royce Merlin 45/46/50 V-12 engine; a maximum speed of 602 km/h (374 mph), a range of 756 km (470 miles), and a service ceiling of 11,280 m (37,000 ft); as well as two 20 mm (0.79 in) cannon and four 7.7 mm (0.303 in) machine guns, this all added up to a total loaded weight of 3078 kg (6785 lb). And now: the Douglas DC-3 Dakota (or C-47 Skytrain), ZA947. The Flight’s DC-3 succeeded the de Havilland Devon as the main support aircraft in 1993. The Flight also uses the de Havilland Chipmunk. Have a great day! Hello, aircraft fans! In this edition of the Plane View, we’ll take a look at the long line of Grumman aircraft. From the ’31 FF-1 to the EA-6, we will see how Grumman has one of the longest lines of aircraft, and also one of the best. And now: the Grumman FF-1. The FF-1 was a Golden Age aircraft, and it even served in the Spanish Civil War on the Republican side. It had a crew of two, a 709 kW (950 hp) Wright R-1820-22 Cyclone 9-cylinder radial engine, a maximum speed of 418 km/h (260 mph), a range of 1819 km (1130 miles), and a service ceiling of 9845 m (32,300 ft), as well as a wingspan of 9.75 m (32 ft), a length of 7.01 m (23 ft), and a height of 2.84 m (9 ft 4 in). The weight was 2155 kg (4750 lb) loaded; the armament was one 12.7 mm (0.50 in) and one 7.62 mm (0.30 in) machine gun in the upper forward fuselage, as well as an external bomb load of 105 kg (232 lb). The Grumman G-21 Goose was a high-winged, amphibious aircraft with retractable landing gear, a crew of 2, and a variable payload, changing depending on whether passengers or freight was being carried. A few are still in service today, as they are a grand old plane, first built in 1937. Although overshadowed by the greatness of aircraft like the Grumman Hellcat and Vought Corsair, the Grumman F4F Wildcat was a fine aircraft. First built in 1939, this rugged mid-wing, 318-mph, six-machine-gun aircraft filled a critical role in the U.S. Navy until better aircraft were supplied.
For instance, Lieutenant Butch O’Hare destroyed five Japanese bombers in six minutes. He was later shot down in the Pacific, and Chicago’s O’Hare airport was named for him. The Grumman TBF Avenger was an effective torpedo bomber, second only to the Douglas SBD Dauntless among the Navy’s bombers. On the fighter side, the Grumman F6F Hellcat, which won the war in the Pacific, the F7F Tigercat, and the F8F all proved to be at least worthy aircraft. Search and rescue: the SA-16 Albatross of ’47 and the S-2 of ’52 were both excellent, the SA-16 in search and rescue and the S-2 as a submarine killer. But the E-2 of ’60 surpassed both when it came to searching for enemy aircraft. Back to fighters: the F9F, F11F, and F-14 all proved to be sufficient for their time. The F-14 Tomcat had more than 30 years of service, but has now been replaced by the Boeing/McDonnell Douglas F/A-18 Hornet. Lastly: the Grumman A-6 Intruder and EA-6 Prowler are the best attack and radar-jamming aircraft ever. The current Prowler is greatly needed, as skies are again becoming hostile (get ready for World War III!). Hope you enjoyed this post. Have a great day! Hello, aircraft fans! In this edition of the Plane Crash, you’ll find out about the Lockheed P-38 Lightning, which was one of the greatest aircraft of WWII. Get ready, because as of Super Bowl week, I’m going to be writing a football blog post. So everybody root for San Francisco, and rejoice that the Patriots won’t make it to Super Bowl XLVII. Jack Harbaugh must be pretty darn excited. On January 27th, 1939, one of the greatest aircraft of all time had its first test flight. The program had begun in 1937, due to a USAAC requirement. This aircraft could go an amazing 360 M.P.H. at 20,000 feet, and 290 M.P.H. at sea level. It had a crew of one, a maximum speed of 414 M.P.H., a range of 2,260 miles, a service ceiling of 44,000 feet, and a weight of 21,600 pounds (loaded).
It had an outstanding armament of one 20 mm cannon and four 12.7 mm machine guns, along with a bomb and rocket load of 4,000 pounds. Despite its superiority, it has always tended to be overshadowed by Republic’s P-47 Thunderbolt and North American’s P-51 Mustang. That is mainly because both of those aircraft served prominently in both theatres of the war, while the P-38 was mainly used in the Pacific Theatre. But there were still those pilots like Robin Olds. The Lightning was aptly named, for it immediately set speed records. A loopy pilot, Lieutenant (later Brigadier General) Benjamin S. Kelsey, had logged just 7 hours in the XP-38 when he decided to try to break Howard Hughes’s transcontinental flight time record of seven hours, twenty-eight minutes, and thirty seconds. Kelsey took off on February 11th, 1939, and the aircraft blazed across the country. But on his descent to Mitchel Field on Long Island, New York, disaster struck. After seven hours and two minutes of flight, carburetor icing took away both engines’ power, and the aircraft crashed on a golf course. Kelsey came out splendidly, but the aircraft was damaged beyond repair. Despite the tragedy, the crash brought the government’s and the public’s attention to their new 414-M.P.H. fighter. The P-38 had only a few downsides: maneuverability, its twin-engine layout, and the unreliability of its two 1063 kW (1425 hp) Allison V-1710-91 12-cylinder Vee-type engines. Even though the two engines were crucial to speed, descent had to be started much earlier than in most other aircraft. The Allison engines were hard to operate in cold weather, but the P-38 was still used often, flying from Normandy or other Allied bases, including Andover in Hampshire, deep into Germany. Lockheed surprisingly made the only U.S. fighter that was in production both before and after the war. Major Richard I. Bong, the highest-scoring pilot in U.S.A.F.
history, shot down a total of 40 aircraft, while Tommy McGuire shot down 38 before being shot down over the Philippines in 1945. Also, the amazing mission that killed Admiral Isoroku Yamamoto was flown by P-38s. They flew from Guadalcanal to destroy Yamamoto’s aircraft over Bougainville, near Kahili airfield. Making the 1,100-mile round trip was no easy feat. It was truly a WWII classic. Have a great day! Hello, aircraft fans! This report is on the Attack on Pearl Harbor, in honor of the recent holiday, Pearl Harbor Remembrance Day. Hope you enjoy reading it! Many men, such as General Billy Mitchell, had said that early some Sunday morning, the Japanese would attack Pearl Harbor. On December 7th, 1941, disaster struck. The American commanders were Husband Kimmel and Walter Short, and the Japanese had Chuichi Nagumo and Isoroku Yamamoto. In the American fleet, there were 8 battleships, 8 cruisers, 30 destroyers, 4 submarines, 1 USCG (United States Coast Guard) cutter, 49 other ships, and 390 aircraft. But the Japanese had 6 aircraft carriers, 2 battleships, 2 heavy cruisers, 1 light cruiser, 9 destroyers, 8 tankers, 23 fleet submarines, 5 midget submarines, and 414 aircraft. The American losses were 4 battleships sunk, 3 battleships damaged, 1 battleship grounded, 2 other ships sunk, 3 cruisers damaged, 3 destroyers damaged, 3 other ships damaged, 188 aircraft destroyed, 159 aircraft damaged, 2,402 killed, and 1,282 wounded. Japan still had major losses: 4 midget submarines sunk, 1 midget submarine grounded, 29 aircraft destroyed, 64 killed, and 1 captured. That comes out to roughly a 4,065 to 99 casualty ratio. The Japanese used 353 aircraft. Unfortunately for the Japanese, all 5 midget submarines were lost. A Gallup Poll before the attack found that 52% of Americans expected war, 27% did not expect war, and 21% had no opinion. The downside of attacking Pearl Harbor was that none of the American aircraft carriers were in the bay.
Due to Japanese expansion into French Indochina, the USA stopped oil exports to Japan in July of 1941. The Japanese then planned to take over the Dutch East Indies, which were very oil-rich. Japan was forced to either withdraw from China and lose face or take over the European-controlled countries of Southeast Asia. On November 26th, 1941, the Japanese striking force of the aircraft carriers Akagi, Kaga, Sōryū, Hiryū, Shōkaku, and Zuikaku left northern Japan for a position northwest of Hawaii. With 408 aircraft embarked, they hoped to strike Pearl Harbor easily from the air. The first of the two waves was to take out all primary targets, with the second finishing them off. At 3:42 AM Hawaiian time, the American minesweeper Condor spotted a midget submarine periscope west of the Pearl Harbor entrance buoy and radioed this to the destroyer Ward. The submarine may have entered the harbor; in any case, Ward sank a midget submarine at 6:37 AM, firing the first American shots in the Pacific Theatre. A midget submarine north of Ford Island missed the seaplane tender Curtiss with her first torpedo and then missed the destroyer Monaghan with her other before being sunk by the Monaghan at 8:43 AM. Another midget submarine grounded twice, with one crew member swimming ashore to become the first Japanese prisoner of war. The boat was captured on December 8th. The USS West Virginia may have been hit by a midget submarine’s torpedo. Slow, vulnerable torpedo bombers led the first wave, exploiting the first moments of surprise to attack the most important ships present (the battleships), while dive bombers attacked U.S. air bases across Oahu, starting with Hickam Field, the largest, and Wheeler Field, the main U.S. Army Air Force fighter base. The 171 planes in the second wave attacked the Air Corps’ Bellows Field near Kaneohe on the windward side of the island, and Ford Island.
The only aerial opposition came from a handful of P-36 Hawks, P-40 Warhawks, and some SBD Dauntless dive bombers from the carrier USS Enterprise. Most of the ships’ crews were asleep, so they offered little resistance. The entire attack lasted a stunningly short ninety minutes. Of the 402 American aircraft in Hawaii, 188 were destroyed and 159 damaged, 155 of them on the ground. Almost none were actually ready to take off to defend the base. Eight Army Air Corps (Air Force) pilots managed to get airborne during the battle, and six were credited with downing at least one Japanese aircraft during the attack: 1st Lt. Lewis M. Sanders, 2nd Lt. Philip M. Rasmussen, 2nd Lt. Kenneth M. Taylor, 2nd Lt. George S. Welch, 2nd Lt. Harry W. Brown, and 2nd Lt. Gordon H. Sterling Jr. Sterling was shot down and killed by friendly fire while returning from the fight. Of the 33 PBY Catalinas in Hawaii, 24 were destroyed and six others damaged beyond repair. The three on patrol returned undamaged. Friendly fire brought down some U.S. planes on top of that, including five from an inbound flight from Enterprise. Japanese attacks on barracks killed additional personnel. Fifty-five Japanese airmen and nine submariners were killed in the action, and one was captured. Of Japan’s 414 available planes, 29 were lost during the battle: nine in the first attack wave and 20 in the second. Another 74 were damaged by antiaircraft fire from the ground. Despite many of the Japanese crewmen’s wishes, a third wave was not carried out. Here is a list of some of the main aircraft. The Nakajima B5N2 “Kate” torpedo bomber was arguably the most important Japanese aircraft of the fight, since the land-based “Betty” bomber took no part in the attack. The Aichi D3A “Val” dive bomber was also important, but many were destroyed later in kamikaze missions. On the American side, the main aircraft were the Curtiss P-36, Curtiss P-40, and the Douglas SBD Dauntless.
The Dauntless was one of four aircraft that turned the war in the Pacific around, along with the Lockheed P-38 Lightning, Curtiss P-40, and Grumman F6F Hellcat. Here are some photos of the American ships after the attack. The USS Arizona Memorial on the island of Oahu honors the lives lost during the attack. Pearl Harbor Remembrance Day, December 7th, is perhaps the largest holiday in the Hawaiian Islands. The attack remains the largest military disaster on soil that would later become American. Have a great day! Isaiah S. Casey
<urn:uuid:96d86a77-883b-4092-94ed-9b97e68030f9>
CC-MAIN-2021-43
https://isaiahsairplaneblog.com/tag/wwii-aircraft/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588216.48/warc/CC-MAIN-20211027150823-20211027180823-00271.warc.gz
en
0.955685
4,094
2.96875
3
Terror groups have always demonstrated the ability to quickly and effectively adapt new technologies for their activities. New communication technologies have played a central role in this regard, used to propagandize, recruit, organize, arrange financing, and transfer knowledge and skills. For example, already in the late 1990s, and despite being headquartered in Afghanistan, Al-Qaida members began using shared email accounts, a relatively new technology at the time, to communicate. In order to increase communications security and avoid electronic detection, Al-Qaida fighters shared the login data of free email accounts and exchanged messages via the draft folders.

Extremists’ use of IT

In July 2010, Al-Qaida in the Arabian Peninsula (AQAP) started publishing the first global Al-Qaida internet magazine, called Inspire. Inspire magazine is not only a propaganda tool for AQAP but, from the outset, also aimed at motivating terror attacks on a global scale, including by offering dangerous practical advice. Already in its first edition, Inspire magazine published the now famous “Make a Bomb in the Kitchen of Your Mom” article, which was subsequently used by the Boston Marathon bombers for the design of their bombs in 2013. The magazine also provided suggestions for targets, such as airlines and airports, and gave advice on how to defeat the security measures in place. With the rise of the Islamic State in Iraq and the Levant (ISIL) from 2014 onwards, the issue of the misuse of modern communication technology via the internet and social media came to the forefront of the public mind.
While, in general, the online activities of ISIL members were only a continuation of the technological adaptation already started by the global Al-Qaida network, ISIL’s propaganda and communications skills took this adaptation to new heights, in part reflecting the tech-savvy skills of the younger generation of radicals that joined the terror group. ISIL professionalized not only its online propaganda but also its recruitment activities, and used the global reach of social media tools to inspire attacks outside Iraq and the Syrian Arab Republic and to raise funds. The recent terror attack in Christchurch, New Zealand makes it clear that the misuse of internet and social media services is not limited to organisations such as Al-Qaida or ISIL: right-wing extremist terrorists have “discovered” the opportunities such services offer for sharing information and connecting on an international scale. The Counter Extremism Project (CEP) has been monitoring and documenting the increasing online activities of right-wing extremist groups around the globe for a number of years.

Counter Extremism Project

CEP is a not-for-profit, non-partisan, international policy organization formed to combat the growing threat from extremist ideologies. Its work spans various forms of extremist behavior, from religiously based terrorism to right-wing extremist groups around the world, and focuses on combating extremism by pressuring financial and material support networks; countering the narrative of extremists and their online recruitment; and advocating for smart laws, policies, and regulations. CEP maintains a series of partnerships and a network of representatives in Washington, New York, London, Brussels, Paris, and Rome, and is in the process of establishing a new representation in Berlin.
In Europe and the United States, CEP has acted as a specialist advisory body for political decision makers in parliaments and government departments on the threat posed by the misuse of internet and social media services by terrorist and extremist organizations, the use of new technology to counter this threat, and the design of effective and smart regulation of cyber space. CEP, in cooperation with Prof. Dr. Hany Farid, developed eGLYPH, a smart hashing technology that allows the identification of previously defined terrorism-related content on any platform. CEP released eGLYPH in 2016 to demonstrate that efficient and cost-effective technological solutions exist that would allow internet and social media service providers to monitor and police their platforms in a more diligent and effective manner than is currently the case. In order to increase its global reach, CEP recently established a formal partnership with the United Nations Office of Counter-Terrorism, centered on raising awareness of new technological solutions to counter the threat of terrorism in cyber space.

New regulatory developments

The first attempt by a European country to require platform providers to remove illegal content, including terrorism-related content, was the Netzwerkdurchsetzungsgesetz (NetzDG) in Germany, which came into force in 2018. Germany decided to opt for a regulatory approach after self-regulatory attempts did not produce satisfactory results. In a first-of-its-kind study, CEP, in cooperation with the Centre for European Policy Studies (CEPS), released an analysis of the effects of the NetzDG in December 2018. The study, based on extensive data collection and interviews with representatives from the tech industry, demonstrated that the concerns raised before NetzDG was passed did not materialize. NetzDG did not provoke mass take-downs and therefore clearly did not limit freedom of expression in Germany.
Furthermore, the new law did not impose significant costs on platforms. Even small companies were able to implement the provisions of the law at a cost of 1% of their annual revenue. Efforts to develop similar regulation are currently underway in the United Kingdom. The recent initiative of France and New Zealand, which convened a meeting between governments and major tech companies in Paris on 15 May 2019 calling for the elimination of extremist content online, emphasized again the importance of regulatory approaches and the increasing willingness of governments to work toward this goal.

EU regulation under way

In September 2018, the European Commission initiated a proposal for a regulation on “preventing the dissemination of terrorist content online” (European regulations, if adopted, apply directly and uniformly to all EU countries). Unlike the German NetzDG, the proposed regulation is focused solely on terrorism-related content. The proposal required platform providers to remove notified content within one hour and to take measures to prevent the re-upload of removed content. These two provisions would have put effective barriers in the way of terrorists attempting to misuse internet and social media services. On the 17th of April 2019, the European Parliament, in its final session before the European elections, approved a position on the proposed regulation in the first reading with some amendments. While the one-hour deadline for removal was maintained, measures by platform providers to prevent the re-upload of removed terrorist content are no longer obligatory. This change reflected intense lobbying efforts by tech companies. The industry still seems hesitant to accept anything akin to compliance standards, something that has been employed for decades to prevent the misuse of services by terrorist organisations in other sectors, such as the financial industry.
This means that each re-upload of the same terror-related content may have to be notified to platform providers again, because measures by platform providers against the re-uploading of terrorist content are merely voluntary. Despite this weakness, the parliamentary vote was a significant step toward tackling the misuse of internet and social media services by terrorist groups within the European Union. The vote ensured that the legislative process can continue seamlessly after the upcoming European elections with the approval process in the European Council. Following the elections to the European Parliament on 23-26 May 2019, the proposed regulation may then enter further negotiations within the co-decision procedure between the European Commission, the Parliament, and the Council. Given that the tech industry will likely seek to further water down the provisions of the regulation, stakeholders across the European Union should engage to ensure that the new measures remain an effective counter-terrorism tool.

Terrorist financing within cyber space

The financing of terrorism within cyber space and through the use of new technologies, including crypto-currencies, is a growing concern. Significant amounts of money obtained by ISIL in Iraq and Syria over the last few years remain missing, and according to estimates by the United Nations, ISIL retains access to between 50 and 300 million US dollars. ISIL needs to safeguard these funds now that it has lost control over physical territory, and consequently the ability to generate significant amounts of income, in Iraq and Syria, as well as in Libya with its defeat in the city of Sirte in 2016 and in the Philippines with its ouster from the city of Marawi in 2017. The recent attacks in Sri Lanka demonstrated the new paradigm of ISIL’s global network, in which local groups, with the support of the wider ISIL network, will conduct and attempt to conduct international terror operations.
Continued and secured access to funds, and the ability to disseminate them, will remain an essential element of ISIL’s global strategy. The first arrests and convictions in the United States demonstrate a sustained interest among ISIL supporters in using crypto-currencies. Currently, the threat is in its initial stage of development. Mainly due to the difficulty of converting crypto-currencies into fiat currency in conflict zones, terror groups may choose not to use crypto-currencies as a financing mechanism in these areas. However, they can be attractive as a value storage structure, in particular for larger amounts. Crypto-currencies, due to their global accessibility, their enhanced privacy measures and lack of transparency, combined with a general lack of counter-terrorism financing regulations for this technology, offer the group the opportunity to store and retain significant amounts of funds in an environment that remains fairly removed from the ability of regulators to threaten these funds and disrupt access to them with counter measures. CEP has recognized the challenge posed by crypto-currencies, is engaged in researching and analyzing this threat, and aims to develop effective and efficient regulatory proposals for consideration by political decision makers. Dr. Hans-Jakob Schindler, Senior Director, Counter Extremism Project
<urn:uuid:46919c47-409c-472f-9b2f-406fc5e745c1>
CC-MAIN-2021-43
https://www.globalriskaffairs.com/2019/05/terrorism-and-the-tech-industry/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583408.93/warc/CC-MAIN-20211016013436-20211016043436-00630.warc.gz
en
0.91188
2,211
2.734375
3
|Common Name:|Singing Parrot|
|Scientific Name:|Geoffroyus heteroclitus|
|Size:|10 inches and 6.8 oz|
|Habitat:|Subtropical and tropical lowland forests|
|Country of Origin:|The Bismarck Archipelago and Bougainville Island in Papua New Guinea|

Singing Parrot Information

The Singing Parrot, or Song Parrot, is a parrot native to tropical and subtropical areas of the Bismarck Archipelago and certain parts of Papua New Guinea. These birds prefer to live in moist forests with a lot of rainfall. As such, they are best kept in environments that are close to tropical in terms of temperature and humidity levels. Singing Parrots are small parrots that rarely grow bigger than 10 inches, and size alone distinguishes them from many other parrots in the wild. Compared to its macaw cousins, the Singing Parrot is quite small and barely grows longer than 10 inches; in that sense, it is considered a mini parrot. The color of a Singing Parrot is predominantly green, but in different shades. Its back and the outer portion of its wings usually carry a standard shade of green, while its belly is usually a lighter shade, though it may have a bluish-gray chest area. The body and the head are separated by a bluish-gray band behind the back of the neck; this band gradually fades into green as it merges with the body. The underside of the wing departs from the usual green coloration, looking bluish and sometimes close to violet. This is more evident when the bird flaps its wings or if you happen to be under one in flight. An amazing trait of the Singing Parrot is that its color scheme changes depending on sex.
Males have the usual green body and bluish underside of the wing, but the head, separated by the bluish-gray band, is light yellow. The underside of the tail may also be yellowish. A female Singing Parrot is similar to a male in almost every respect; however, females do not have the bluish-gray collar or mantle that males have. Instead, the greenish body transitions immediately into a bluish or grayish-brown head that looks totally different from the male’s light yellow head. Females also have cheeks that are grayish and sometimes look olive. Young Singing Parrots also have their own distinct appearance. They look more like adult females than adult males, but a few features make them unique. For one, young Singing Parrots have crowns and napes that are bluish-gray, like adult males. Another clear difference is that young Singing Parrots have lower beaks that look brownish or grayish, with hints of yellow at the base, in contrast to the predominantly yellow beaks of adult males and females. The Singing Parrot is aptly named because of its tendency to “sing.” However, this is not the kind of singing you might be used to in conventional music; the Singing Parrot is known for its melodic sounds and delightfully harmonious chirps. The Singing Parrot is very vocal regardless of where it is. Whether it is perched on top of a branch or in its cage, or is in flight, it will most likely chirp and make the musical noises that made it so popular as a pet among all kinds of bird lovers.
Surprisingly, the Singing Parrot can even make such noises at night, when it is supposed to be less active. One thing that should be noted about the Singing Parrot’s calls is that its songs may not be suitable for everyone. Some would say that this bird is so raucous and loud that it can get annoying or irritating. That is because its calls are mostly high-pitched and consist of only two notes, both high enough to cause a bit of irritation to the most sensitive ears. As such, those who do not like the noise emitted by this loud parrot should stay away from the Singing Parrot. But if you do not mind the noise it tends to make, or if you actually like it, this bird might be suitable for you. In terms of personality, the Singing Parrot is not as friendly and affectionate as some other species of parrots. It is not the smartest type of parrot, but it is still pretty smart compared to many other birds. It is also not the most playful parrot, though it is friendly enough to want to play with you from time to time, depending on the situation. When owning a Singing Parrot, the most important thing to note is that they are easily stressed. Singing Parrots are pretty shy, especially if they are wild-caught or still adjusting to a new environment. As mentioned, they are not particularly friendly and will easily get stressed if forced to socialize, given how shy they tend to be. As a result, many Singing Parrots die of unknown causes that are believed to be due to the stress they undergo when adjusting to life in captivity or when forced to interact with humans and other animals in their new environment.
The sad part about the Singing Parrot is that it is believed to be not as hardy and resilient as other types of parrots. They do not live as long as some of their cousins, though there is no certainty as to how long Singing Parrots actually live. Nevertheless, the general belief is that these parrots have a relatively short lifespan and are not expected to live longer than perhaps a decade. There are many possible reasons why Singing Parrots do not live very long. The most noteworthy is their susceptibility to stress. In captivity, Singing Parrots tend to struggle to adjust to an entirely new environment, especially when they are wild-caught and the climate conditions are not similar to what they are used to in the wild. This stresses them out and causes health complications that most vets find difficult to remedy. In that regard, it is better to go for a captive-bred Singing Parrot, which is more used to life in captivity but still just as susceptible to stress as a wild-caught one. Meanwhile, those found in the wild naturally do not live very long because their small size makes them easy prey for predators such as snakes, large lizards, and feral creatures. In the wild, Singing Parrots usually breed during the wet season in tropical climates and regions. This usually starts sometime in October and may go on for about three more months. The female Singing Parrot usually nests in holes in the dead limbs of different types of trees found in their natural habitat. They may also use small branches of dead trees for nesting. Singing Parrots are not the most physically active birds, whether in the wild or in captivity. They are mostly found perched on top of a branch or in an elevated spot in their cage.
They may fly around from time to time, but they usually like staying still and are not always on the go, unlike some other parrot species. What the Singing Parrot lacks in physical activity, it more than makes up for with the noise it makes. This bird is so loud and noisy that it does not matter whether or not you like the sounds it emits: Singing Parrots will let you know they are around by producing high-pitched noises that may or may not be appealing to the ears, depending on the person. This is where the “singing” in its name comes from, as the Singing Parrot will find every opportunity it can get to show off its vocal cords. Whether in place or in flight, the Singing Parrot will sing. At times, this bird may even get annoying, not only because of the high-pitched noise it makes but also because it may try to sing in the evening. As for their singing, these parrots can easily pick up different tunes and even play them back once they get used to a song. Some animal trainers love Singing Parrots because of how intelligently they can pick up certain melodies and then play them back. Singing Parrots are well-behaved birds in captivity, except for their singing. While some may mistake their lack of activity for their nature, the most common reason they tend to be subdued in captivity is that their health may not be holding up well due to the stress they are under. Singing Parrots easily get stressed in captivity, which means potential pet owners should be wary of where they get their parrots and of how they plan to take care of these birds. While birds are known to be generally omnivorous eaters, the Singing Parrot is primarily a herbivore and will most likely prefer to eat anything that is not meat-based. That means its diet regularly consists of different types of fruits, vegetables, grains, seeds, and nuts found in the wild.
In their natural habitat, Singing Parrots love to eat fruits. Since these birds are usually perched high up in trees, they will most likely eat any kind of tropical fruit they can get their beaks on. That means they are very fond of any kind of banana and will not mind eating mangoes, berries, apples, oranges, and pears. They will also feast on the seeds of the fruits they eat and may even eat greens from time to time. When feeding your Singing Parrot in captivity, make sure you give it a very healthy diet to make up for the stress it may be undergoing. In some cases, a good diet can help get it through stressful periods. Fruits such as bananas, mangoes, apples, pears, and oranges should regularly be a big part of their diet, but you can also feed them other types of food such as oats. Some pet owners love to make their own porridge out of oats, multigrain flakes, honey, pollen, and fruits so that their parrots get a complete meal that has all of the essential vitamins and nutrients they need. In some cases, biscuits or crackers softened in milk may also be good for Singing Parrots. Like any pet bird or parrot, it is essential to keep a clean bowl or dish of water inside the Singing Parrot’s cage, as getting a good drink of water to rehydrate itself can help it cope with stress. Also, those vocal cords need to be rehydrated from time to time. Singing Parrots are not too particular when it comes to the type of cage you provide them. A standard birdcage suited to a parrot of its size may be good enough for this type of bird. You may opt to use a cage that is about 2.5 feet long and 1.5 feet wide, as the Singing Parrot is not particularly active and will most likely love to spend its time perched instead of flying around. Decorations are not essential. Just provide your Singing Parrot with a good spot to perch, and it will be just fine. These birds are extremely shy and are not too fond of playing with toys, unlike their macaw cousins.
As such, providing them with chew toys or puzzle toys is up to your discretion, but you will find that giving them such toys will not make much of a difference, since the Singing Parrot is not particularly playful. Availability – Where to Get One? While Singing Parrots are among the least-concern types of animals and are fairly common in the wild, it may take some deep digging to find one available for sale, either online or in your locality. These birds are not too common as pets in many households, particularly because of how noisy they can get. As such, they are not popular for breeders to try to breed for profit. Even so, you may be able to find some in parrot specialty stores, or you can ask for references from bird stores that may know someone who breeds these types of parrots. How to Care for a Singing Parrot? Singing Parrots are not the easiest pet birds to take care of. They are very susceptible to stress and easily get sick or weak for reasons that are quite difficult to trace. In that sense, it is important to provide them with a good home and environment that will not easily stress them out. Also, do not forget to give your Singing Parrot the best kind of diet, because this can help keep its health up when it is undergoing periods of stress. As much as possible, try not to forcibly handle your Singing Parrot or force it to play with you. These birds are very shy and are not the friendliest parrots around. If you force them to socialize with you or with other animals, it can only put them under a lot of stress, which can lead to poor health. Multivitamins are quite important to your Singing Parrot’s overall health because such supplements help keep its health up. A stressed Singing Parrot may be able to cope with stress better if you provide it with a good and balanced diet that is complete in terms of nutrition and vitamins.
Most people mix multivitamin supplements into the Singing Parrot’s porridge. Does the Singing Parrot talk? The Singing Parrot may or may not talk, but one thing that is sure is that it is good at mimicking all sorts of musical sounds. Can the Singing Parrot eat meat? Singing Parrots prefer a diet based mostly on fruits and grains and are not known to enjoy eating meat-based food. Are Singing Parrots good pets to have? Generally, Singing Parrots are difficult to take care of because of how susceptible they are to stress. For most people, Singing Parrots are not the best pets because of how noisy they can get at night. Do Singing Parrots love to play? As shy as they are, Singing Parrots are not the most playful types of birds, but there are instances in which such parrots may, in fact, play with their owners once they are used to their environment.
With over 2,500 species, it can be challenging to identify a palm tree. So, it is not surprising that I get a lot of emails asking for help with palm tree identification. Since different palms require different care, it is important to know the species. Some palms like a lot of sun and a warm climate; others prefer shade and can tolerate cold temperatures. In my opinion, cold tolerance is one of the most important characteristics of a palm, since it can make a huge difference in where it can be grown. When buying a palm from a home improvement store or a garden center, keep in mind that it can be mislabeled. That is one of the reasons I always recommend buying palms from a reputable nursery. In this post, I want to highlight the key characteristics that I use to distinguish palm species. Palms can be separated by leaf shape, trunk type, flowers, fruits and size. How to Identify a Palm Tree Step-By-Step Step 1: Leaf Type. When I am trying to identify a palm tree, I start with the leaves. Are the leaves fan-shaped or feather-shaped? Another shape? Step 2: Crownshaft. Does the palm have a crownshaft (an elongated leaf base)? What color is the crownshaft? Step 3: Leaf Stems. Do the leaf stems have teeth? Are they smooth? What color are they? Step 4: Trunk Type. How many trunks? Is the trunk smooth or covered with old leaf bases? Does it have a unique texture? Is it swollen? Step 5: Fruits and Flowers. What color are the flowers? What type of fruit does it produce? What is the fruit color? Step 6: Palm Size. How tall is the palm? Palm Identification By Leaf Type Palm leaves consist of three parts: the leaf base, the leaf stem, and the leaf itself. The leaf base is the part where the stem attaches to the trunk. On many palms the leaf base remains on the trunk even after the frond drops off, creating a distinctive pattern. But the easiest place to start is the leaves. Palms can be categorized into three main leaf types: pinnate (feather-shaped), palmate (fan-shaped) and entire (simple).
Most palms fall into the pinnate or palmate categories; the simple leaf type is rare. Another two subcategories worth mentioning are costapalmate and bi-pinnate. - Pinnate Leaf (Feather-shaped) - Palmate Leaf (Fan-shaped) - Costapalmate Leaf - Bi-Pinnate Leaf - Entire Leaf (Simple) 1. Pinnate Leaf (Feather-shaped) Pinnate (feather-like) fronds consist of separate long leaflets that grow from the central stalk. Some of the popular palm trees with pinnate leaves are Buccaneer Palm, Majesty Palm, Queen Palm, and Sylvester Date Palm. 2. Palmate Leaf (Fan-shaped) Fan-shaped fronds radiate from a central point along the stem like fingers on your hand. Some of the popular palms with fan-shaped leaves are Everglades Palm, European Fan Palm, Windmill Palm, and Ruffled Fan Palm. 3. Costapalmate Leaf While the costapalmate palms are a subcategory of the fan-shaped palms, they look like a cross between palmate and pinnate. They have a short midrib instead of a center point from which the leaf segments radiate. They are often twisted and folded sharply along the tip of the leaf stem. Some of the palms with costapalmate leaves are Blue Latan Palm, Red Latan Palm, Fiji Fan Palm, and Chinese Fan Palm. 4. Bi-Pinnate Leaf These palms have a secondary leaf stem attached to the primary one, and the leaflets are connected to the secondary stem at regular intervals. This leaf type is very rare, occurring only in a single type of palm called the Fishtail Palm (Caryota). 5. Entire Leaf Also very rare is the entire (simple) leaf type, which is composed of a single leaflet or blade. It is not divided and does not have individual leaflets. Good examples are the Joey Palm and the Miniature Fishtail Palm. Palm Identification By Crownshaft On some palms, the leaf bases create a waxy and smooth structure called a crownshaft. The crownshaft can differ in color from the trunk, as in the case of the Lipstick Palm, which has a striking red crownshaft, or the King Palm, which has a reddish-purple crownshaft.
Palms with a crownshaft tend to be “self cleaning,” meaning the dying leaves fall to the ground without pruning. Palm Identification By Leaf Stems Leaf stems can also be an identifying factor, since they vary in color, size and formation. Some are armed with sharp teeth along the edges. Traveler’s Palm and Triangle Palm have a very unique leaf formation at the base, reminding me of a peacock’s tail. Bismarck Palm has a very large, dramatic crown with distinctive silver-green leaf stems and fronds. Taraw Palm, Bailey’s Copernicia Palm, and Date Palm all have stems with sharp spines along the margins. Palm Identification By Trunk Type With over 2,500 palm species, you can find any trunk imaginable. Trunks differ in size, color, shape, number, texture and other characteristics. Let’s start with the number of trunks. Palms can have three different trunk types: solitary trunk, multi-trunk or even no trunk. Solitary Trunk Palms Most palms have a single trunk. Some of the most popular palms with a single trunk are Foxtail Palm, Blue Hesper Palm, Princess Palm, and Bismarck Palm. Multi-Trunk Palms Usually, multi-trunk palms are shorter and grow slower. Good examples are the Seashore Palm and Lady Palm. Both are slow-growing, shrubby-looking plants. They are great for creating a privacy wall and can also be used for foundation plantings and in outdoor tubs and planters. Acai Palm and Areca Palm grow faster, but both are clustering palms with straight, clean trunks. No Trunk Palms Some palms develop a small trunk only after many, many years or have no trunk at all, like Cat Palm and Needle Palm. Palms With Self Cleaning Trunks Furthermore, trunk surfaces also vary. Some have a smooth surface covered with scars from old leaves; others have a rough surface covered with old leaf bases in a crisscross pattern. Good examples of palms with “self cleaning” trunks are Carpentaria Palm and Alexander Palm. “Self cleaning” means the leaves fall off without pruning.
Old Leaf Base Trunks After leaves die, they drop off, sometimes leaving a leaf base. Some of the palms with trunks covered with old leaf bases are Bailey Copernicia Palm, Cabbage Palm, Sylvester Date Palm, Caranday Palm, and Date Palm. These trunks look especially attractive when regularly pruned. Different Trunk Textures The texture of the trunk also varies. There are palms that have a fiber covering, peg-like leaf bases or even spines. Old Man Palm has a trunk covered with fibers, hence its common name. The thickness of the trunk can also differ. Bottle Palm, for example, has a smooth, bottle-shaped trunk that is wider at the bottom. Cuban Belly Palm’s trunk is thin at the base and swollen in the middle, hence its common name. Spindle Palm has a ridged trunk that is narrow at the base and widens in the middle, resembling a spindle. Ponytail Palm has a trunk swollen at the base that stores water, making it highly drought tolerant. Palm Identification By Flowers and Fruits Palms have very insignificant flowers that range in color from yellowish-green to light green. Generally, they grow in clusters on long stems among the canopy or from below the crownshaft. The flowers are usually followed by fruits that come as berries or nuts. Most of them are not edible. Fruits can come in any color, which is also a good identification characteristic. Palms With Edible Fruits The six palms below are the most popular palms with edible fruits. Coconut Palm is one of the most recognizable palms around the world because of its fruits, coconuts. Date Palm is widely known for its sweet fruits, dates, which have high nutritional value and are an important source of food in some countries. Acai Palm produces black-purple, tasty fruits that are packed full of antioxidants, amino acids and essential omegas. Jelly Palm produces orange fruits that have a sweet pineapple/banana-like flavor and are great for making jelly.
Not as popular as other palms, Saw Palmetto produces black-blue berries that are used to make medicine for kidney, prostate and urinary problems. Guadalupe Palm produces small black fruits with a taste resembling dates. Palm Identification By Size Palms come in different sizes. Some small palms, also called dwarf palms, only reach about 10 ft. Others can get very tall, up to 100 ft. While small palms are usually slow growing, tall palms grow at a faster rate. The older the palm, the easier it is to identify. Young trees that are only a few months old have not developed their distinct features and all look the same. As you can see, each palm has its own features that make it unique. Here are some of the characteristics of the popular palms I’ve mentioned above: Areca Palm (Chrysalidocarpus lutescens) – Multi-trunk palm that grows in clusters, forming thick clumps. It has yellow-green, feather-like leaves. Produces bright yellow flowers that are followed by yellow-orange berries. It can grow up to 20 ft tall. Bottle Palm (Hyophorbe lagenicaulis) – The main characteristic of this tree is its bottle-shaped single trunk. It has a bright green crownshaft with feather-like green leaves. Produces small white flowers that are followed by black berries. This is a dwarf palm that only grows up to 15 ft tall. Coconut Palm (Cocos nucifera) – This palm, of course, is best known for its delicious coconuts. It has a single smooth trunk and feather-like fronds. Produces sweet-scented yellow flowers that are followed by brown coconuts. Can grow up to 100 ft. Date Palm (Phoenix dactylifera) – Single trunk covered with an ornamental diamond-shaped pattern of leaf scars. Feather-like, dark green fronds. Produces white or yellow flowers that are followed by edible fruits called dates. This is a large tree that can grow up to 50 ft tall. Queen Palm (Syagrus romanzoffiana) – Smooth single trunk, feather-like fronds. Produces creamy flowers that are followed by orange fruits.
This is a very inexpensive tree that is widely used in tropical climates. It can quickly grow up to 40 ft. Old Man Palm (Coccothrinax crinita) – This is a rare and very expensive tree known for its fiber-covered trunk, which looks like the beard of an old man. It has stiff, fan-shaped fronds. Produces yellow flowers that are followed by dark purple fruits. Grows up to 20 ft tall. With so many palm tree types, identifying a palm can be challenging. Start with the fronds, which should be the easiest part. Next, look at the trunk and the stems. Then, look at the fruit it is producing. Finally, after you narrow down the parameters, see if you can find similar trees around. Most likely, the palm you are trying to identify is common in your area.
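The step-by-step key above boils down to matching the traits you observe against known species profiles and keeping only the species that fit every observation. As an illustration only, here is a tiny Python sketch of that process; the species names and trait values come from this article, but the trait table and the `narrow_down` helper are hypothetical, and a real identification key would need many more species and characteristics.

```python
# Hypothetical sketch of the identification key described above.
# Trait values are taken from the species notes in this article;
# a real key would cover far more species and characteristics.

PALMS = {
    "Areca Palm":   {"leaf": "pinnate", "trunks": "multi",    "max_height_ft": 20},
    "Bottle Palm":  {"leaf": "pinnate", "trunks": "solitary", "max_height_ft": 15},
    "Coconut Palm": {"leaf": "pinnate", "trunks": "solitary", "max_height_ft": 100},
    "Old Man Palm": {"leaf": "palmate", "trunks": "solitary", "max_height_ft": 20},
}

def narrow_down(**observed):
    """Return species whose recorded traits match every observation."""
    return [
        name for name, traits in PALMS.items()
        if all(traits.get(key) == value for key, value in observed.items())
    ]

# A pinnate, single-trunk dwarf palm narrows down to the Bottle Palm.
print(narrow_down(leaf="pinnate", trunks="solitary", max_height_ft=15))
```

Each step of the key (leaf type, trunk count, size, and so on) simply eliminates candidates, which is why the article recommends starting with the fronds: they split the candidate list fastest.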
Lower Brewers Lock 45 At Brewer's Lower Mills (called "New Mills" by Samuel Clowes in 1824) the Cataraqui River fell 13.5 feet (4.1 m) over a series of rapids. About a mile and a half (2.4 km) below Brewer's Lower Mills were Billidore's Rifts (now the area of the upper end of River Styx). These "rifts" were a set of small rapids, dropping four feet (1.2 m) over the course of a mile (1.6 km). About 5 miles (8 km) below these, about a mile and a half (2.4 km) upstream of Kingston Mills, were Jack's Rifts, a gentle series of small rapids, about 500 feet (152 m) long and 1 foot 6 inches (0.5 m) deep, at the outlet of a marshy area (now the lower end of River Styx). It was noted that much of the Cataraqui River, outside of the rifts, was about 4 feet (1.2 m) deep, almost navigable water depth. The plan submitted by Samuel Clowes in 1824 consisted of single locks at Brewer's Lower Mills, Billidore's Rifts and Jack's Rifts (three locks in total). In between these locks, the line of the canal would be straightened, cutting off all the curves of the natural route of the Cataraqui River. John Burrows, in the diary of his second survey of the Rideau route in July 1827, commenting on Clowes' plan, wrote: "… it was with a feeling of regret that [we] found Mr. Clows has laid out and done considerable work in forming the line of the canal from Kingston Mills to Brewers Mills in direct hostility to the kindness of Divine Nature, who has formed the canal, with but few exceptions for the whole of that distance, in gentle curves, as Hogarth has it 'lines of beauty'. This to the modern engineer is gall and bitterness, apart from the immense expense incurred. Nature may be improved by Science, but never altered." The original plan was similar to that put forward by Clowes, with the exception that more of the Cataraqui channel was to be used, with only a few sharp bends to be excavated.
At Brewer's Lower Mills, a canal cut was to be excavated around the mill works so as to leave them undisturbed. A lock of 10 feet 7 inches (3.2 m) depth was proposed. The mill dam, which held back 11 feet (3.4 m) of water, was to be maintained. At Billidore's Rifts a lock with a lift of 5 feet (1.5 m) and wing walls to block the channel (instead of a dam) was proposed. Exactly the same was proposed for Jack's Rifts. Before the contracts were released, plans had changed. It was realized that extensive rock work would be required at Kingston Mills. By decided to add an extra lock and increase the height of the dam at Kingston Mills so that he could build his locks on top of the rock, rather than having to excavate down through it. This proposal put enough head of water over Jack's Rifts and Billidore's Rifts as to preclude the need for dams and locks at these locations. Building the Locks The first contractor was Samuel Clowes, the civil engineer and surveyor for the Macaulay Commission. He was awarded the contract for the construction of the dam and locks at Brewer’s Lower Mills, and for excavation work to straighten and widen the channel for six miles (10 km) downstream. Clowes died in September 1828 and the works at Lower Brewer’s Mills were taken over by Robert Drummond, the contractor at Kingston Mills. It was originally planned to keep the existing mill structures intact by using an artificial canal cut to bypass the mill buildings. The first problem was not with nature but with the operator of the mills, a Colonel McLean who apparently wasn't too keen on the canal being built near his works. 
In a letter to Colonel By dated July 25, 1827, Clowes wrote: "I have nothing but trouble, Colonel McLean is doing all he can to put a stop to the Canal, he has overflowed the whole work 4 times and me and eighty men have had to stand for two or three days' each time till the water was drained off, and there it leaves the land all over mud; and makes it so sickly it is not possible to keep labourers on the ground. This is just the situation I am in. …" By ordered guards to be posted to prevent McLean from flooding the works again. It was also discovered that the original survey between Upper and Lower Brewer's Mills had an error of 1 foot, 7 inches (0.5 m). In order to accommodate this error, and also to save the expense involved in deepening the channel between Lower and Upper Brewers, By decided to raise the elevation of the lock by 3 feet 7 inches (1.1 m). In the end, By made the lock 13 feet, 2 inches (4.0 m) in elevation, sufficient to throw a 7 foot (2.1 m) depth of water into the lower lock at Upper Brewers. By eventually purchased the mill buildings so that he could create his works through the area with no conflicts. He replaced the original mill dam with a 13 foot (4.0 m) high timber waste weir, located slightly downstream on a ridge of bedrock. By noted that the weir was constructed such that the water above the lock could be drawn down to below the level of the upper sill, so that future repairs could more easily be made. It was originally proposed to build the lock on inverted stone arches, but the quality of the clay foundation and the cost savings entailed made By choose to put in a wood floor. He noted that wood, when kept under water, was as durable as stone. The lock was built somewhat in haste because of the severity of the malaria that would strike down much of the workforce every summer. The southern Rideau was particularly hard hit by seasonal outbreaks of malaria.
As much work as possible was done in the winter, and during summer some corners were cut to speed up construction. Brewer's Lower Mill; Masonry of the Lock nearly completed, Excavation for the Canal in progress, 1831-32 (Thomas Burrowes, watercolour, Archives of Ontario) The lock, constructed "in the dry" in the canal cut, is almost complete. A coffer dam at the lower end keeps the water from the original channel of the Cataraqui River out of the works. At the head of the former channel a waste weir has been constructed to control the level of water above the lock. On the left side, labourers are hand dredging the riverbank to provide a direct navigation entrance to the lock. The stones for the lock are being lifted into place using a shear-leg system (two timbers supported by ropes), the stones lifted using rock tongs at the end of a block-and-tackle pulley. The dimples you see today in original stones are the chiselled-out spots for the tongs to grab onto the stone. The Cataraqui River in the pre-canal era was a meandering creek. The present straight navigation channel (marked with buoys) through the upper end of Colonel By Lake and River Styx represents a 200 foot wide swath of forest cut down prior to the area being flooded. The satellite photo below of the upper end of River Styx shows this quite clearly: This 2005 satellite image from Google Earth shows part of the meandering Cataraqui River now sitting under 7 to 8 feet of water, the flooding caused by the dam and berms at Kingston Mills. The original Cataraqui River was about 60 feet wide and only a few feet deep. The cut channel through the now drowned forest is 200 feet wide. The view from the water is of red and green navigation buoys marking the cleared channel. Through the Years This lock caused many problems due to its poor foundation. It leaked badly and required constant grouting and pointing. It was recommended as early as 1840 that the lock be completely rebuilt.
However, the Board of Works, in its attempts at economy, did not authorize a complete rebuild. In 1860, some reconstruction was started with the rebuilding of one of the lower wing walls. In 1861, the east wall, which was being supported by iron straps bolted to posts driven into the earth embankment, collapsed. It was reported that the only thing holding up the lock wall was the wooden lock gate, which was also damaged. More problems plagued the lock. In 1874, the west (northwest) chamber wall was rebuilt. In 1905-06 and 1906-07 the lock was again rebuilt. Superintendent Phillips stated, "This lock has always given trouble, as it is located in the wrong place, and is built on cross timbers bedded into a very poor foundation of soft clay and sand." Repairs to the lock were a constant necessity through the years. This led finally to a total reconstruction of the lock in 1977 using the original stone. In the early 1840s a defensible stone lockmaster's house, 27 feet (8.2 m) on a side, was built here. In about 1899, a second framed wooden storey was added to the building. In 1861, a settler named James C. Foster built a grist mill at this site. He later added a woollen mill and store. A storage elevator was built beside the grist mill in about 1865. A timber swing bridge was built across the lock in 1872. It was an unequal-arm, center-bearing timber swing bridge (sometimes erroneously called a kingpost truss swing bridge). It has been repaired and replaced over the years while maintaining the same design. It is one of only four such timber swing bridges remaining on the Rideau Canal (the others are at Brass Point, Kilmarnock and Nicholsons). The name Washburn was coined by the first postmaster (the lockmaster, John McGillivray) in about 1873. The name means "weirstream," combining the Scottish word "burn," meaning stream, with "wash" from the old word "bywash," meaning water flowing from a waste weir.
The names Washburn and Lower Brewers Mills have been used interchangeably since that time. The clapboard house at this site was built in about 1930. In 1942 a hydroelectric station was built to use the surplus waste weir water. The old grist mill was torn down and the hydro station built on its foundation. The milling machinery was moved into the old storage elevator and operated, serving local needs, up to the 1960s. The hydroelectric station operated until 1970. Both the hydro station and the old storage elevator are still standing. Aerial View of Lower Brewers (photo by Simon Lunn, 1998) This photo shows the configuration of the lock placed in an artificial channel, with the weir behind it, blocking the original route of the Cataraqui River. The old power station sits below the weir and the defensible lockmaster's house sits beside the lock. The Lockmasters to 2000 The first lockmaster, recommended by Colonel By, was Thomas Green, a corporal in the 7th Company, Royal Sappers and Miners. He seems to have been replaced in about 1834-45 by James Callaway, also of the 7th Company, Royal Sappers and Miners. For a brief period, 1844 to mid-1846, Thomas Richey of First Rapids (Poonamalie) was lockmaster. He was dismissed (unfit for duty) in 1846 and replaced by Richard Carey, who was himself replaced in 1851 by William Beal. In 1855 Beal was succeeded by William Robinson, who was transferred to Kingston Mills in 1856. His successor was John McGillivray, who retired in 1882 and was replaced by his son Henry McGillivray. Henry died in 1891 and was replaced by William Glenn, a lock labourer at the station. He was discharged in 1896 for political patronage reasons but was reinstated in 1897 and served until his death in 1902. Henry McBroom, recommended by a local politician, became lockmaster in 1903 and served until his retirement in 1933. He was followed by S.
Woodstock from 1934 to 1937; unknown from 1938 to 1943; Frasier Ball from 1943 to 1944; Percy Gilbert from 1944 to 1954; H.A. Cooper from 1954 to 1956; unknown from 1957 to 1959; J.L. Jones from 1960 to 1965; Byron L. Nixon from 1964 to 1973; Terry Carlo from 1973 to 1979; Albert Mills from 1980 to ?; Doug Langille, acting in the mid 1980s; and P.J. O'Meara from 1986 to 2000.
A meta-analysis of 234 retrospective studies from 1986 to 2007 revealed that 6.3 out of 10 employees working in front of a computer screen experience ocular-visual problems (versus 4 out of 10 among other employees), the median age being 57 years. At DuPont Inc. in the United States, of 535 accidents counted during two years (1999 and 2000), eighty-one percent were related directly or indirectly to CTS (carpal tunnel syndrome). The Northern California Kaiser Medical Care Program conducted a study of 1583 pregnant women in 1988. The study revealed a significant number of obstetric problems in women who worked over 20 hours per week on a computer during the first three months of pregnancy. Primordial prevention of the problems Numerous scientific studies have shown that when certain ergonomic constraints are not met, the computer may become hazardous to our health. This danger increases significantly when we spend more than four hours per day watching our screen. Every part of the body is exposed to the risks of the computer. You should know what the related conditions are, what causes them, how to protect yourself by using ergonomic equipment, exercises and changes to your work pattern, and what laws exist to protect employees. Muscular tension When you are browsing the Internet or playing your favorite sport, if you use the same muscles repeatedly and inappropriately, you will experience a problem: muscle pain, eyestrain or another ailment, either now or later. Good posture, good habits and a proper work environment can help you effectively minimize these, though they cannot eliminate the risks; they are occupational hazards. Your body will never adapt to your desktop or laptop computer use; rather, the contrary is true. The mission of several institutes is to prevent muscle imbalances and provide relief through massage therapy and related disciplines.
Their action is to manually perform acts stimulating the body's self-correcting mechanisms, preventing tissue degradation and restoring function. The practitioner's hands mobilize and stimulate different tissues depending on the type of aggression. This technique is applicable at all ages, whether for therapeutic purposes or not. Psychology: The Internet Addiction Comparing Internet addiction with alcoholism, drug addiction and pathological gambling (addictions that create familial disharmony, broken marriages, crumbling careers, and young people turning to professional crime), some research suggests that six or even ten percent of Internet users may be seriously affected. Internet addiction as seen by a psychologist Internet addiction is a relatively new phenomenon. An individual who faces a problem of addiction is an individual who suffers from obsessive-compulsive disorder. What is obsessive-compulsive disorder? When you go to bed at night, you may forget whether you locked the door or not. It is natural to get right out of bed and check once. Now think of someone who periodically gets up (say, every fifteen minutes) to check the lock. He understands it is wrong (insight is present, and that is why he is not considered "mad," in common people's terms). You might have recalled some real person from your own life who practices this type of repetitive action. The most common is washing to get rid of dirt. Sites on the Internet about addiction On the net you can find the classification criteria used to make a differential diagnosis of this disease, critical research conducted by Kimberly Young, and statistics that show the evolving phenomenon of Internet addiction.
Most common practices are (data presented for one hundred twenty-five (125) IT professionals, from a study conducted by the All India Institute of Medical Science; the paper is yet to be published, so the SD, standard error of the mean and P value are not given):
- Checking e-mail unnecessarily more than five times a day. The compulsion increases with "no new e-mail." "In our study, 5 males checked up to 28 times per day on average, when emails to his account were blocked. Finally, they started to check Junk box, deleted items. After 65 days of study, 70% was opening typical spam emails with potential risks, 5% even stopped their antivirus software as the programs were disturbing them, as per their version."
- Surfing social websites, spending more than two hours per day; chatting with friends who could easily be talked to over the phone or in person.
- Surfing adult websites, even websites loaded with known risks.
- Solitary surfing (the affected user feels disturbed when someone else is in the room where the computer is while he is surfing).
Satisfaction or compulsion (how to stop the addiction?) Simply, if you feel you are getting addicted, go on a small vacation without any device that allows Internet communication. If that does not control the behavior, you should consult a psychiatrist. Living with your computer This new medium responds to a thirst for communication that challenges all earlier systems of information transmission. However, the dream of direct expression through a technology that can make everyone an expert leaves many people very frustrated. Other psychological problems include: - Increased risk of Alzheimer disease - Frank psychosis (schizophrenia) Ergonomics and computers Problems in the workplace: – 43% of workstations cause postural constraints. – 37% have a screen brightness set too high or too low. – 24% have an inadequate seat: typically small or slender. – 22% do not have a proper work surface. – 21% of screens have glare.
– 9% have a screen too close to the eyes.

Is ergonomic equipment good for health?

Hardware can cause various physical problems, particularly due to poor posture. Some manufacturers now offer "ergonomic" hardware. With these products, the doctor may well prefer to be safe rather than sorry, starting with himself when he picks up the microphone. (From an article by Dr. Herve Cassagne, JAMA.)

Back pain, vision problems: the computer is accused of all evils, often with exaggeration. Still, at home or at the office, kids and adults spend long hours in front of screens. Here are the rules for working or playing more comfortably.

The ergonomics of your equipment and its positioning

The principles concern the screen, keyboard, mouse, chair, table, documents, and phone, and the distances and angles of each with respect to the workstation, along with causes of and solutions for fatigue, whether visual, cervical, or in the shoulders and arms. (Advice given by a professor at the Cégep du Vieux Montréal.)

As personal computers were introduced into the workplace, ways were found to reduce the various problems caused by intensive use of such equipment. The two main components are posture and vision. It is better to use an ergonomic mouse and a glare-protecting computer screen. Further research and development of non-flat keyboards is needed to support a natural human hand posture.

Special note on orthopedic problems: carpal tunnel syndrome, tarsal tunnel syndrome, and low backache are the commonest problems in the IT sector, arising from wrong posture or badly designed human-interface hardware. Here is an example of how ergonomics can prevent these conditions: most cardiologists and radiologists in India perform echocardiography on a geometric average of 55 patients per day. The patients have a median age of 47 years, and ninety-seven percent do not complain of any orthopedic problem.
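For readers unfamiliar with the term, the "geometric average" used above is the n-th root of the product of the daily counts, which is less skewed by unusually busy days than the ordinary arithmetic mean. A minimal sketch (the daily patient counts here are hypothetical, purely for illustration):

```python
import math

def geometric_mean(counts):
    # n-th root of the product, computed via logarithms to avoid overflow
    return math.exp(sum(math.log(c) for c in counts) / len(counts))

# Hypothetical daily patient counts for one week
daily_patients = [50, 60, 55, 52, 58, 54, 56]
print(round(geometric_mean(daily_patients)))
```

When the daily counts are all similar, as here, the geometric mean is close to the arithmetic mean; it diverges downward when the counts vary widely.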
We came to the conclusion that the excellent design of the console (clickable double trackball, easy GUI of the operating system, hotkeys, and ease of use involving both hands on echocardiography and USG machines), along with the chair and the positioning of the patient, prevents these problems. However, almost 4-5 high-end servers can be bought for the price of one echo machine, so we cannot expect such high-end human-interface devices for general computers. Still, a trackball that keeps the wrist in a neutral position is a reasonable expectation.

Two common misconceptions:

- "Prolonged sitting can cause hernia." (Reality: it is not itself a risk factor, but it may aggravate an existing hernia.)
- "I wear glasses, so my eyes are already weak; using a computer can harm them more." (Reality: it depends on the disease. If it is simple myopia, the glasses in fact act as a barrier to the rays.)

Prolonged computer usage sickness: this typically occurs among Windows users; however, Windows users constitute the majority of computer users today, so the data may be erroneous (alpha error). Symptoms are nausea, dizziness, eye soreness, and a sense of dissociation from time and space. Another symptom can be an ill-defined headache.

"…this is the most dangerous condition as thought of today. I personally believe this is an alarm from the reticular formation that the hippocampus is being overloaded. Numerous patients in the younger age group, particularly heavy laptop or desktop users, complain of it too. We can advise our patients to stay completely away from any form of computer for at least 5 days after it has started. For now, only symptomatic treatment is given. Cohort studies are needed to establish a definite relation with cerebellar degeneration…" (Dr. Casper; November 2010, The Health, for internal circulation only).

To make it simple: the hippocampus is an important part of the brain that plays a key role in memory and other functions of the body. The reticular formation can be thought of as a processor-cum-cable connecting it to other parts.
The law in most countries specifies a typical square footage of floor area, a typical cubic footage of air space, and a maximum number of working hours for any employee; you can search for the relevant laws in your country. Remember, we work for pleasure and to remain healthy. If your job is making you ill, you either need to change your job role to suit your mental and physical health, or work to change the conditions.

Note: Please do not self-diagnose. This article is intended to give a general idea of the common health problems arising from overuse of computers. Always consult a health professional for any specific problem.